Sample records for human visual response

  1. Modulation of visually evoked movement responses in moving virtual environments.

    PubMed

    Reed-Jones, Rebecca J; Vallis, Lori Ann

    2009-01-01

    Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.

  2. Structural and functional correlates of visual field asymmetry in the human brain by diffusion kurtosis MRI and functional MRI.

    PubMed

    O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C

    2016-11-09

    Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.
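
    As context for the diffusion measure named above, mean kurtosis is typically obtained by fitting the standard DKI signal model along each gradient direction. The sketch below illustrates that per-direction fit with made-up b-values and signal values; it is not the study's data or processing pipeline.

```python
# Minimal sketch of a per-direction diffusion kurtosis fit (illustrative values,
# not the authors' pipeline). The DKI signal model along one gradient direction:
#   ln S(b) = ln S0 - b*D + (1/6) * b^2 * D^2 * K
import numpy as np

b = np.array([0.0, 1000.0, 2000.0])           # b-values in s/mm^2 (assumed protocol)
S = np.array([1.00, 0.40, 0.20])              # signal for one direction (made up)

c2, c1, c0 = np.polyfit(b, np.log(S), deg=2)  # quadratic fit of log-signal vs b
D_app = -c1                                   # apparent diffusivity along this direction
K_app = 6.0 * c2 / D_app**2                   # apparent kurtosis along this direction

# Mean kurtosis (MK) would average K_app over many gradient directions.
print(D_app, K_app)
```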

  3. The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.

    PubMed

    Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R

    2016-03-01

    Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.

  4. The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex

    PubMed Central

    Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C.; Roelfsema, Pieter R.

    2016-01-01

    Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons’ receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex. PMID:27015604

  5. Spatial updating in human parietal cortex

    NASA Technical Reports Server (NTRS)

    Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.

    2003-01-01

    Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.

  6. Structural and Functional Correlates of Visual Field Asymmetry in the Human Brain by Diffusion Kurtosis MRI and Functional MRI

    PubMed Central

    O’Connell, Caitlin; Ho, Leon C.; Murphy, Matthew C.; Conner, Ian P.; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C.

    2016-01-01

    Human visual performance has been observed to exhibit superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine if the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI (DKI), respectively in 15 healthy individuals at 3 Tesla. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In DKI, the brain regions mapping to the lower visual field exhibited higher mean kurtosis but not fractional anisotropy or mean diffusivity when compared to the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing. PMID:27631541

  7. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
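
    The model-comparison logic summarized above (does a numerosity-tuned predictor or a co-varying low-level feature better explain a voxel's responses?) can be illustrated with a toy variance-explained comparison. All values, the log-Gaussian tuning width, and the contour-length regressor below are assumptions for illustration, not the authors' pRF implementation.

```python
# Illustrative sketch: compare variance explained by a log-Gaussian numerosity
# tuning model vs. a co-varying non-numerical feature (e.g., contour length)
# for one voxel's response amplitude per stimulus.
import numpy as np

numerosity = np.array([1, 2, 3, 4, 5, 6, 7])                 # stimulus numerosities
contour_len = numerosity * 2 * np.pi * 0.75                  # co-varying low-level feature (assumed)
measured = np.array([0.2, 0.9, 1.0, 0.7, 0.4, 0.3, 0.2])     # hypothetical voxel amplitudes

def r_squared(pred, y):
    X = np.column_stack([pred, np.ones_like(pred)])          # regressor plus intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Numerosity model: Gaussian tuning in log(numerosity), preferred numerosity ~3.
tuning = np.exp(-0.5 * ((np.log(numerosity) - np.log(3)) / 0.4) ** 2)

print("numerosity model R^2:   ", r_squared(tuning, measured))
print("contour-length model R^2:", r_squared(contour_len.astype(float), measured))
```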

  8. Vestibular Activation Differentially Modulates Human Early Visual Cortex and V5/MT Excitability and Response Entropy

    PubMed Central

    Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada

    2013-01-01

    Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
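
    The response-entropy measure referred to above is Shannon entropy over the distribution of phosphene responses; a minimal sketch, with hypothetical report counts rather than data from the study:

```python
# Shannon entropy (bits) of a discrete phosphene-response distribution.
import numpy as np

def response_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                       # ignore empty response bins
    return -(p * np.log2(p)).sum()

baseline   = [12, 8, 6, 4]             # report counts per response category (assumed)
vestibular = [18, 5, 4, 3]             # counts during vestibular activation (assumed)
print(response_entropy(baseline), response_entropy(vestibular))
```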

  9. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    PubMed

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  10. Transient cardio-respiratory responses to visually induced tilt illusions

    NASA Technical Reports Server (NTRS)

    Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.

    2000-01-01

    Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.

  11. Visual Graphics for Human Rights, Social Justice, Democracy and the Public Good

    ERIC Educational Resources Information Center

    Nanackchand, Vedant; Berman, Kim

    2012-01-01

    The value of human rights in a democratic South Africa is constantly threatened and often waived for nefarious reasons. We contend that the use of visual graphics among incoming university visual art students provides a mode of engagement that helps to inculcate awareness of human rights, social responsibility, and the public good in South African…

  12. Altered Evoked Gamma-Band Responses Reveal Impaired Early Visual Processing in ADHD Children

    ERIC Educational Resources Information Center

    Lenz, Daniel; Krauel, Kerstin; Flechtner, Hans-Henning; Schadow, Jeanette; Hinrichs, Hermann; Herrmann, Christoph S.

    2010-01-01

    Neurophysiological studies yield contrary results whether attentional problems of patients with attention-deficit/hyperactivity disorder (ADHD) are related to early visual processing deficits or not. Evoked gamma-band responses (GBRs), being among the first cortical responses occurring as early as 90 ms after visual stimulation in human EEG, have…

  13. Do you see what I see? The difference between dog and human visual perception may affect the outcome of experiments.

    PubMed

    Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András

    2017-07-01

    The visual sense of dogs is in many aspects different than that of humans. Unfortunately, authors do not explicitly take into consideration dog-human differences in visual perception when designing their experiments. With an image manipulation program we altered stationary images, according to the present knowledge about dog-vision. Besides the effect of dogs' dichromatic vision, the software shows the effect of the lower visual acuity and brightness discrimination, too. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after they were altered to a setting that approximated dog vision. Participants had difficulty to find out the direction of glancing when the pictures were in dog-vision mode. Glances in dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first that show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between perceptual abilities of dogs and humans, by developing visual stimuli that fit more appropriately to dogs' visual capabilities. Copyright © 2017 Elsevier B.V. All rights reserved.
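
    A rough idea of the kind of "dog-vision" image transformation described above (reduced acuity, red-green dichromacy, coarser brightness discrimination) is sketched below. This is a crude illustrative approximation, not the image-manipulation program the authors used.

```python
# Crude approximation of dog-like vision applied to an RGB image in [0, 1].
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_vision_approx(img, blur_sigma=4.0, luminance_levels=32):
    out = img.copy()
    rg = out[..., 0:2].mean(axis=-1)                  # collapse red/green (dichromacy)
    out[..., 0] = rg
    out[..., 1] = rg
    out = gaussian_filter(out, sigma=(blur_sigma, blur_sigma, 0))   # lower acuity
    out = np.round(out * (luminance_levels - 1)) / (luminance_levels - 1)  # coarser levels
    return np.clip(out, 0.0, 1.0)

example = np.random.rand(240, 320, 3)                 # stand-in for a stimulus photo
dog_view = dog_vision_approx(example)
```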

  14. Habituation, Response to Novelty, and Dishabituation in Human Infants: Tests of a Dual-Process Theory of Visual Attention.

    ERIC Educational Resources Information Center

    Kaplan, Peter S.; Werner, John S.

    1986-01-01

    Tests infants' dual-process performance (a process mediating response decrements called habituation and a state-dependent process mediating response increments called sensitization) on visual habituation-dishabituation tasks. (HOD)

  15. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  16. Metabolic Mapping of the Brain's Response to Visual Stimulation: Studies in Humans.

    ERIC Educational Resources Information Center

    Phelps, Michael E.; Kuhl, David E.

    1981-01-01

    Studies demonstrate increasing glucose metabolic rates in human primary (PVC) and association (AVC) visual cortex as complexity of visual scenes increase. AVC increased more rapidly with scene complexity than PVC and increased local metabolic activities above control subject with eyes closed; indicates wide range and metabolic reserve of visual…

  17. Dissociable neural responses to hands and non-hand body parts in human left extrastriate visual cortex.

    PubMed

    Bracci, Stefania; Ietswaart, Magdalena; Peelen, Marius V; Cavina-Pratesi, Cristiana

    2010-06-01

    Accumulating evidence points to a map of visual regions encoding specific categories of objects. For example, a region in the human extrastriate visual cortex, the extrastriate body area (EBA), has been implicated in the visual processing of bodies and body parts. Although in the monkey, neurons selective for hands have been reported, in humans it is unclear whether areas selective for individual body parts such as the hand exist. Here, we conducted two functional MRI experiments to test for hand-preferring responses in the human extrastriate visual cortex. We found evidence for a hand-preferring region in left lateral occipitotemporal cortex in all 14 participants. This region, located in the lateral occipital sulcus, partially overlapped with left EBA, but could be functionally and anatomically dissociated from it. In experiment 2, we further investigated the functional profile of hand- and body-preferring regions by measuring responses to hands, fingers, feet, assorted body parts (arms, legs, torsos), and non-biological handlike stimuli such as robotic hands. The hand-preferring region responded most strongly to hands, followed by robotic hands, fingers, and feet, whereas its response to assorted body parts did not significantly differ from baseline. By contrast, EBA responded most strongly to body parts, followed by hands and feet, and did not significantly respond to robotic hands or fingers. Together, these results provide evidence for a representation of the hand in extrastriate visual cortex that is distinct from the representation of other body parts.

  18. Dissociable Neural Responses to Hands and Non-Hand Body Parts in Human Left Extrastriate Visual Cortex

    PubMed Central

    Ietswaart, Magdalena; Peelen, Marius V.; Cavina-Pratesi, Cristiana

    2010-01-01

    Accumulating evidence points to a map of visual regions encoding specific categories of objects. For example, a region in the human extrastriate visual cortex, the extrastriate body area (EBA), has been implicated in the visual processing of bodies and body parts. Although in the monkey, neurons selective for hands have been reported, in humans it is unclear whether areas selective for individual body parts such as the hand exist. Here, we conducted two functional MRI experiments to test for hand-preferring responses in the human extrastriate visual cortex. We found evidence for a hand-preferring region in left lateral occipitotemporal cortex in all 14 participants. This region, located in the lateral occipital sulcus, partially overlapped with left EBA, but could be functionally and anatomically dissociated from it. In experiment 2, we further investigated the functional profile of hand- and body-preferring regions by measuring responses to hands, fingers, feet, assorted body parts (arms, legs, torsos), and non-biological handlike stimuli such as robotic hands. The hand-preferring region responded most strongly to hands, followed by robotic hands, fingers, and feet, whereas its response to assorted body parts did not significantly differ from baseline. By contrast, EBA responded most strongly to body parts, followed by hands and feet, and did not significantly respond to robotic hands or fingers. Together, these results provide evidence for a representation of the hand in extrastriate visual cortex that is distinct from the representation of other body parts. PMID:20393066

  19. Aesthetic Response and Cosmic Aesthetic Distance

    NASA Astrophysics Data System (ADS)

    Madacsi, D.

    2013-04-01

    For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically-enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.

  20. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.
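
    The contrast between stimulus-specific adaptation and surprise-based enhancement can be summarized with a simple modulation index over standard and deviant responses. The sketch below uses a made-up oddball sequence and Poisson spike counts, not the recorded data.

```python
# Toy oddball sequence and a standard-vs-deviant modulation index.
import numpy as np

rng = np.random.default_rng(0)
n_trials, p_oddball = 100, 0.1
sequence = np.where(rng.random(n_trials) < p_oddball, "deviant", "standard")

# Hypothetical spike counts: adapted responses to standards, enhanced to deviants.
rates = np.where(sequence == "standard",
                 rng.poisson(10, n_trials),
                 rng.poisson(18, n_trials))

std_mean = rates[sequence == "standard"].mean()
dev_mean = rates[sequence == "deviant"].mean()
modulation_index = (dev_mean - std_mean) / (dev_mean + std_mean)
print(modulation_index)   # > 0 suggests deviant enhancement, ~0 adaptation only
```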

  1. "Visual" Cortex Responds to Spoken Language in Blind Children.

    PubMed

    Bedny, Marina; Richardson, Hilary; Saxe, Rebecca

    2015-08-19

    Plasticity in the visual cortex of blind individuals provides a rare window into the mechanisms of cortical specialization. In the absence of visual input, occipital ("visual") brain regions respond to sound and spoken language. Here, we examined the time course and developmental mechanism of this plasticity in blind children. Nineteen blind and 40 sighted children and adolescents (4-17 years old) listened to stories and two auditory control conditions (unfamiliar foreign speech, and music). We find that "visual" cortices of young blind (but not sighted) children respond to sound. Responses to nonlanguage sounds increased between the ages of 4 and 17. By contrast, occipital responses to spoken language were maximal by age 4 and were not related to Braille learning. These findings suggest that occipital plasticity for spoken language is independent of plasticity for Braille and for sound. We conclude that in the absence of visual input, spoken language colonizes the visual system during brain development. Our findings suggest that early in life, human cortex has a remarkably broad computational capacity. The same cortical tissue can take on visual perception and language functions. Studies of plasticity provide key insights into how experience shapes the human brain. The "visual" cortex of adults who are blind from birth responds to touch, sound, and spoken language. To date, all existing studies have been conducted with adults, so little is known about the developmental trajectory of plasticity. We used fMRI to study the emergence of "visual" cortex responses to sound and spoken language in blind children and adolescents. We find that "visual" cortex responses to sound increase between 4 and 17 years of age. By contrast, responses to spoken language are present by 4 years of age and are not related to Braille-learning. These findings suggest that, early in development, human cortex can take on a strikingly wide range of functions. Copyright © 2015 the authors 0270-6474/15/3511674-08$15.00/0.

  2. Organic light emitting board for dynamic interactive display

    PubMed Central

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-01-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151

  3. Organic light emitting board for dynamic interactive display

    NASA Astrophysics Data System (ADS)

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-04-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications.

  4. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
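
    Of the reconstruction approaches compared above, the minimum-norm solution has a closed form; the sketch below applies that operator to a toy leadfield, purely as an illustration of the method rather than the study's implementation.

```python
# Minimum-norm estimate: s_hat = R L^T (L R L^T + lambda^2 C)^-1 y,
# with L the leadfield, R the source covariance, C the noise covariance.
import numpy as np

def minimum_norm(L, y, noise_cov, lam=0.1, source_cov=None):
    n_sensors, n_sources = L.shape
    R = np.eye(n_sources) if source_cov is None else source_cov
    G = L @ R @ L.T + (lam ** 2) * noise_cov
    W = R @ L.T @ np.linalg.inv(G)       # inverse operator
    return W @ y                         # estimated source amplitudes

L = np.random.randn(64, 500)             # toy leadfield: 64 sensors, 500 sources
y = np.random.randn(64)                  # one time sample of MEG data (toy)
s_hat = minimum_norm(L, y, noise_cov=np.eye(64))
```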

  5. Temporal stability of visually selective responses in intracranial field potentials recorded from human occipital and temporal lobes

    PubMed Central

    Bansal, Arjun K.; Singer, Jedediah M.; Anderson, William S.; Golby, Alexandra; Madsen, Joseph R.

    2012-01-01

    The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces. PMID:22956795
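
    The across-session stability question maps naturally onto a train-on-one-session, test-on-another decoding analysis; a minimal sketch with placeholder arrays (assuming scikit-learn is available), not the authors' decoder:

```python
# Cross-session single-trial decoding: fit on day 1, evaluate on day 2.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_trials, n_features = 200, 50               # trials x IFP features (e.g., time bins)
X_day1 = np.random.randn(n_trials, n_features)
y_day1 = np.random.randint(0, 5, n_trials)   # 5 shape categories (toy labels)
X_day2 = np.random.randn(n_trials, n_features)
y_day2 = np.random.randint(0, 5, n_trials)

clf = LogisticRegression(max_iter=1000).fit(X_day1, y_day1)
print("cross-session accuracy:", clf.score(X_day2, y_day2))
```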

  6. Resolving human object recognition in space and time

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2014-01-01

    A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
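
    The representational similarity step described above amounts to correlating condensed MEG and fMRI representational dissimilarity matrices (RDMs); a minimal sketch with random placeholder RDMs (the 92-image size follows the abstract, everything else is assumed):

```python
# Correlate a time-resolved MEG RDM with an fMRI RDM (e.g., from V1 or IT).
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

n_images = 92

def random_rdm(n):
    d = np.random.rand(n, n)
    d = (d + d.T) / 2                    # symmetric
    np.fill_diagonal(d, 0)               # zero self-dissimilarity
    return d

meg_rdm_at_t = random_rdm(n_images)      # e.g., 1 - pairwise decoding accuracy at one time point
fmri_rdm_v1  = random_rdm(n_images)      # e.g., 1 - correlation of V1 voxel patterns

rho, _ = spearmanr(squareform(meg_rdm_at_t), squareform(fmri_rdm_v1))
print("MEG-fMRI representational similarity:", rho)
```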

  7. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors 0270-6474/17/376638-10$15.00/0.

  8. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  9. Visual and auditory cue integration for the generation of saccadic eye movements in monkeys and lever pressing in humans.

    PubMed

    Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M

    2012-08-01

    This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
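
    A bimodal latency distribution with an express mode can be summarized by the fraction of saccades falling in an express-latency window; the window bounds and latency samples below are illustrative assumptions, not values from the study.

```python
# Fraction of express saccades for single-cue vs. combined-cue conditions.
import numpy as np

def express_fraction(latencies_ms, window=(80, 120)):   # window is an assumption
    lat = np.asarray(latencies_ms)
    return np.mean((lat >= window[0]) & (lat <= window[1]))

visual_only = np.random.normal(150, 25, 500)             # toy unimodal latencies
audiovisual = np.concatenate([np.random.normal(100, 10, 250),
                              np.random.normal(150, 25, 250)])  # toy bimodal latencies

print(express_fraction(visual_only), express_fraction(audiovisual))
```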

  10. Paintings, photographs, and computer graphics are calculated appearances

    NASA Astrophysics Data System (ADS)

    McCann, John

    2012-03-01

    Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.

  11. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus

    PubMed Central

    Zhu, Lin L.; Beauchamp, Michael S.

    2017-01-01

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. PMID:28179553

  13. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    PubMed

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.

  14. Motion Direction Biases and Decoding in Human Visual Cortex

    PubMed Central

    Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and 3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
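
    The decoding analyses referred to above are typically multivariate classifiers applied to voxel response patterns; a minimal cross-validated sketch with random placeholder data (assuming scikit-learn), not the authors' pipeline:

```python
# Cross-validated motion-direction decoding from voxel patterns.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

n_trials, n_voxels = 160, 300
X = np.random.randn(n_trials, n_voxels)          # voxel response patterns (toy)
y = np.random.randint(0, 8, n_trials)            # 8 motion directions (toy labels)

acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("decoding accuracy:", acc.mean(), "chance:", 1 / 8)
```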

  15. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  16. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain.

    PubMed

    Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan

    2015-01-16

    We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
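
    The frequency-tagging measure described above (a high signal-to-noise response at the 1.18 Hz oddball frequency) can be computed from the EEG amplitude spectrum. The sketch below simulates a toy signal and takes SNR as the target-bin amplitude over the mean of neighbouring bins; the neighbourhood size is an assumption.

```python
# SNR at the oddball frequency of a fast periodic visual stimulation design.
import numpy as np

fs, dur = 512, 60                                    # sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
eeg = 0.5 * np.sin(2 * np.pi * 1.18 * t) + np.random.randn(t.size)   # toy signal

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

target = np.argmin(np.abs(freqs - 1.18))             # bin closest to 1.18 Hz
neighbours = np.r_[target - 12:target - 1, target + 2:target + 13]   # surrounding bins
snr = amp[target] / amp[neighbours].mean()
print("SNR at 1.18 Hz:", snr)
```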

  17. Human lateral geniculate nucleus and visual cortex respond to screen flicker.

    PubMed

    Krolak-Salmon, Pierre; Hénaff, Marie-Anne; Tallon-Baudry, Catherine; Yvert, Blaise; Guénot, Marc; Vighetto, Alain; Mauguière, François; Bertrand, Olivier

    2003-01-01

    The first electrophysiological study of the human lateral geniculate nucleus (LGN), optic radiation, striate, and extrastriate visual areas is presented in the context of presurgical evaluation of three epileptic patients (Patients 1, 2, and 3). Visual-evoked potentials to pattern reversal and face presentation were recorded with depth intracranial electrodes implanted stereotactically. For Patient 1, electrode anatomical registration, structural magnetic resonance imaging, and electrophysiological responses confirmed the location of two contacts in the geniculate body and one in the optic radiation. The first responses peaked approximately 40 milliseconds in the LGN in Patient 1 and 60 milliseconds in the V1/V2 complex in Patients 2 and 3. Moreover, steady state visual-evoked potentials evoked by the unperceived but commonly experienced video-screen flicker were recorded in the LGN, optic radiation, and V1/V2 visual areas. This study provides topographic and temporal propagation characteristics of steady state visual-evoked potentials along human visual pathways. We discuss the possible relationship between the oscillating signal recorded in subcortical and cortical areas and the electroencephalogram abnormalities observed in patients suffering from photosensitive epilepsy, particularly video-game epilepsy. The consequences of high temporal frequency visual stimuli delivered by ubiquitous video screens on epilepsy, headaches, and eyestrain must be considered.

  18. Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex

    PubMed Central

    Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank

    2013-01-01

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828

  19. Perceptual learning selectively refines orientation representations in early visual cortex.

    PubMed

    Jehee, Janneke F M; Ling, Sam; Swisher, Jascha D; van Bergen, Ruben S; Tong, Frank

    2012-11-21

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily 1 h training sessions. Training on average led to a twofold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1-V4) using signal detection measures, both before and after training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2-V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information.
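
    A minimal sketch of a signal detection-style index of orientation selectivity of the kind described above: a per-voxel d' separating trials of the trained orientation from trials of the orthogonal orientation, compared before and after training. The choice of d', the array shapes, and the simulated data are assumptions for illustration, not the study's actual analysis code.

    ```python
    import numpy as np

    def voxel_dprime(resp_trained, resp_orthogonal):
        """d' per voxel separating trials of the trained vs orthogonal orientation.
        resp_*: arrays of shape (n_trials, n_voxels) holding BOLD amplitudes."""
        m1, m2 = resp_trained.mean(axis=0), resp_orthogonal.mean(axis=0)
        s1 = resp_trained.std(axis=0, ddof=1)
        s2 = resp_orthogonal.std(axis=0, ddof=1)
        return (m1 - m2) / np.sqrt(0.5 * (s1 ** 2 + s2 ** 2))

    # Simulated pre- and post-training data: 200 voxels, 40 trials per condition
    rng = np.random.default_rng(0)
    pre  = voxel_dprime(rng.normal(0.1, 1, (40, 200)), rng.normal(0, 1, (40, 200)))
    post = voxel_dprime(rng.normal(0.3, 1, (40, 200)), rng.normal(0, 1, (40, 200)))
    print("mean d' pre:", pre.mean().round(3), " post:", post.mean().round(3))
    ```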

  20. Dogs respond appropriately to cues of humans' attentional focus.

    PubMed

    Virányi, Zsófia; Topál, József; Gácsi, Márta; Miklósi, Adám; Csányi, Vilmos

    2004-05-31

    Dogs' ability to recognise cues of human visual attention was studied in different experiments. Study 1 was designed to test the dogs' responsiveness to their owner's tape-recorded verbal commands (Down!) while the Instructor (who was the owner of the dog) was facing either the dog or a human partner or none of them, or was visually separated from the dog. Results show that dogs were more ready to follow the command if the Instructor attended them during instruction compared to situations when the Instructor faced the human partner or was out of sight of the dog. Importantly, however, dogs showed intermediate performance when the Instructor was orienting into 'empty space' during the re-played verbal commands. This suggests that dogs are able to differentiate the focus of human attention. In Study 2 the same dogs were offered the possibility to beg for food from two unfamiliar humans whose visual attention (i.e. facing the dog or turning away) was systematically varied. The dogs' preference for choosing the attentive person shows that dogs are capable of using visual cues of attention to evaluate the human actors' responsiveness to solicit food-sharing. The dogs' ability to understand the communicatory nature of the situations is discussed in terms of their social cognitive skills and unique evolutionary history.

  1. Attention Determines Contextual Enhancement versus Suppression in Human Primary Visual Cortex.

    PubMed

    Flevaris, Anastasia V; Murray, Scott O

    2015-09-02

    Neural responses in primary visual cortex (V1) depend on stimulus context in seemingly complex ways. For example, responses to an oriented stimulus can be suppressed when it is flanked by iso-oriented versus orthogonally oriented stimuli but can also be enhanced when attention is directed to iso-oriented versus orthogonal flanking stimuli. Thus the exact same contextual stimulus arrangement can have completely opposite effects on neural responses, in some cases leading to orientation-tuned suppression and in other cases leading to orientation-tuned enhancement. Here we show that stimulus-based suppression and enhancement of fMRI responses in humans depend on small changes in the focus of attention and can be explained by a model that combines feature-based attention with response normalization. Neurons in the primary visual cortex (V1) respond to stimuli within a restricted portion of the visual field, termed their "receptive field." However, neuronal responses can also be influenced by stimuli that surround a receptive field, although the nature of these contextual interactions and underlying neural mechanisms are debated. Here we show that the response in V1 to a stimulus in the same context can be either suppressed or enhanced depending on the focus of attention. We are able to explain the results using a simple computational model that combines two well-established properties of visual cortical responses: response normalization and feature-based enhancement. Copyright © 2015 the authors.
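
    The sketch below illustrates the kind of computation such a model combines: a divisive-normalization unit whose excitatory drive and suppressive pool are both scaled by attentional gain. The specific equation, gain values, and drive strengths are assumptions chosen for the demonstration, not the authors' fitted model.

    ```python
    # Toy divisive-normalization unit tuned to the center orientation. Feature-based
    # attention is modeled as a multiplicative gain on drives matching the attended
    # orientation; all parameter values are illustrative assumptions.
    def normalized_response(drive_center, drive_flanker, gain_center, gain_flanker,
                            sigma=1.0):
        excitation = gain_center * drive_center
        pool = sigma + gain_center * drive_center + gain_flanker * drive_flanker
        return excitation / pool

    ISO, ORTHO = 1.0, 0.3   # iso-oriented flankers feed the suppressive pool more strongly

    # Attention on the center stimulus: iso flankers suppress the response more.
    print("attend center :",
          round(normalized_response(1.0, ISO, 2.0, 1.0), 3),    # ~0.500
          round(normalized_response(1.0, ORTHO, 2.0, 1.0), 3))  # ~0.606

    # Attention on the flankers: the feature-based gain spreads to the center unit
    # only when the flankers share its orientation, so iso flankers now enhance.
    print("attend flanker:",
          round(normalized_response(1.0, ISO, 2.5, 2.5), 3),    # ~0.417
          round(normalized_response(1.0, ORTHO, 1.0, 2.5), 3))  # ~0.364
    ```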

  2. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    PubMed Central

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
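
    A minimal sketch of the kind of single-trial, sliding-window decoding described above, assuming NumPy and scikit-learn are available. The data shapes, the 25-sample window, and the linear discriminant classifier are illustrative assumptions rather than the paper's exact analysis.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_electrodes, n_times = 200, 16, 300     # e.g., 300 samples ~ 300 ms at 1 kHz
    X = rng.normal(size=(n_trials, n_electrodes, n_times))   # simulated field potentials
    y = rng.integers(0, 5, n_trials)                          # five object categories
    X[y == 0, :, 100:150] += 0.5                              # injected category signal ~100 ms

    window = 25                                               # sliding window (samples)
    accuracy = []
    for start in range(0, n_times - window + 1, window):
        feats = X[:, :, start:start + window].reshape(n_trials, -1)
        accuracy.append(cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean())
    print(np.round(accuracy, 2))   # decoding accuracy as a function of time after onset
    ```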

  3. Human infrared vision is triggered by two-photon chromophore isomerization

    PubMed Central

    Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof

    2014-01-01

    Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and that their responses display a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate direct two-photon isomerization of the visual chromophore. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
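
    The quadratic power dependence mentioned above is the standard signature of a two-photon process: a response proportional to P^n has slope n on log-log axes. The short worked example below (made-up numbers, NumPy assumed) shows how that exponent can be estimated.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    power = np.array([1.0, 2.0, 4.0, 8.0, 16.0])                            # relative laser power
    response = 0.7 * power ** 2 * (1 + 0.03 * rng.normal(size=power.size))  # simulated responses

    # Fit log(response) = n * log(power) + const; the slope estimates the exponent n.
    slope, _ = np.polyfit(np.log(power), np.log(response), 1)
    print(f"fitted exponent n = {slope:.2f}  (n ~ 1: one-photon, n ~ 2: two-photon)")
    ```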

  4. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  5. The selectivity of responses to red-green colour and achromatic contrast in the human visual cortex: an fMRI adaptation study.

    PubMed

    Mullen, Kathy T; Chang, Dorita H F; Hess, Robert F

    2015-12-01

    There is controversy as to how responses to colour in the human brain are organized within the visual pathways. A key issue is whether there are modular pathways that respond selectively to colour or whether there are common neural substrates for both colour and achromatic (Ach) contrast. We used functional magnetic resonance imaging (fMRI) adaptation to investigate the responses of early and extrastriate visual areas to colour and Ach contrast. High-contrast red-green (RG) and Ach sinewave rings (0.5 cycles/degree, 2 Hz) were used as both adapting stimuli and test stimuli in a block design. We found robust adaptation to RG or Ach contrast in all visual areas. Cross-adaptation between RG and Ach contrast occurred in all areas indicating the presence of integrated, colour and Ach responses. Notably, we revealed contrasting trends for the two test stimuli. For the RG test, unselective processing (robust adaptation to both RG and Ach contrast) was most evident in the early visual areas (V1 and V2), but selective responses, revealed as greater adaptation between the same stimuli than cross-adaptation between different stimuli, emerged in the ventral cortex, in V4 and VO in particular. For the Ach test, unselective responses were again most evident in early visual areas but Ach selectivity emerged in the dorsal cortex (V3a and hMT+). Our findings support a strong presence of integrated mechanisms for colour and Ach contrast across the visual hierarchy, with a progression towards selective processing in extrastriate visual areas. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Independent effects of motivation and spatial attention in the human visual cortex.

    PubMed

    Bayer, Mareike; Rossi, Valentina; Vanlessen, Naomi; Grass, Annika; Schacht, Annekathrin; Pourtois, Gilles

    2017-01-01

    Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex. © The Author (2016). Published by Oxford University Press.

  7. The visual white matter: The application of diffusion MRI and fiber tractography to vision science

    PubMed Central

    Rokem, Ariel; Takemura, Hiromasa; Bock, Andrew S.; Scherf, K. Suzanne; Behrmann, Marlene; Wandell, Brian A.; Fine, Ione; Bridge, Holly; Pestilli, Franco

    2017-01-01

    Visual neuroscience has traditionally focused much of its attention on understanding the response properties of single neurons or neuronal ensembles. The visual white matter and the long-range neuronal connections it supports are fundamental in establishing such neuronal response properties and visual function. This review article provides an introduction to measurements and methods to study the human visual white matter using diffusion MRI. These methods allow us to measure the microstructural and macrostructural properties of the white matter in living human individuals; they allow us to trace long-range connections between neurons in different parts of the visual system and to measure the biophysical properties of these connections. We also review a range of findings from recent studies on connections between different visual field maps, the effects of visual impairment on the white matter, and the properties underlying networks that process visual information supporting visual face recognition. Finally, we discuss a few promising directions for future studies. These include new methods for analysis of MRI data, open datasets that are becoming available to study brain connectivity and white matter properties, and open source software for the analysis of these data. PMID:28196374

  8. The Brightness of Colour

    PubMed Central

    Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau

    2009-01-01

    Background: The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this 'illusion' to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies.

    Results: Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI.

    Conclusions: The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses are more likely to represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398

  9. Induced and evoked neural correlates of orientation selectivity in human visual cortex.

    PubMed

    Koelewijn, Loes; Dumont, Julie R; Muthukumaraswamy, Suresh D; Rich, Anina N; Singh, Krish D

    2011-02-14

    Orientation discrimination is much better for patterns oriented along the horizontal or vertical (cardinal) axes than for patterns oriented obliquely, but the neural basis for this is not known. Previous animal neurophysiology and human neuroimaging studies have demonstrated only a moderate bias for cardinal versus oblique orientations, with fMRI showing a larger response to cardinals in primary visual cortex (V1) and EEG demonstrating both increased magnitudes and reduced latencies of transient evoked responses. Here, using MEG, we localised and characterised induced gamma and transient evoked responses to stationary circular grating patches of three orientations (0, 45, and 90° from vertical). Surprisingly, we found that the sustained gamma response was larger for oblique, compared to cardinal, stimuli. This "inverse oblique effect" was also observed in the earliest (80 ms) evoked response, whereas later responses (120 ms) showed a trend towards the reverse, "classic", oblique response. Source localisation demonstrated that the sustained gamma and early evoked responses were localised to medial visual cortex, whilst the later evoked responses came from both this early visual area and a source in a more inferolateral extrastriate region. These results suggest that (1) the early evoked and sustained gamma responses manifest the initial tuning of V1 neurons, with the stronger response to oblique stimuli possibly reflecting increased tuning widths for these orientations, and (2) the classic behavioural oblique effect is mediated by an extrastriate cortical area and may also implicate feedback from extrastriate to primary visual cortex. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  11. Integration of visual and non-visual self-motion cues during voluntary head movements in the human brain.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2018-05-15

    Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Rapid Categorization of Human and Ape Faces in 9-Month-Old Infants Revealed by Fast Periodic Visual Stimulation.

    PubMed

    Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno

    2017-10-02

    This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.

  13. Cross-orientation suppression in human visual cortex

    PubMed Central

    Heeger, David J.

    2011-01-01

    Cross-orientation suppression was measured in human primary visual cortex (V1) to test the normalization model. Subjects viewed vertical target gratings (of varying contrasts) with or without a superimposed horizontal mask grating (fixed contrast). We used functional magnetic resonance imaging (fMRI) to measure the activity in each of several hypothetical channels (corresponding to subpopulations of neurons) with different orientation tunings and fit these orientation-selective responses with the normalization model. For the V1 channel maximally tuned to the target orientation, responses increased with target contrast but were suppressed when the horizontal mask was added, evident as a shift in the contrast gain of this channel's responses. For the channel maximally tuned to the mask orientation, a constant baseline response was evoked for all target contrasts when the mask was absent; responses decreased with increasing target contrast when the mask was present. The normalization model provided a good fit to the contrast-response functions with and without the mask. In a control experiment, the target and mask presentations were temporally interleaved, and we found no shift in contrast gain, i.e., no evidence for suppression. We conclude that the normalization model can explain cross-orientation suppression in human visual cortex. The approach adopted here can be applied broadly to infer, simultaneously, the responses of several subpopulations of neurons in the human brain that span particular stimulus or feature spaces, and characterize their interactions. In addition, it allows us to investigate how stimuli are represented by the inferred activity of entire neural populations. PMID:21775720
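
    The contrast-gain shift described above falls out of the standard normalization equation. The sketch below evaluates a generic form of it with placeholder parameters (not the authors' fitted values), showing how adding a mask shifts the contrast-response function of the target-tuned channel.

    ```python
    import numpy as np

    def normalization_crf(c_target, c_mask, r_max=1.0, n=2.0, sigma=0.15):
        """Response of the target-tuned channel under divisive normalization."""
        return r_max * c_target ** n / (c_target ** n + c_mask ** n + sigma ** n)

    c_target = np.array([0.03, 0.06, 0.12, 0.25, 0.5, 1.0])   # target contrasts
    print("no mask  :", normalization_crf(c_target, 0.0).round(3))
    print("mask 0.5 :", normalization_crf(c_target, 0.5).round(3))  # contrast-gain shift
    ```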

  14. Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects

    ERIC Educational Resources Information Center

    Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan

    2011-01-01

    How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…

  15. Plasticity of the human visual system after retinal gene therapy in patients with Leber’s congenital amaurosis

    PubMed Central

    Ashtari, Manzar; Zhang, Hui; Cook, Philip A.; Cyckowski, Laura L.; Shindler, Kenneth S.; Marshall, Kathleen A.; Aravand, Puya; Vossough, Arastoo; Gee, James C.; Maguire, Albert M.; Baker, Chris I.; Bennett, Jean

    2015-01-01

    Much of our knowledge of the mechanisms underlying plasticity in the visual cortex in response to visual impairment, vision restoration, and environmental interactions comes from animal studies. We evaluated human brain plasticity in a group of patients with Leber’s congenital amaurosis (LCA), who regained vision through gene therapy. Using non-invasive multimodal neuroimaging methods, we demonstrated that reversing blindness with gene therapy promoted long-term structural plasticity in the visual pathways emanating from the treated retina of LCA patients. The data revealed improvements and normalization along the visual fibers corresponding to the site of retinal injection of the gene therapy vector carrying the therapeutic gene in the treated eye compared to the visual pathway for the untreated eye of LCA patients. After gene therapy, the primary visual pathways (for example, geniculostriate fibers) in the treated retina were similar to those of sighted control subjects, whereas the primary visual pathways of the untreated retina continued to deteriorate. Our results suggest that visual experience, enhanced by gene therapy, may be responsible for the reorganization and maturation of synaptic connectivity in the visual pathways of the treated eye in LCA patients. The interactions between the eye and the brain enabled improved and sustained long-term visual function in patients with LCA after gene therapy. PMID:26180100

  16. Unravelling the development of the visual cortex: implications for plasticity and repair

    PubMed Central

    Bourne, James A

    2010-01-01

    The visual cortex comprises over 50 areas in the human, each with a specified role and distinct physiology, connectivity and cellular morphology. How these individual areas emerge during development still remains something of a mystery and, although much attention has been paid to the initial stages of the development of the visual cortex, especially its lamination, very little is known about the mechanisms responsible for the arealization and functional organization of this region of the brain. In recent years we have started to discover that it is the interplay of intrinsic (molecular) and extrinsic (afferent connections) cues that are responsible for the maturation of individual areas, and that there is a spatiotemporal sequence in the maturation of the primary visual cortex (striate cortex, V1) and the multiple extrastriate/association areas. Studies in both humans and non-human primates have started to highlight the specific neural underpinnings responsible for the maturation of the visual cortex, and how experience-dependent plasticity and perturbations to the visual system can impact upon its normal development. Furthermore, damage to specific areas of the visual cortex, such as the primary visual cortex (V1), is a common occurrence as a result of a stroke, neurotrauma, disease or hypoxia in both neonates and adults alike. However, the consequences of a focal injury differ between the immature and adult brain, with the immature brain demonstrating a higher level of functional resilience. With better techniques for examining specific molecular and connectional changes, we are now starting to uncover the mechanisms responsible for the increased neural plasticity that leads to significant recovery following injury during this early phase of life. Further advances in our understanding of postnatal development/maturation and plasticity observed during early life could offer new strategies to improve outcomes by recapitulating aspects of the developmental program in the adult brain. PMID:20722872

  17. Attentional load and sensory competition in human vision: modulation of fMRI responses by load at fixation during task-irrelevant stimulation in the peripheral visual field.

    PubMed

    Schwartz, Sophie; Vuilleumier, Patrik; Hutton, Chloe; Maravita, Angelo; Dolan, Raymond J; Driver, Jon

    2005-06-01

    Perceptual suppression of distractors may depend on both endogenous and exogenous factors, such as attentional load of the current task and sensory competition among simultaneous stimuli, respectively. We used functional magnetic resonance imaging (fMRI) to compare these two types of attentional effects and examine how they may interact in the human brain. We varied the attentional load of a visual monitoring task performed on a rapid stream at central fixation without altering the central stimuli themselves, while measuring the impact on fMRI responses to task-irrelevant peripheral checkerboards presented either unilaterally or bilaterally. Activations in visual cortex for irrelevant peripheral stimulation decreased with increasing attentional load at fixation. This relative decrease was present even in V1, but became larger for successive visual areas through to V4. Decreases in activation for contralateral peripheral checkerboards due to higher central load were more pronounced within retinotopic cortex corresponding to 'inner' peripheral locations relatively near the central targets than for more eccentric 'outer' locations, demonstrating a predominant suppression of nearby surround rather than strict 'tunnel vision' during higher task load at central fixation. Contralateral activations for peripheral stimulation in one hemifield were reduced by competition with concurrent stimulation in the other hemifield only in inferior parietal cortex, not in retinotopic areas of occipital visual cortex. In addition, central attentional load interacted with competition due to bilateral versus unilateral peripheral stimuli specifically in posterior parietal and fusiform regions. These results reveal that task-dependent attentional load, and interhemifield stimulus-competition, can produce distinct influences on the neural responses to peripheral visual stimuli within the human visual system. These distinct mechanisms in selective visual processing may be integrated within posterior parietal areas, rather than earlier occipital cortex.

  18. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually induced near-infrared spectroscopy (NIRS) responses were used to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to make the different flickering sequences mutually independent. Six subjects were recruited in this study and were asked to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli and the flicker sequences of the different stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target could be discerned from those to non-gazed targets by applying a simple averaging process. The accuracies for all six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
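
    The averaging logic described above can be sketched as follows (NumPy assumed). The sampling rate, epoch length, onset lists, and the peak-based score are assumptions for illustration, not the authors' implementation.

    ```python
    import numpy as np

    fs = 10.0                          # assumed NIRS sampling rate (Hz)
    epoch_len = int(15 * fs)           # analyse 15 s after each flickering-segment onset

    def onset_locked_average(signal, onsets_s):
        epochs = [signal[int(t * fs): int(t * fs) + epoch_len] for t in onsets_s]
        epochs = [e - e[0] for e in epochs if len(e) == epoch_len]   # baseline at onset
        return np.mean(epochs, axis=0)

    def classify_gazed_target(signal, onsets_per_stimulus):
        """Score each stimulus by the peak of its onset-locked average response."""
        scores = [onset_locked_average(signal, onsets).max()
                  for onsets in onsets_per_stimulus]
        return int(np.argmax(scores))

    # Usage with simulated data: four stimuli with independent, randomized onsets
    rng = np.random.default_rng(3)
    signal = rng.normal(size=int(600 * fs))                    # 10-min recording
    onsets_per_stimulus = [np.sort(rng.uniform(0, 580, 25)) for _ in range(4)]
    print("decoded target:", classify_gazed_target(signal, onsets_per_stimulus))
    ```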

  19. Task-dependent enhancement of facial expression and identity representations in human cortex.

    PubMed

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  20. The sequence of cortical activity inferred by response latency variability in the human ventral pathway of face processing.

    PubMed

    Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan

    2018-04-11

    Variability in neuronal response latency has typically been attributed to random noise. Previous studies of single cells and large neuronal populations have shown that this temporal variability tends to increase along the visual pathway. Inspired by these previous studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would have larger variability in their response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine sulcus to the fusiform gyrus was detected more reliably from the size of the response-latency variability than from the timing of the maximal response peaks. Using two areas in the ventral visual pathway, we show that the variability in response latency across brain areas can be used to infer the sequence of cortical activity.
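
    A toy sketch of the latency-variability logic on simulated single-trial time courses: estimate a latency per trial from the peak time, then compare the spread of those latencies between an early (calcarine) and a later (fusiform) source. The peak-picking rule and all simulation parameters are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    times = np.arange(0, 400)                          # ms after stimulus onset

    def simulate_trials(mean_latency, jitter_sd, n_trials=100):
        latencies = rng.normal(mean_latency, jitter_sd, n_trials)
        return np.stack([np.exp(-0.5 * ((times - lat) / 20.0) ** 2)
                         + 0.2 * rng.normal(size=times.size) for lat in latencies])

    def latency_sd(trials):
        """Spread of single-trial latencies, estimated from the per-trial peak time."""
        return np.std(times[np.argmax(trials, axis=1)])

    calcarine = simulate_trials(mean_latency=100, jitter_sd=8)    # early source
    fusiform  = simulate_trials(mean_latency=170, jitter_sd=25)   # later source
    print("latency SD  calcarine:", round(float(latency_sd(calcarine)), 1),
          " fusiform:", round(float(latency_sd(fusiform)), 1))
    ```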

  1. Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion.

    PubMed

    Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki

    2009-02-01

    The human brain uses visual motion inputs not only to generate a subjective sensation of motion but also to directly guide involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have met with only partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while varying the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found a quantitative agreement in peak latency (approximately 100-150 ms) and correlated changes between MEG and MFR as the spatiotemporal frequency was varied. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., to have the lowest detection threshold) at higher spatial frequencies and to have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.

  2. Encoding model of temporal processing in human visual cortex.

    PubMed

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models the neural responses to the stimulus from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of the temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
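
    A minimal sketch of a two-temporal-channel encoding model of the kind described above: sustained and transient neural predictors are built from the stimulus time course, convolved with a hemodynamic response function, and combined with region-specific weights. The gamma-shaped HRF, the 100-ms transient burst, and the weights are simplified assumptions, not the study's fitted parameters.

    ```python
    import numpy as np

    dt = 0.01                                           # 10-ms resolution
    t = np.arange(0, 30, dt)
    stimulus = ((t % 6) < 3).astype(float)              # 3-s on / 3-s off blocks

    # Sustained channel follows the stimulus; transient channel gives brief bursts
    # (~100 ms here, an arbitrary choice) at stimulus onsets and offsets.
    sustained = stimulus
    onsets_offsets = np.abs(np.diff(stimulus, prepend=0.0))
    transient = np.convolve(onsets_offsets, np.ones(int(0.1 / dt)))[: t.size]

    hrf_t = np.arange(0, 20, dt)
    hrf = hrf_t ** 5 * np.exp(-hrf_t)                   # crude gamma-shaped HRF (assumed)
    hrf /= hrf.sum()

    def channel_predictor(neural):
        pred = np.convolve(neural, hrf)[: t.size]
        return pred / pred.max()                        # scale each predictor to unit peak

    pred_sustained = channel_predictor(sustained)
    pred_transient = channel_predictor(transient)

    # Region-specific BOLD predictions as weighted sums of the two channel predictors
    lateral_region = 0.1 * pred_sustained + 1.0 * pred_transient   # transient-dominated
    ventral_region = 0.6 * pred_sustained + 1.0 * pred_transient   # mixed, transient > sustained
    print(round(lateral_region.max(), 3), round(ventral_region.max(), 3))
    ```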

  3. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396

  4. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems that use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
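
    A band-pass, edge-enhancing image-plane operation of the kind compared to human vision above can be sketched as a difference-of-Gaussians (center-surround) filter, assuming NumPy and SciPy are available; the filter scales and the test image are arbitrary choices for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_of_gaussians(image, sigma_center=1.0, sigma_surround=3.0):
        """Center-surround (band-pass) filtering: enhances edges while compressing
        slowly varying luminance, broadly similar to early visual preprocessing."""
        return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

    # Test image: a luminance step superimposed on a smooth gradient
    x = np.linspace(0.0, 1.0, 128)
    image = np.tile((x > 0.5).astype(float) + 0.3 * x, (128, 1))

    edges = difference_of_gaussians(image)
    print("response near the step :", round(float(np.abs(edges[64, 60:68]).max()), 3))
    print("response in flat region:", round(float(np.abs(edges[64, 5:13]).max()), 3))
    ```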

  5. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on predicting and assisting drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in drivers' vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and the vehicle's mechanical status in the preceding second. A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and the completeness of a curve's visual information are significant factors for drivers' perception-response time.
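
    As a rough illustration of the modeling idea (predicting a categorical perception-response-time class from environment, vision, and vehicle-state predictors), the sketch below uses multinomial logistic regression from scikit-learn as a stand-in for the study's multinomial log-linear model; the feature names, class labels, and simulated data are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(5)
    n = 300
    X = np.column_stack([
        rng.uniform(50, 500, n),     # hypothetical: curve radius visible to the driver (m)
        rng.uniform(0, 1, n),        # hypothetical: completeness of visual guidance (0-1)
        rng.uniform(20, 80, n),      # hypothetical: speed in the preceding second (km/h)
    ])
    y = rng.integers(0, 3, n)        # hypothetical response-time class: 0 fast, 1 medium, 2 slow

    # With the default lbfgs solver this fits a multinomial logistic model.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    print("class probabilities for one curve:", model.predict_proba(X[:1]).round(2))
    ```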

  6. Re-examining overlap between tactile and visual motion responses within hMT+ and STS

    PubMed Central

    Jiang, Fang; Beauchamp, Michael S.; Fine, Ione

    2015-01-01

    Here we examine overlap between tactile and visual motion BOLD responses within the human MT+ complex. Although several studies have reported tactile responses overlapping with hMT+, many used group average analyses, leaving it unclear whether these responses were restricted to sub-regions of hMT+. Moreover, previous studies either employed a tactile task or passive stimulation, leaving it unclear whether or not tactile responses in hMT+ are simply the consequence of visual imagery. Here we carried out a replication of one of the classic papers finding tactile responses in hMT+ (Hagen et al. 2002). We mapped MT and MST in individual subjects using visual field localizers. We then examined responses to tactile motion on the arm, either presented passively or in the presence of a visual task performed at fixation designed to minimize visualization of the concurrent tactile stimulation. To our surprise, without a visual task, we found only weak tactile motion responses in MT (6% of voxels showing tactile responses) and MST (2% of voxels). With an unrelated visual task designed to withdraw attention from the tactile modality, responses in MST were reduced to almost nothing (<1% of voxels). Consistent with previous results, we did observe tactile responses in STS regions superior and anterior to hMT+. Despite the lack of individual overlap, group-averaged responses produced strong spurious overlap between tactile and visual motion responses within hMT+ that resembled those observed in previous studies. The weak nature of tactile responses in hMT+ (and their abolition by withdrawal of attention) suggests that hMT+ may not serve as a supramodal motion processing module. PMID:26123373

  7. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  8. Auditory Detection of the Human Brainstem Auditory Evoked Response.

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.; And Others

    1993-01-01

    This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…

  9. Neuronal mechanisms underlying differences in spatial resolution between darks and lights in human vision.

    PubMed

    Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel

    2017-12-01

    Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.

  10. Neuronal mechanisms underlying differences in spatial resolution between darks and lights in human vision

    PubMed Central

    Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W.; Zaidi, Qasim; Alonso, Jose-Manuel

    2017-01-01

    Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway. PMID:29196762
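
    A deliberately simplified sketch of the proposed mechanism: a saturating ON-pathway contrast-response function compresses response differences between high-contrast light stimuli more than a less saturating OFF-pathway function does for darks. The toy functions and parameters below are illustrative assumptions, not the authors' model.

    ```python
    def on_response(contrast, c50=0.25):
        """Strongly saturating ON-pathway contrast-response (toy Naka-Rushton form)."""
        return contrast / (contrast + c50)

    def off_response(contrast, c50=1.0):
        """More linear OFF-pathway contrast-response over the same range (toy)."""
        return contrast / (contrast + c50)

    # Two high-contrast stimuli to be discriminated: the saturating ON function
    # yields a smaller response difference for lights than the OFF function does
    # for equivalent darks, consistent with poorer discrimination of light targets.
    low, high = 0.6, 0.8
    print("ON  response difference:", round(on_response(high) - on_response(low), 3))
    print("OFF response difference:", round(off_response(high) - off_response(low), 3))
    ```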

  11. Where Similarity Beats Redundancy: The Importance of Context, Higher Order Similarity, and Response Assignment

    ERIC Educational Resources Information Center

    Eidels, Ami; Townsend, James T.; Pomerantz, James R.

    2008-01-01

    People are especially efficient in processing certain visual stimuli such as human faces or good configurations. It has been suggested that topology and geometry play important roles in configural perception. Visual search is one area in which configurality seems to matter. When either of 2 target features leads to a correct response and the…

  12. Directional asymmetries in human smooth pursuit eye movements.

    PubMed

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.

  13. Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery.

    PubMed

    Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A

    2015-08-01

    We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Plasticity of the human visual system after retinal gene therapy in patients with Leber's congenital amaurosis.

    PubMed

    Ashtari, Manzar; Zhang, Hui; Cook, Philip A; Cyckowski, Laura L; Shindler, Kenneth S; Marshall, Kathleen A; Aravand, Puya; Vossough, Arastoo; Gee, James C; Maguire, Albert M; Baker, Chris I; Bennett, Jean

    2015-07-15

    Much of our knowledge of the mechanisms underlying plasticity in the visual cortex in response to visual impairment, vision restoration, and environmental interactions comes from animal studies. We evaluated human brain plasticity in a group of patients with Leber's congenital amaurosis (LCA), who regained vision through gene therapy. Using non-invasive multimodal neuroimaging methods, we demonstrated that reversing blindness with gene therapy promoted long-term structural plasticity in the visual pathways emanating from the treated retina of LCA patients. The data revealed improvements and normalization along the visual fibers corresponding to the site of retinal injection of the gene therapy vector carrying the therapeutic gene in the treated eye compared to the visual pathway for the untreated eye of LCA patients. After gene therapy, the primary visual pathways (for example, geniculostriate fibers) in the treated retina were similar to those of sighted control subjects, whereas the primary visual pathways of the untreated retina continued to deteriorate. Our results suggest that visual experience, enhanced by gene therapy, may be responsible for the reorganization and maturation of synaptic connectivity in the visual pathways of the treated eye in LCA patients. The interactions between the eye and the brain enabled improved and sustained long-term visual function in patients with LCA after gene therapy. Copyright © 2015, American Association for the Advancement of Science.

  15. Keith Haring, Felix Gonzalez-Torres, Wolfgang Tillmans, and the AIDS Epidemic: The Use of Visual Art in a Health Humanities Course.

    PubMed

    Smith, Jason A

    2018-02-23

    Contemporary art can be a powerful pedagogical tool in the health humanities. Students in an undergraduate course in the health humanities explore the subjective experience of illness and develop their empathy by studying three artists in the context of the AIDS epidemic: Keith Haring, Felix Gonzalez-Torres, and Wolfgang Tillmans. Using assignments based in narrative pedagogy, students expand their empathic response to pain and suffering. The role of visual art in health humanities pedagogy is discussed.

  16. First comparative approach to touchscreen-based visual object-location paired-associates learning in humans (Homo sapiens) and a nonhuman primate (Microcebus murinus).

    PubMed

    Schmidtke, Daniel; Ammersdörfer, Sandra; Joly, Marine; Zimmermann, Elke

    2018-05-10

    A recent study suggests that a specific, touchscreen-based task on visual object-location paired-associates learning (PAL), the so-called Different PAL (dPAL) task, allows effective translation from animal models to humans. Here, we adapted the task to a nonhuman primate (NHP), the gray mouse lemur, and provide the first evidence for the successful comparative application of the task to humans and NHPs. Young human adults reach the learning criterion after considerably fewer sessions (by one order of magnitude) than young, adult NHPs, which is likely due to the faster, voluntary rejection of ineffective learning strategies in humans and their almost immediate rule generalization. At criterion, however, all human subjects solved the task by either applying a visuospatial rule or, more rarely, by memorizing all possible stimulus combinations and responding correctly based on global visual information. An error-profile analysis in humans and NHPs suggests that successful learning in NHPs is comparably based either on the formation of visuospatial associative links or on more reflexive, visually guided stimulus-response learning. The classification in the NHPs is further supported by an analysis of the individual response latencies, which are considerably higher in NHPs classified as spatial learners. Our results, therefore, support the high translational potential of the standardized, touchscreen-based dPAL task by providing the first empirical and comparable evidence for two different cognitive processes underlying dPAL performance in primates. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. High-dynamic-range scene compression in humans

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2006-02-01

    Single-pixel dynamic-range compression alters a particular input value to a unique output value - a look-up table. It is used in chemical and most digital photographic systems having S-shaped transforms to render high-range scenes onto low-range media. Post-receptor neural processing is spatial, as shown by the physiological experiments of Dowling, Barlow, Kuffler, and Hubel & Wiesel. Human vision does not render a particular receptor-quanta catch as a unique response. Instead, because of spatial processing, the response to a particular quanta catch can be any color. Visual response is scene dependent. Stockham proposed an approach to model human range compression using low-spatial-frequency filters. Campbell, Ginsberg, Wilson, Watson, Daly and many others have developed spatial-frequency channel models. This paper describes experiments measuring the properties of desirable spatial-frequency filters for a variety of scenes. Given the radiances of each pixel in the scene and the observed appearances of objects in the image, one can calculate the visual mask for that individual image. Here, visual mask is the spatial pattern of changes made by the visual system in processing the input image. It is the spatial signature of human vision. Low-dynamic-range images with many white areas need no spatial filtering. High-dynamic-range images with many blacks, or deep shadows, require strong spatial filtering. Sun on the right and shade on the left requires directional filters. These experiments show that variable, scene-dependent filters are necessary to mimic human vision. Although spatial-frequency filters can model these scene-dependent appearances, the problem remains that an analysis of the scene is still needed to calculate the scene-dependent strengths of each of the filters for each frequency.
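
    As a rough illustration of the kind of scene-dependent, low-spatial-frequency filtering discussed above (and not McCann's actual algorithm), the following sketch compresses a high-dynamic-range luminance image by subtracting a blurred copy in the log domain; the "shadow fraction" heuristic used to set the filter strength is an assumption introduced here for illustration only.

```python
# Minimal sketch: compress a high-dynamic-range luminance image by removing a
# low-spatial-frequency estimate in the log domain, with a per-scene filter
# strength. The strength heuristic (fraction of deep-shadow pixels) is illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def compress_hdr(luminance, sigma=30.0):
    """luminance: 2-D array of scene radiances (arbitrary linear units)."""
    log_l = np.log10(np.clip(luminance, 1e-6, None))
    low_freq = gaussian_filter(log_l, sigma)               # low-spatial-frequency "mask"
    # Scene-dependent strength: scenes dominated by deep shadows get stronger filtering.
    shadow_fraction = np.mean(log_l < log_l.max() - 2.0)   # pixels >100x below peak
    strength = np.clip(shadow_fraction, 0.0, 0.8)
    compressed = log_l - strength * low_freq               # remove slow gradients
    out = compressed - compressed.min()
    return out / max(out.max(), 1e-6)                      # scale to display range [0, 1]

# Example: a scene with a shaded left half and a sunlit right half.
scene = np.hstack([np.full((64, 64), 1.0), np.full((64, 64), 1000.0)])
scene *= 1.0 + 0.1 * np.random.rand(64, 128)               # add some texture
display = compress_hdr(scene)
print(display.min(), display.max())
```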

  18. The visual accommodation response during concurrent mental activity

    NASA Technical Reports Server (NTRS)

    Malmstrom, F. V.; Randle, R. J.; Bendix, J. S.; Weber, R. J.

    1980-01-01

    The direction and magnitude of the human visual accommodation response during concurrent mental activity are investigated. Subject focusing responses to targets at distances of 0.0 D, 3.0 D and an indeterminate distance were monitored by means of an optometer during the performance of a backwards counting task and a visual imagery task (thinking near and thinking far). In both experiments, a shift in accommodation towards the visual far point is observed, particularly for the near target, and this shift increases with the duration of the task. The results can be interpreted in terms of both the capacity model of Kahneman (1973) and the autonomic arousal model of Hess and Polt (1964), and are not inconsistent with the possibility of an intermediate resting position.

  19. Mechanisms of migraine aura revealed by functional MRI in human visual cortex

    PubMed Central

    Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.

    2001-01-01

    Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to the percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation) developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655

  20. A Novel Ex Vivo Method for Visualizing Live-Cell Calcium Response Behavior in Intact Human Tumors.

    PubMed

    Koh, James; Hogue, Joyce A; Sosa, Julie A

    2016-01-01

    The functional impact of intratumoral heterogeneity has been difficult to assess in the absence of a means to interrogate dynamic, live-cell biochemical events in the native tissue context of a human tumor. Conventional histological methods can reveal morphology and static biomarker expression patterns but do not provide a means to probe and evaluate tumor functional behavior and live-cell responsiveness to experimentally controlled stimuli. Here, we describe an approach that couples vibratome-mediated viable tissue sectioning with live-cell confocal microscopy imaging to visualize human parathyroid adenoma tumor cell responsiveness to extracellular calcium challenge. Tumor sections prepared as 300 micron-thick tissue slices retain viability throughout a >24 hour observation period and retain the native architecture of the parental tumor. Live-cell observation of biochemical signaling in response to extracellular calcium challenge in the intact tissue slices reveals discrete, heterogeneous kinetic waveform categories of calcium agonist reactivity within each tumor. Plotting the proportion of maximally responsive tumor cells as a function of calcium concentration yields a sigmoid dose-response curve with a calculated calcium EC50 value significantly elevated above published reference values for wild-type calcium-sensing receptor (CASR) sensitivity. Subsequent fixation and immunofluorescence analysis of the functionally evaluated tissue specimens allows alignment and mapping of the physical characteristics of individual cells within the tumor to specific calcium response behaviors. Evaluation of the relative abundance of intracellular PTH in tissue slices challenged with variable calcium concentrations demonstrates that production of the hormone can be dynamically manipulated ex vivo. The capability of visualizing live human tumor tissue behavior in response to experimentally controlled conditions opens a wide range of possibilities for personalized ex vivo therapeutic testing. This highly adaptable system provides a unique platform for live-cell ex vivo provocative testing of human tumor responsiveness to a range of physiological agonists or candidate therapeutic compounds.
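
    The sigmoid dose-response fit and EC50 estimate described above can be illustrated with a standard Hill-equation fit; the calcium concentrations and response fractions below are synthetic placeholders, not data from the study.

```python
# Illustrative only: fit a sigmoid (Hill) dose-response curve to the proportion of
# responsive cells vs extracellular calcium to estimate an EC50. Synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** n)

calcium_mM = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])              # agonist concentration
frac_responding = np.array([0.05, 0.10, 0.22, 0.45, 0.70, 0.85, 0.95])  # synthetic responses

p0 = [0.0, 1.0, 2.0, 3.0]                                               # initial guesses
params, _ = curve_fit(hill, calcium_mM, frac_responding, p0=p0, maxfev=10000)
bottom, top, ec50, n = params
print(f"estimated EC50 = {ec50:.2f} mM, Hill slope = {n:.2f}")
```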

  1. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

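    A highly simplified sketch of the "prediction plus surprise" account tested in this study is given below, using a basic Rescorla-Wagner learner in place of the full Schmajuk-Gray-Lam network; the cue names, contingencies, and equal weighting of the two signals are illustrative assumptions.

```python
# Toy associative-learning sketch: a hypothetical FFA-like population signal modeled
# as a weighted sum of learned face expectation and face surprise (prediction error).
# Not the Schmajuk-Gray-Lam model; contingencies and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3
V = {"cueA": 0.0, "cueB": 0.0}        # learned probability that a face follows each cue

def learn(cue, face_present):
    outcome = 1.0 if face_present else 0.0
    V[cue] += alpha * (outcome - V[cue])          # Rescorla-Wagner update

def ffa_like_signal(cue, face_present, w_pred=1.0, w_surp=1.0):
    """Hypothetical population response = weighted face expectation + face surprise."""
    expectation = V[cue]
    outcome = 1.0 if face_present else 0.0
    surprise = abs(outcome - expectation)
    return w_pred * expectation + w_surp * surprise

# Training: faces follow cueA on 75% of trials and cueB on 25% of trials.
for _ in range(200):
    learn("cueA", rng.random() < 0.75)
    learn("cueB", rng.random() < 0.25)

for cue, face in [("cueA", True), ("cueB", True), ("cueA", False), ("cueB", False)]:
    label = f"{cue}, {'face ' if face else 'house'}"
    print(label, round(ffa_like_signal(cue, face), 2))
```
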
  2. Neural Responses to Visual Food Cues According to Weight Status: A Systematic Review of Functional Magnetic Resonance Imaging Studies

    PubMed Central

    Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was provided in only 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110

  3. Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.

    PubMed

    Pursey, Kirrilly M; Stanwell, Peter; Callister, Robert J; Brain, Katherine; Collins, Clare E; Burrows, Tracy L

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36); however, image selection justification was provided in only 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies.

  4. Additive effects of affective arousal and top-down attention on the event-related brain responses to human bodies.

    PubMed

    Hietanen, Jari K; Kirjavainen, Ilkka; Nummenmaa, Lauri

    2014-12-01

    The early visual event-related 'N170 response' is sensitive to human body configuration and it is enhanced to nude versus clothed bodies. We tested whether the N170 response, as well as the later EPN and P3/LPP responses to nude bodies, reflects the effect of increased arousal elicited by these stimuli or the top-down allocation of object-based attention to the nude bodies. Participants saw pictures of clothed and nude bodies and faces. In each block, participants were asked to direct their attention towards stimuli from a specified target category while ignoring others. Object-based attention did not modulate the N170 amplitudes towards attended stimuli; instead, the N170 response was larger to nude bodies than to stimuli from other categories. Top-down attention and affective arousal had additive effects on the EPN and P3/LPP responses reflecting later processing stages. We conclude that nude human bodies have a privileged status in the visual processing system due to the affective arousal they trigger. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Fostering Kinship with Animals: Animal Portraiture in Humane Education

    ERIC Educational Resources Information Center

    Kalof, Linda; Zammit-Lucia, Joe; Bell, Jessica; Granter, Gina

    2016-01-01

    Visual depictions of animals can alter human perceptions of, emotional responses to, and attitudes toward animals. Our study addressed the potential of a slideshow designed to activate emotional responses to animals to foster feelings of kinship with them. The personal meaning map measured changes in perceptions of animals. The participants were…

  6. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  7. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli

    PubMed Central

    Störmer, Viola S.; McDonald, John J.; Hillyard, Steven A.

    2009-01-01

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex. PMID:20007778

  8. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli.

    PubMed

    Störmer, Viola S; McDonald, John J; Hillyard, Steven A

    2009-12-29

    The question of whether attention makes sensory impressions appear more intense has been a matter of debate for over a century. Recent psychophysical studies have reported that attention increases apparent contrast of visual stimuli, but the issue continues to be debated. We obtained converging neurophysiological evidence from human observers as they judged the relative contrast of visual stimuli presented to the left and right visual fields following a lateralized auditory cue. Cross-modal cueing of attention boosted the apparent contrast of the visual target in association with an enlarged neural response in the contralateral visual cortex that began within 100 ms after target onset. The magnitude of the enhanced neural response was positively correlated with perceptual reports of the cued target being higher in contrast. The results suggest that attention increases the perceived contrast of visual stimuli by boosting early sensory processing in the visual cortex.

  9. Changing the Spatial Scope of Attention Alters Patterns of Neural Gain in Human Cortex

    PubMed Central

    Garcia, Javier O.; Rungratsameetaweemana, Nuttida; Sprague, Thomas C.

    2014-01-01

    Over the last several decades, spatial attention has been shown to influence the activity of neurons in visual cortex in a variety of seemingly conflicting ways. These observations have inspired competing models to account for the influence of attention on perception and behavior. Here, we used electroencephalography (EEG) to assess steady-state visual evoked potentials (SSVEP) in human subjects and showed that highly focused spatial attention primarily enhanced neural responses to high-contrast stimuli (response gain), whereas distributed attention primarily enhanced responses to medium-contrast stimuli (contrast gain). Together, these data suggest that different patterns of neural modulation do not reflect fundamentally different neural mechanisms, but instead reflect changes in the spatial extent of attention. PMID:24381272
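
    The response-gain versus contrast-gain distinction drawn above is conventionally formalized with a Naka-Rushton contrast-response function, sketched below; the parameter values are generic textbook choices, not the fits reported in the study.

```python
# Naka-Rushton contrast-response function commonly used to formalize the
# response-gain vs contrast-gain distinction. Parameter values are illustrative.
import numpy as np

def naka_rushton(c, r_max=1.0, c50=0.3, n=2.0, baseline=0.0):
    """Response to stimulus contrast c (0-1)."""
    return baseline + r_max * c**n / (c**n + c50**n)

contrast = np.linspace(0.01, 1.0, 5)
unattended = naka_rushton(contrast)
response_gain = naka_rushton(contrast, r_max=1.3)   # focused attention: scales R_max,
                                                    # largest boost at high contrasts
contrast_gain = naka_rushton(contrast, c50=0.2)     # distributed attention: lowers c50,
                                                    # largest boost at mid contrasts
for c, u, rg, cg in zip(contrast, unattended, response_gain, contrast_gain):
    print(f"c={c:.2f}  baseline={u:.3f}  response-gain={rg:.3f}  contrast-gain={cg:.3f}")
```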

  10. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
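
    A generic sketch of the kind of multivariate encoding-model analysis used to reconstruct orientation-selective response profiles is shown below on synthetic data; the basis functions, noise level, and least-squares inversion are standard choices and not necessarily the authors' exact pipeline.

```python
# Sketch of a forward/inverted encoding analysis: estimate channel-to-voxel weights
# from training data, then invert the model to reconstruct orientation channel
# responses from new voxel patterns. Synthetic data, idealized tuning functions.
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_chan, n_trials = 50, 9, 180
oris = np.linspace(0, 180, n_chan, endpoint=False)            # channel centers (deg)

def channel_responses(stim_ori):
    """Idealized half-wave-rectified sinusoidal tuning, raised to a power."""
    d = np.deg2rad(stim_ori[:, None] - oris[None, :])
    return np.clip(np.cos(2 * d), 0, None) ** 5               # trials x channels

# Synthetic training data: random voxel weights, known stimulus orientations.
stim_train = rng.uniform(0, 180, n_trials)
C_train = channel_responses(stim_train)                       # trials x channels
W_true = rng.normal(size=(n_chan, n_vox))
B_train = C_train @ W_true + 0.5 * rng.normal(size=(n_trials, n_vox))

# Step 1: estimate channel-to-voxel weights with ordinary least squares.
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]      # channels x voxels

# Step 2: invert the model to reconstruct channel responses for new data.
stim_test = np.full(20, 90.0)                                 # 20 test trials at 90 deg
B_test = channel_responses(stim_test) @ W_true + 0.5 * rng.normal(size=(20, n_vox))
C_test = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)[0].T  # trials x channels

print("reconstructed profile (should peak near 90 deg):")
print(np.round(C_test.mean(axis=0), 2), "at centers", oris)
```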

  11. Retinotopic Maps, Spatial Tuning, and Locations of Human Visual Areas in Surface Coordinates Characterized with Multifocal and Blocked fMRI Designs

    PubMed Central

    Henriksson, Linda; Karvonen, Juha; Salminen-Vaparanta, Niina; Railo, Henry; Vanni, Simo

    2012-01-01

    The localization of visual areas in the human cortex is typically based on mapping the retinotopic organization with functional magnetic resonance imaging (fMRI). The most common approach is to encode the response phase for a slowly moving visual stimulus and to present the result on an individual's reconstructed cortical surface. The main aims of this study were to develop complementary general linear model (GLM)-based retinotopic mapping methods and to characterize the inter-individual variability of the visual area positions on the cortical surface. We studied 15 subjects with two methods: a 24-region multifocal checkerboard stimulus and a blocked presentation of object stimuli at different visual field locations. The retinotopic maps were based on weighted averaging of the GLM parameter estimates for the stimulus regions. In addition to localizing visual areas, both methods could be used to localize multiple retinotopic regions-of-interest. The two methods yielded consistent retinotopic maps in the visual areas V1, V2, V3, hV4, and V3AB. In the higher-level areas IPS0, VO1, LO1, LO2, TO1, and TO2, retinotopy could only be mapped with the blocked stimulus presentation. The gradual widening of spatial tuning and an increase in the responses to stimuli in the ipsilateral visual field along the hierarchy of visual areas likely reflected the increase in the average receptive field size. Finally, after registration to Freesurfer's surface-based atlas of the human cerebral cortex, we calculated the mean and variability of the visual area positions in the spherical surface-based coordinate system and generated probability maps of the visual areas on the average cortical surface. The inter-individual variability in the area locations decreased when the midpoints were calculated along the spherical cortical surface compared with volumetric coordinates. These results can facilitate both analysis of individual functional anatomy and comparisons of visual cortex topology across studies. PMID:22590626

  12. Perceptual learning increases the strength of the earliest signals in visual cortex.

    PubMed

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  13. Wireless physiological monitoring and ocular tracking: 3D calibration in a fully-immersive virtual health care environment.

    PubMed

    Zhang, Lelin; Chi, Yu Mike; Edelstein, Eve; Schulze, Jurgen; Gramann, Klaus; Velasquez, Alvaro; Cauwenberghs, Gert; Macagno, Eduardo

    2010-01-01

    Wireless physiological/neurological monitoring in virtual reality (VR) offers a unique opportunity for unobtrusively quantifying human responses to precisely controlled and readily modulated VR representations of health care environments. Here we present such a wireless, light-weight head-mounted system for measuring electrooculogram (EOG) and electroencephalogram (EEG) activity in human subjects interacting with and navigating in the Calit2 StarCAVE, a five-sided immersive 3-D visualization VR environment. The system can be easily expanded to include other measurements, such as cardiac activity and galvanic skin responses. We demonstrate the capacity of the system to track focus of gaze in 3-D and report a novel calibration procedure for estimating eye movements from responses to the presentation of a set of dynamic visual cues in the StarCAVE. We discuss cyber and clinical applications that include a 3-D cursor for visual navigation in VR interactive environments, and the monitoring of neurological and ocular dysfunction in vision/attention disorders.

  14. Human comfort response to random motions with a dominant vertical motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

    Subjective ride-comfort response ratings were measured on the Langley Visual Motion Simulator using vertical acceleration inputs with various power-spectrum shapes and magnitudes. The data obtained are presented.

  15. fMRI mapping of the visual system in the mouse brain with interleaved snapshot GE-EPI.

    PubMed

    Niranjan, Arun; Christie, Isabel N; Solomon, Samuel G; Wells, Jack A; Lythgoe, Mark F

    2016-10-01

    The use of functional magnetic resonance imaging (fMRI) in mice is increasingly prevalent, providing a means to non-invasively characterise functional abnormalities associated with genetic models of human diseases. The predominant stimulus used in task-based fMRI in the mouse is electrical stimulation of the paw. Task-based fMRI in mice using visual stimuli remains underexplored, despite visual stimuli being common in human fMRI studies. In this study, we map the mouse brain visual system with BOLD measurements at 9.4 T using flashing light stimuli under medetomidine anaesthesia. BOLD responses were observed in the lateral geniculate nucleus, the superior colliculus and the primary visual area of the cortex, and were modulated by the flashing frequency, diffuse versus focussed light, and stimulus context. Negative BOLD responses were measured in the visual cortex at a 10 Hz flashing frequency but turned positive below 5 Hz. In addition, the use of interleaved snapshot GE-EPI improved fMRI image quality without diminishing the temporal contrast-to-noise ratio. Taken together, this work demonstrates a novel methodological protocol in which the mouse brain visual system can be non-invasively investigated using BOLD fMRI. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain.

    PubMed

    Coggan, David D; Baker, Daniel H; Andrews, Timothy J

    2016-01-01

    Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed that categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for all of the variance in the responses to intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.
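
    The time-resolved multivariate pattern analysis described above can be sketched as sliding-window decoding of category from sensor patterns; the synthetic data, classifier, and window size below are illustrative assumptions, not the study's pipeline.

```python
# Schematic time-resolved MVPA on EEG: decode object category from sensor patterns
# in a sliding time window. Synthetic data; category information appears from ~80 ms.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 120, 32, 100      # 100 samples = 0-1000 ms at 100 Hz
labels = rng.integers(0, 3, n_trials)            # bottle / face / house

eeg = rng.normal(size=(n_trials, n_sensors, n_times))
pattern = rng.normal(size=(3, n_sensors))
for t in range(8, n_times):                      # sample 8 ~= 80 ms after onset
    eeg[:, :, t] += 0.6 * pattern[labels]        # add category-specific topography

window = 5                                       # 50 ms sliding window
for t0 in range(0, n_times - window, 10):
    X = eeg[:, :, t0:t0 + window].reshape(n_trials, -1)
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"{t0 * 10:4d}-{(t0 + window) * 10:4d} ms  decoding accuracy = {acc:.2f}")
```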

  17. Astronomy and Sodium Lighting,

    DTIC Science & Technology

    1984-02-01

    Only fragments of this report's front matter (contents pages and figure list) were indexed. The recoverable references concern the wavelength response of the human eye, a figure showing the spectrum of a typical incandescent lamp (operating at approximately 2700 K) together with the human visual response near its 555 nm peak, and a table of retail electricity prices for selected cities (1981-1982).

  18. Toward statistical modeling of saccadic eye-movement and visual saliency.

    PubMed

    Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming

    2014-11-01

    In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit, and generates eye movements by selecting the location with the maximum SGC response. Besides simulating human saccadic behavior, we also demonstrate the superior effectiveness and robustness of our approach over the state of the art through extensive experiments on synthetic patterns and human eye-fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
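
    A toy version of saliency from super-Gaussian components is sketched below: a crude random-search stand-in for projection pursuit finds a high-kurtosis patch projection, and each location is scored by its squared response. This is a simplified illustration, not the authors' implementation.

```python
# Simplified SGC-style saliency: find an image-patch projection with maximal
# kurtosis by random search, then score locations by squared projection response.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
image = rng.normal(size=(64, 64))
image[28:36, 28:36] += 3.0                         # a structured, salient region

p = 8                                              # patch size
patches = np.array([image[i:i + p, j:j + p].ravel()
                    for i in range(0, 64 - p) for j in range(0, 64 - p)])
patches -= patches.mean(axis=0)                    # center the patch ensemble

# Projection pursuit by random search: keep the direction whose response
# distribution is the most super-Gaussian (highest kurtosis).
best_w, best_k = None, -np.inf
for _ in range(500):
    w = rng.normal(size=p * p)
    w /= np.linalg.norm(w)
    k = kurtosis(patches @ w)
    if k > best_k:
        best_w, best_k = w, k

saliency = (patches @ best_w) ** 2                 # squared SGC response per location
peak = np.unravel_index(np.argmax(saliency), (64 - p, 64 - p))
print("most salient patch at", peak, "kurtosis of SGC response:", round(best_k, 2))
```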

  19. Visual exploration and analysis of human-robot interaction rules

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.

  20. Focal damage to macaque photoreceptors produces persistent visual loss

    PubMed Central

    Strazzeri, Jennifer M.; Hunter, Jennifer J.; Masella, Benjamin D.; Yin, Lu; Fischer, William S.; DiLoreto, David A.; Libby, Richard T.; Williams, David R.; Merigan, William H.

    2014-01-01

    Insertion of light-gated channels into inner retina neurons restores neural light responses, light-evoked potentials, visual optomotor responses and visually-guided maze behavior in mice blinded by retinal degeneration. This method of vision restoration bypasses the damaged outer retina, providing stimulation directly to retinal ganglion cells in the inner retina. The approach is similar to that of electronic visual prostheses, but may offer some advantages, such as avoidance of complex surgery and direct targeting of many thousands of neurons. However, the promise of this technique for restoring human vision remains uncertain because rodent animal models, in which it has been largely developed, are not ideal for evaluating visual perception. On the other hand, psychophysical vision studies in macaque can be used to evaluate different approaches to vision restoration in humans. Furthermore, it has not been possible to test vision restoration in macaques, the optimal model for human-like vision, because there has been no macaque model of outer retina degeneration. In this study, we describe development of a macaque model of photoreceptor degeneration that can in future studies be used to test restoration of perception by visual prostheses. Our results show that perceptual deficits caused by focal light damage are restricted to locations at which photoreceptors are damaged, that optical coherence tomography (OCT) can be used to track such lesions, and that adaptive optics retinal imaging, which we recently used for in vivo recording of ganglion cell function, can be used in future studies to examine these lesions. PMID:24316158

  1. Attention reduces spatial uncertainty in human ventral temporal cortex.

    PubMed

    Kay, Kendrick N; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-03-02

    Ventral temporal cortex (VTC) is the latest stage of the ventral "what" visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3-5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal "where" visual pathway [6-10]. Here, we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
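
    A minimal sketch of a pRF model with linear spatial summation followed by a compressive nonlinearity is given below; the visual-field grid, pRF parameters, and the way attention is mimicked (scaling eccentricity, size, and gain) are illustrative assumptions, not the fitted values from the study.

```python
# Minimal compressive spatial summation pRF sketch: predicted voxel response is a
# Gaussian-weighted sum over the stimulus aperture, passed through a power law n < 1.
import numpy as np

def css_prf_response(stim, xs, ys, x0, y0, sigma, n=0.5, gain=1.0):
    """Predicted response of one voxel to a binary stimulus aperture.

    stim   : 2-D array (1 inside the stimulus, 0 elsewhere)
    xs, ys : coordinate grids in degrees of visual angle
    """
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    drive = np.sum(stim * rf)            # linear spatial summation
    return gain * drive ** n             # compressive nonlinearity (n < 1)

# Visual field grid and a small stimulus patch at 3 deg eccentricity.
xs, ys = np.meshgrid(np.linspace(-10, 10, 201), np.linspace(-10, 10, 201))
stim = ((xs - 3) ** 2 + ys ** 2 < 1.0 ** 2).astype(float)

unattended = css_prf_response(stim, xs, ys, x0=2.0, y0=0.0, sigma=2.0)
# Attention mimicked here as a larger, higher-gain, more eccentric pRF (cf. the
# reported increases in eccentricity, size, and gain); scaling factors are made up.
attended = css_prf_response(stim, xs, ys, x0=2.4, y0=0.0, sigma=2.6, gain=1.5)
print(f"unattended response = {unattended:.2f}, attended response = {attended:.2f}")
```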

  2. Attention reduces spatial uncertainty in human ventral temporal cortex

    PubMed Central

    Kay, Kendrick N.; Weiner, Kevin S.; Grill-Spector, Kalanit

    2014-01-01

    SUMMARY Ventral temporal cortex (VTC) is the latest stage of the ventral ‘what’ visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3–5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal ‘where’ visual pathway [6–10]. Here we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. PMID:25702580

  3. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
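
    The general logic of estimating a linear filter from stimulus-response correlations can be sketched as lagged least-squares regression on synthetic perturbation and eye-velocity traces, as below; the filter shape, sampling rate, and noise level are made-up stand-ins for the study's measurements.

```python
# Generic sketch of recovering a linear temporal filter from stimulus-response
# correlations, using least squares on synthetic motion and eye-velocity traces.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_lags = 5000, 40                      # 40 lags = 400 ms at 100 Hz
stimulus = rng.normal(size=n_samples)             # random motion perturbations

true_filter = np.exp(-np.arange(n_lags) / 8.0) * np.sin(np.arange(n_lags) / 4.0)
eye_velocity = np.convolve(stimulus, true_filter)[:n_samples]
eye_velocity += 0.5 * rng.normal(size=n_samples)  # motor/measurement noise

# Build a lagged design matrix and solve for the filter by least squares.
X = np.zeros((n_samples, n_lags))
for lag in range(n_lags):
    X[lag:, lag] = stimulus[:n_samples - lag]
estimated = np.linalg.lstsq(X, eye_velocity, rcond=None)[0]

err = np.linalg.norm(estimated - true_filter) / np.linalg.norm(true_filter)
print(f"relative error of recovered filter: {err:.3f}")
```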

  4. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    PubMed

    Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I

    2017-06-01

    The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.

  5. Functional Imaging of Audio–Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences?

    PubMed Central

    Muers, Ross S.; Salo, Emma; Slater, Heather; Petkov, Christopher I.

    2017-01-01

    Abstract The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. PMID:28419201

  6. Timing of target discrimination in human frontal eye fields.

    PubMed

    O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2004-01-01

    Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
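
    For reference, the d' scores mentioned above come from the standard signal-detection formula d' = Z(hit rate) - Z(false-alarm rate); the hit and false-alarm rates in the snippet are hypothetical, not the study's data.

```python
# Standard signal-detection computation of d' from hit and false-alarm rates.
# The rates below are made-up placeholders for illustration.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"hypothetical control condition: d' = {d_prime(0.85, 0.15):.2f}")
print(f"hypothetical TMS-impaired condition: d' = {d_prime(0.75, 0.20):.2f}")
```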

  7. Perceptual and Physiological Responses to Jackson Pollock's Fractals

    PubMed Central

    Taylor, Richard P.; Spehar, Branka; Van Donkelaar, Paul; Hagerhall, Caroline M.

    2011-01-01

    Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility – are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns. PMID:21734876
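
    The fractal character of such poured patterns is usually quantified with a box-counting estimate of fractal dimension; the sketch below applies the method to a toy random-walk pattern rather than an actual Pollock image.

```python
# Box-counting estimate of fractal dimension on a toy binary pattern.
import numpy as np

rng = np.random.default_rng(6)

# Toy "poured" pattern: a random walk traced onto a 512x512 canvas.
canvas = np.zeros((512, 512), dtype=bool)
x, y = 256.0, 256.0
for _ in range(20000):
    x = np.clip(x + rng.normal(scale=2.0), 0, 511)
    y = np.clip(y + rng.normal(scale=2.0), 0, 511)
    canvas[int(x), int(y)] = True

sizes, counts = [2, 4, 8, 16, 32, 64], []
for s in sizes:
    # Count boxes of side s that contain any part of the pattern.
    boxed = canvas.reshape(512 // s, s, 512 // s, s)
    counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))

# Fractal dimension = -slope of log(count) vs log(box size).
slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
print(f"estimated fractal dimension D = {-slope:.2f}")
```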

  8. The coupling of cerebral blood flow and oxygen metabolism with brain activation is similar for simple and complex stimuli in human primary visual cortex.

    PubMed

    Griffeth, Valerie E M; Simon, Aaron B; Buxton, Richard B

    2015-01-01

    Quantitative functional MRI (fMRI) experiments to measure blood flow and oxygen metabolism coupling in the brain typically rely on simple repetitive stimuli. Here we compared such stimuli with a more naturalistic stimulus. Previous work on the primary visual cortex showed that direct attentional modulation evokes a blood flow (CBF) response with a relatively large oxygen metabolism (CMRO2) response in comparison to an unattended stimulus, which evokes a much smaller metabolic response relative to the flow response. We hypothesized that a similar effect would be associated with a more engaging stimulus, and tested this by measuring the primary human visual cortex response to two contrast levels of a radial flickering checkerboard in comparison to the response to free viewing of brief movie clips. We did not find a significant difference in the blood flow-metabolism coupling (n=%ΔCBF/%ΔCMRO2) between the movie stimulus and the flickering checkerboards employing two different analysis methods: a standard analysis using the Davis model and a new analysis using a heuristic model dependent only on measured quantities. This finding suggests that in the primary visual cortex a naturalistic stimulus (in comparison to a simple repetitive stimulus) is either not sufficient to provoke a change in flow-metabolism coupling by attentional modulation as hypothesized, that the experimental design disrupted the cognitive processes underlying the response to a more natural stimulus, or that the technique used is not sensitive enough to detect a small difference. Copyright © 2014 Elsevier Inc. All rights reserved.
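
    For context, the Davis model referred to above relates BOLD, CBF, and CMRO2 changes as sketched below; the calibration parameter M and the exponents alpha and beta are typical literature values, not the ones estimated in this study.

```python
# Davis model used in calibrated-BOLD studies of flow-metabolism coupling:
#   dBOLD/BOLD0 = M * (1 - (CBF/CBF0)**(alpha - beta) * (CMRO2/CMRO2_0)**beta)
# Parameter values below are typical literature choices, for illustration only.
def davis_bold(dcbf_pct, n, M=0.08, alpha=0.38, beta=1.5):
    """Predicted fractional BOLD change for a %CBF change and coupling n = %dCBF/%dCMRO2."""
    f = 1.0 + dcbf_pct / 100.0                    # normalized CBF
    r = 1.0 + (dcbf_pct / n) / 100.0              # normalized CMRO2
    return M * (1.0 - f ** (alpha - beta) * r ** beta)

for n in (2.0, 3.0, 4.0):                         # tighter vs looser coupling
    print(f"n = {n:.1f}: predicted BOLD change = {100 * davis_bold(40.0, n):.2f}% "
          f"for a 40% CBF increase")
```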

  9. Multiscale neural connectivity during human sensory processing in the brain

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir A.; Runnova, Anastasia E.; Frolov, Nikita S.; Makarov, Vladimir V.; Nedaivozov, Vladimir; Koronovskii, Alexey A.; Pisarchik, Alexander; Hramov, Alexander E.

    2018-05-01

    Stimulus-related brain activity is considered using wavelet-based analysis of neural interactions between occipital and parietal brain areas in the alpha (8-12 Hz) and beta (15-30 Hz) frequency bands. We show that human sensory processing related to the perception of visual stimuli induces a brain response that results in different patterns of parieto-occipital interaction in these bands. In the alpha frequency band, the parieto-occipital neuronal network is characterized by a homogeneous increase of the interaction between all interconnected areas, both within the occipital and parietal lobes and between them. In the beta frequency band, the occipital lobe starts to play a leading role in the dynamics of the occipital-parietal network: the perception of visual stimuli excites the visual center in the occipital area and then, due to the increase of parieto-occipital interactions, this excitation is transferred to the parietal area, where the attentional center is located. When stimuli are characterized by a high degree of ambiguity, we find a greater increase in the interaction between interconnected areas in the parietal lobe due to the increase in attention. Based on the revealed mechanisms, we describe the complex response of the parieto-occipital neuronal network during the perception and primary processing of visual stimuli. The results can serve as an essential complement to the existing theory of the neural aspects of visual stimuli processing.
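
    A much-simplified way to quantify the kind of band-limited parieto-occipital coupling described above is an alpha-band envelope correlation, sketched below with a Butterworth filter and Hilbert transform standing in for the authors' wavelet-based analysis; the signals are synthetic.

```python
# Simplified alpha-band coupling sketch: band-pass two synthetic channels, extract
# Hilbert envelopes, and correlate them. A stand-in for the wavelet analysis above.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                        # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)

shared_alpha = np.sin(2 * np.pi * 10 * t) * (1 + 0.3 * np.sin(2 * np.pi * 0.2 * t))
occipital = shared_alpha + 0.8 * rng.normal(size=t.size)
parietal = 0.7 * shared_alpha + 0.8 * rng.normal(size=t.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
env_occ = np.abs(hilbert(filtfilt(b, a, occipital)))
env_par = np.abs(hilbert(filtfilt(b, a, parietal)))

coupling = np.corrcoef(env_occ, env_par)[0, 1]
print(f"alpha-band envelope correlation (occipital-parietal): {coupling:.2f}")
```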

  10. Visual preference in a human-reared agile gibbon (Hylobates agilis).

    PubMed

    Tanaka, Masayuki; Uchikoshi, Makiko

    2010-01-01

    Visual preference was evaluated in a male agile gibbon. The subject was raised by humans immediately after birth, but lived with his biological family from one year of age. Visual preference was assessed using a free-choice task in which five or six photographs of different primate species, including humans, were presented on a touch-sensitive screen. The subject touched one of them. Food rewards were delivered irrespective of the subject's responses. We prepared two types of stimulus sets. With set 1, the subject touched photographs of humans more frequently than those of other species, recalling previous findings in human-reared chimpanzees. With set 2, photographs of nine species of gibbons were presented. Chimpanzees touched photographs of white-handed gibbons more than those of other gibbon species. The gibbon subject initially touched photographs of agile gibbons more than white-handed gibbons, but after one and two years his choice patterns resembled the chimpanzees'. The results suggest that, as in chimpanzees, visual preferences of agile gibbons are not genetically programmed but develop through social experience during infancy.

  11. Pennsylvania Classroom Guide to Safety in the Visual Arts.

    ERIC Educational Resources Information Center

    Oltman, Debra L.

    Exposure to certain art materials can damage the human body. Some of these materials are identified together with factors that influence exposure, including duration, frequency, and environmental conditions. Responsibility for providing a safe working environment for the creation of visual arts in the classroom lies with the instructor, principal,…

  12. The role of early visual cortex in visual short-term memory and visual attention.

    PubMed

    Offen, Shani; Schluppeck, Denis; Heeger, David J

    2009-06-01

    We measured cortical activity with functional magnetic resonance imaging to probe the involvement of early visual cortex in visual short-term memory and visual attention. In four experimental tasks, human subjects viewed two visual stimuli separated by a variable delay period. The tasks placed differential demands on short-term memory and attention, but the stimuli were visually identical until after the delay period. Early visual cortex exhibited sustained responses throughout the delay when subjects performed attention-demanding tasks, but delay-period activity was not distinguishable from zero when subjects performed a task that required short-term memory. This dissociation reveals different computational mechanisms underlying the two processes.

  13. Pulvinar neurons reveal neurobiological evidence of past selection for rapid detection of snakes.

    PubMed

    Van Le, Quan; Isbell, Lynne A; Matsumoto, Jumpei; Nguyen, Minh; Hori, Etsuro; Maior, Rafael S; Tomaz, Carlos; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-11-19

    Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but not, surprisingly, from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest, fastest responses, and the responses were not reduced by low spatial filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates' heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage.

  14. Pulvinar neurons reveal neurobiological evidence of past selection for rapid detection of snakes

    PubMed Central

    Van Le, Quan; Isbell, Lynne A.; Matsumoto, Jumpei; Nguyen, Minh; Hori, Etsuro; Maior, Rafael S.; Tomaz, Carlos; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but not, surprisingly, from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest, fastest responses, and the responses were not reduced by low spatial filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates’ heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage. PMID:24167268

  15. Lateralization of the human mirror neuron system.

    PubMed

    Aziz-Zadeh, Lisa; Koski, Lisa; Zaidel, Eran; Mazziotta, John; Iacoboni, Marco

    2006-03-15

    A cortical network consisting of the inferior frontal, rostral inferior parietal, and posterior superior temporal cortices has been implicated in representing actions in the primate brain and is critical to imitation in humans. This neural circuitry may be an evolutionary precursor of neural systems associated with language. However, language is predominantly lateralized to the left hemisphere, whereas the degree of lateralization of the imitation circuitry in humans is unclear. We conducted a functional magnetic resonance imaging study of imitation of finger movements with lateralized stimuli and responses. During imitation, activity in the inferior frontal and rostral inferior parietal cortex, although fairly bilateral, was stronger in the hemisphere ipsilateral to the visual stimulus and response hand. This ipsilateral pattern is at variance with the typical contralateral activity of primary visual and motor areas. Reliably increased signal in the right superior temporal sulcus (STS) was observed for both left-sided and right-sided imitation tasks, although subthreshold activity was also observed in the left STS. Overall, the data indicate that visual and motor components of the human mirror system are not left-lateralized. The left hemisphere superiority for language, then, must have been favored by other types of language precursors, perhaps auditory or multimodal action representations.

  16. Visualization and Rule Validation in Human-Behavior Representation

    ERIC Educational Resources Information Center

    Moya, Lisa Jean; McKenzie, Frederic D.; Nguyen, Quynh-Anh H.

    2008-01-01

    Human behavior representation (HBR) models simulate human behaviors and responses. The Joint Crowd Federate [TM] cognitive model developed by the Virginia Modeling, Analysis, and Simulation Center (VMASC) and licensed by WernerAnderson, Inc., models the cognitive behavior of crowds to provide credible crowd behavior in support of military…

  17. The Uncanny Valley Does Not Interfere with Level 1 Visual Perspective Taking

    PubMed Central

    MacDorman, Karl F.; Srinivas, Preethi; Patel, Himalaya

    2014-01-01

    When a computer-animated human character looks eerily realistic, viewers report a loss of empathy; they have difficulty taking the character’s perspective. To explain this perspective-taking impairment, known as the uncanny valley, a novel theory is proposed: The more human or less eerie a character looks, the more it interferes with level 1 visual perspective taking when the character’s perspective differs from that of the human observer (e.g., because the character competitively activates shared circuits in the observer’s brain). The proposed theory is evaluated in three experiments involving a dot-counting task in which participants either assumed or ignored the perspective of characters varying in their human photorealism and eeriness. Although response times and error rates were lower when the number of dots faced by the observer and character were the same (congruent condition) than when they were different (incongruent condition), no consistent pattern emerged between the human photorealism or eeriness of the characters and participants’ response times and error rates. Thus, the proposed theory is unsupported for level 1 visual perspective taking. As the effects of the uncanny valley on empathy have not previously been investigated systematically, these results provide evidence to eliminate one possible explanation. PMID:25221383

  18. Dystrophin Is Required for Proper Functioning of Luminance and Red-Green Cone Opponent Mechanisms in the Human Retina.

    PubMed

    Barboni, Mirella Telles Salgueiro; Martins, Cristiane Maria Gomes; Nagy, Balázs Vince; Tsai, Tina; Damico, Francisco Max; da Costa, Marcelo Fernandes; de Cassia, Rita; Pavanello, M; Lourenço, Naila Cristina Vilaça; de Cerqueira, Antonia Maria Pereira; Zatz, Mayana; Kremers, Jan; Ventura, Dora Fix

    2016-07-01

    Visual information is processed in parallel pathways in the visual system. Parallel processing begins at the synapse between the photoreceptors and their postreceptoral neurons in the human retina. The integrity of this first neural connection is vital for normal visual processing downstream. Of the numerous elements necessary for proper functioning of this synaptic contact, dystrophin proteins in the eye play an important role. Deficiency of muscle dystrophin causes Duchenne muscular dystrophy (DMD), an X-linked disease that affects muscle function and leads to decreased life expectancy. In DMD patients, postreceptoral retinal mechanisms underlying scotopic and photopic vision and ON- and OFF-pathway responses are also altered. In this study, we recorded the electroretinogram (ERG) while preferentially activating the (red-green) opponent or the luminance pathway, and compared data from healthy participants (n = 16) with those of DMD patients (n = 10). The stimuli were heterochromatic sinusoidal modulations at a mean luminance of 200 cd/m2. The recordings allowed us also to analyze ON and OFF cone-driven retinal responses. We found significant differences in 12-Hz response amplitudes and phases between controls and DMD patients, with conditions with large luminance content resulting in larger response amplitudes in DMD patients compared to controls, whereas responses of DMD patients were smaller when pure chromatic modulation was given. The results suggest that dystrophin is required for the proper function of luminance and red-green cone opponent mechanisms in the human retina.

  19. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  20. Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    PubMed Central

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904

  1. Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex

    PubMed Central

    Jansen, Michael; Li, Xiaobing; Lashgari, Reza; Kremkow, Jens; Bereshpolova, Yulia; Swadlow, Harvey A.; Zaidi, Qasim; Alonso, Jose-Manuel

    2015-01-01

    Local field potentials (LFPs) have become an important measure of neuronal population activity in the brain and could provide robust signals to guide the implant of visual cortical prosthesis in the future. However, it remains unclear whether LFPs can detect weak cortical responses (e.g., cortical responses to equiluminant color) and whether they have enough visual spatial resolution to distinguish different chromatic and achromatic stimulus patterns. By recording from awake behaving macaques in primary visual cortex, here we demonstrate that LFPs respond robustly to pure chromatic stimuli and exhibit ∼2.5 times lower spatial resolution for chromatic than achromatic stimulus patterns, a value that resembles the ratio of achromatic/chromatic resolution measured with psychophysical experiments in humans. We also show that, although the spatial resolution of LFP decays with visual eccentricity as is also the case for single neurons, LFPs have higher spatial resolution and show weaker response suppression to low spatial frequencies than spiking multiunit activity. These results indicate that LFP recordings are an excellent approach to measure spatial resolution from local populations of neurons in visual cortex including those responsive to color. PMID:25416722

  2. A neural measure of precision in visual working memory.

    PubMed

    Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward

    2013-05-01

    Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion-but not the amplitude-of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
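
    The forward encoding model referred to above treats each voxel's response as a weighted sum of a small set of orientation-selective channels, estimates the weights on one half of the data, and inverts them to recover channel response profiles on the other half. The sketch below is a minimal, generic version of that idea (in the spirit of Brouwer and Heeger's approach) with simulated data; the dimensions, basis-function shape, and noise level are assumptions, not the study's values.

        import numpy as np

        n_voxels, n_channels, n_trials = 100, 9, 180
        rng = np.random.default_rng(1)

        orientations = rng.uniform(0, 180, n_trials)               # trial orientations in degrees
        centers = np.linspace(0, 180, n_channels, endpoint=False)  # channel preferred orientations

        def channel_responses(thetas):
            """Idealized half-rectified cosine tuning with 180-degree periodicity."""
            d = np.deg2rad(thetas[:, None] - centers[None, :])
            return np.maximum(np.cos(2 * d), 0) ** 6               # (n_trials, n_channels)

        C = channel_responses(orientations)                        # design matrix
        W_true = rng.standard_normal((n_voxels, n_channels))
        B = W_true @ C.T + 0.5 * rng.standard_normal((n_voxels, n_trials))  # simulated voxel data

        train, test = slice(0, n_trials // 2), slice(n_trials // 2, n_trials)

        # Step 1: estimate voxel-by-channel weights from the training half (least squares)
        W_hat = np.linalg.lstsq(C[train], B[:, train].T, rcond=None)[0].T

        # Step 2: invert the weights to recover channel response profiles for the test half
        C_hat = np.linalg.lstsq(W_hat, B[:, test], rcond=None)[0]
        print(C_hat.shape)                                         # (n_channels, n_test_trials)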

  3. Simultaneous chromatic and luminance human electroretinogram responses.

    PubMed

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-07-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.
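
    The frequency arrangement described above (luminance at twice the temporal frequency of the chromatic modulation) can be pictured as counterphase red/green modulation at f Hz carrying the chromatic signal, with an in-phase luminance modulation added at 2f Hz. The sketch below is a hypothetical reconstruction of that arrangement, not the authors' calibrated stimulus; the frequencies, contrasts, and units are illustrative.

        import numpy as np

        fs = 1000.0                     # samples per second (assumed)
        t = np.arange(0, 1.0, 1 / fs)
        f_chrom = 4.0                   # chromatic (red-green) frequency in Hz (illustrative)
        m_chrom, m_lum = 0.2, 0.2       # chromatic and luminance contrasts (illustrative)
        L0 = 1.0                        # mean level of each primary, arbitrary units

        chromatic = m_chrom * np.sin(2 * np.pi * f_chrom * t)       # f Hz, counterphase in R and G
        luminance = m_lum * np.sin(2 * np.pi * 2 * f_chrom * t)     # 2f Hz, in phase in R and G

        red = L0 * (1 + chromatic + luminance)
        green = L0 * (1 - chromatic + luminance)

        # The summed (luminance) signal contains only the 2f component, and the
        # difference (chromatic) signal contains only the f component.
        assert np.allclose(red + green, 2 * L0 * (1 + luminance))
        assert np.allclose(red - green, 2 * L0 * chromatic)

    Under this construction, a first-harmonic (f) ERG component isolates chromatic processing and a second-harmonic (2f) component isolates luminance processing, which is the logic the abstract describes.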

  4. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  5. The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies

    PubMed Central

    Hietanen, Jari K.; Nummenmaa, Lauri

    2011-01-01

    Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574

  6. Human comfort response to random motions with a dominant transverse motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

    Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with transverse acceleration inputs with various power spectra shapes and magnitudes. The results show only little influence of spectra shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

  7. Human comfort response to random motions with a dominant longitudinal motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

    Subjective ride comfort response ratings were measured on the Langley Visual Motion Simulator with longitudinal acceleration inputs with various power spectra shapes and magnitudes. The results show only little influence of spectra shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

  8. Human comfort response to random motions with a dominant rolling motion

    NASA Technical Reports Server (NTRS)

    Stone, R. W., Jr.

    1975-01-01

    Subjective ride comfort response ratings were measured on a visual motion simulator with rolling velocity inputs with various power spectra shapes and magnitudes. The results show only little influence of spectra shape on comfort response. The effects of magnitude on comfort response indicate the applicability of psychophysical precepts for comfort modeling.

  9. Transcranial focused ultrasound stimulation of human primary visual cortex

    NASA Astrophysics Data System (ADS)

    Lee, Wonhye; Kim, Hyun-Chul; Jung, Yujin; Chung, Yong An; Song, In-Uk; Lee, Jong-Hwan; Yoo, Seung-Schik

    2016-09-01

    Transcranial focused ultrasound (FUS) is making progress as a new non-invasive mode of regional brain stimulation. Current evidence of FUS-mediated neurostimulation for humans has been limited to the observation of subjective sensory manifestations and electrophysiological responses, thus warranting the identification of stimulated brain regions. Here, we report FUS sonication of the primary visual cortex (V1) in humans, resulting in elicited activation not only from the sonicated brain area, but also from the network of regions involved in visual and higher-order cognitive processes (as revealed by simultaneous acquisition of blood-oxygenation-level-dependent functional magnetic resonance imaging). Accompanying phosphene perception was also reported. The electroencephalographic (EEG) responses showed distinct peaks associated with the stimulation. None of the participants showed any adverse effects from the sonication based on neuroimaging and neurological examinations. Retrospective numerical simulation of the acoustic profile showed the presence of individual variability in terms of the location and intensity of the acoustic focus. With exquisite spatial selectivity and capability for depth penetration, FUS may confer a unique utility in providing non-invasive stimulation of region-specific brain circuits for neuroscientific and therapeutic applications.

  10. Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.

    PubMed

    Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J

    2007-06-01

    The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
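
    The measures described above (amplitude and synchrony of stimulus-locked oscillations in the 20-35 Hz range) are conventionally obtained by band-pass filtering the single-trial EEG and applying the Hilbert transform to get an amplitude envelope and phase, from which an inter-trial phase-locking value can be computed. The sketch below illustrates that generic pipeline; the filter order, sampling rate, and data array are placeholders, not the authors' settings.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500.0                                    # sampling rate in Hz (assumed)
        rng = np.random.default_rng(2)
        trials = rng.standard_normal((60, int(fs)))   # hypothetical (n_trials, n_samples) epochs

        # Zero-phase band-pass filter in the 20-35 Hz range
        b, a = butter(4, [20.0, 35.0], btype="bandpass", fs=fs)
        filtered = filtfilt(b, a, trials, axis=-1)

        analytic = hilbert(filtered, axis=-1)
        amplitude = np.abs(analytic).mean(axis=0)     # mean 20-35 Hz amplitude envelope over trials
        phase = np.angle(analytic)

        # Inter-trial phase-locking value: 1 means identical phase across trials at that sample
        plv = np.abs(np.exp(1j * phase).mean(axis=0))
        print(amplitude.shape, plv.shape)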

  11. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music

    PubMed Central

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V.; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible mechanisms underlying affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (−) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects. PMID:28421007
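
    The frontal alpha power asymmetry used above as a proposed valence index is conventionally computed as the difference in log alpha power between homologous right and left frontal electrodes (often F4 and F3). The sketch below follows that common convention; the electrode choice, band limits, sampling rate, and data are assumptions rather than details taken from this record.

        import numpy as np
        from scipy.signal import welch

        fs = 256.0                                 # sampling rate in Hz (assumed)
        rng = np.random.default_rng(3)
        f3 = rng.standard_normal(int(60 * fs))     # stand-in for the F3 recording
        f4 = rng.standard_normal(int(60 * fs))     # stand-in for the F4 recording

        def alpha_power(signal, fs, lo=8.0, hi=13.0):
            """Mean power spectral density in the alpha band."""
            f, pxx = welch(signal, fs=fs, nperseg=int(4 * fs))
            mask = (f >= lo) & (f <= hi)
            return pxx[mask].mean()

        # By convention, larger values indicate relatively greater left-frontal activity
        faa = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
        print(f"frontal alpha asymmetry: {faa:.3f}")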

  12. Arousal Rules: An Empirical Investigation into the Aesthetic Experience of Cross-Modal Perception with Emotional Visual Music.

    PubMed

    Lee, Irene Eunyoung; Latchoumane, Charles-Francois V; Jeong, Jaeseung

    2017-01-01

    Emotional visual music is a promising tool for the study of aesthetic perception in human psychology; however, the production of such stimuli and the mechanisms of auditory-visual emotion perception remain poorly understood. In Experiment 1, we suggested a literature-based, directive approach to emotional visual music design, and inspected the emotional meanings thereof using the self-rated psychometric and electroencephalographic (EEG) responses of the viewers. A two-dimensional (2D) approach to the assessment of emotion (the valence-arousal plane) with frontal alpha power asymmetry EEG (as a proposed index of valence) validated our visual music as an emotional stimulus. In Experiment 2, we used our synthetic stimuli to investigate possible mechanisms underlying affective evaluation in relation to audio and visual integration conditions between modalities (namely congruent, complementation, or incongruent combinations). In this experiment, we found that, when arousal information between auditory and visual modalities was contradictory [for example, active (+) on the audio channel but passive (-) on the video channel], the perceived emotion of cross-modal perception (visual music) followed the channel conveying the stronger arousal. Moreover, we found that an enhancement effect (heightened and compacted in subjects' emotional responses) in the aesthetic perception of visual music might occur when the two channels contained contradictory arousal information and positive congruency in valence and texture/control. To the best of our knowledge, this work is the first to propose a literature-based directive production of emotional visual music prototypes and the validations thereof for the study of cross-modally evoked aesthetic experiences in human subjects.

  13. Brain-computer interface on the basis of EEG system Encephalan

    NASA Astrophysics Data System (ADS)

    Maksimenko, Vladimir; Badarin, Artem; Nedaivozov, Vladimir; Kirsanov, Daniil; Hramov, Alexander

    2018-04-01

    We propose a brain-computer interface (BCI) for estimating the brain response to presented visual tasks. The proposed BCI is based on the Encephalan-EEGR-19/26 EEG recorder (Medicom MTD, Russia) supplemented by custom-developed acquisition software. The BCI was tested in experimental sessions in which the subject perceived bistable visual stimuli and classified them according to their interpretation. We exposed the participant to different external conditions and observed a significant decrease in the response associated with perceiving the bistable visual stimuli when a distraction was present. Based on these results, we propose that the BCI can be used to estimate human alertness while solving tasks that require substantial visual attention.

  14. Pupil size directly modulates the feedforward response in human primary visual cortex independently of attention.

    PubMed

    Bombeke, Klaas; Duthoo, Wout; Mueller, Sven C; Hopf, Jens-Max; Boehler, C Nico

    2016-02-15

    Controversy revolves around the question of whether psychological factors like attention and emotion can influence the initial feedforward response in primary visual cortex (V1). Although traditionally, the electrophysiological correlate of this response in humans (the C1 component) has been found to be unaltered by psychological influences, a number of recent studies have described attentional and emotional modulations. Yet, research into psychological effects on the feedforward V1 response has neglected possible direct contributions of concomitant pupil-size modulations, which are known to also occur under various conditions of attentional load and emotional state. Here we tested the hypothesis that such pupil-size differences themselves directly affect the feedforward V1 response. We report data from two complementary experiments, in which we used procedures that modulate pupil size without differences in attentional load or emotion while simultaneously recording pupil-size and EEG data. Our results confirm that pupil size indeed directly influences the feedforward V1 response, showing an inverse relationship between pupil size and early V1 activity. While it is unclear to what extent this effect represents a functionally-relevant adaptation, it identifies pupil-size differences as an important modulating factor of the feedforward response of V1 and could hence represent a confounding variable in research investigating the neural influence of psychological factors on early visual processing. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  16. Simultaneous diffuse near-infrared imaging of hemodynamic and oxygenation changes and electroencephalographic measurements of neuronal activity in the human brain

    NASA Astrophysics Data System (ADS)

    Noponen, Tommi; Kicic, Dubravko; Kotilahti, Kalle; Kajava, Timo; Kahkonen, Seppo; Nissila, Ilkka; Merilainen, Pekka; Katila, Toivo

    2005-04-01

    Visually evoked hemodynamic responses and potentials were simultaneously measured using a 16-channel optical imaging instrument and a 60-channel electroencephalography instrument during normo-, hypo- and hypercapnia from three subjects. Flashing and pattern-reversed checkerboard stimuli were used. The study protocol included two counterbalanced measurements during both normo- and hypocapnia and normo- and hypercapnia. Hypocapnia was produced by controlled hyperventilation and hypercapnia by breathing carbon dioxide enriched air. Near-infrared imaging was also used to monitor the concentration changes of oxy- and deoxyhaemoglobin due to hypo- and hypercapnia. Hemodynamic responses and evoked potentials were successfully detected for each subject above the visual cortex. The latencies of the hemodynamic responses during hypocapnia were shorter whereas during hypercapnia they were longer when compared to the latencies during normocapnia. Hypocapnia tended to decrease the latencies of visually evoked potentials compared to those during normocapnia while hypercapnia did not show any consistent effect on the potentials. The developed measurement setup and the study protocol provide the opportunity to investigate the neurovascular coupling and the links between the baseline level of blood flow, electrical activity and hemodynamic responses in the human brain.

  17. Human prosaccades and antisaccades under risk: effects of penalties and rewards on visual selection and the value of actions.

    PubMed

    Ross, M; Lanyon, L J; Viswanathan, J; Manoach, D S; Barton, J J S

    2011-11-24

    Monkey studies report greater activity in the lateral intraparietal area and more efficient saccades when targets coincide with the location of prior reward cues, even when cue location does not indicate which responses will be rewarded. This suggests that reward can modulate spatial attention and visual selection independent of the "action value" of the motor response. Our goal was first to determine whether reward modulated visual selection similarly in humans, and next, to discover whether reward and penalty differed in effect, if cue effects were greater for cognitively demanding antisaccades, and if financial consequences that were contingent on stimulus location had spatially selective effects. We found that motivational cues reduced all latencies, more for reward than penalty. There was an "inhibition-of-return"-like effect at the location of the cue, but unlike the results in monkeys, cue valence did not modify this effect in prosaccades, and the inhibition-of-return effect was slightly increased rather than decreased in antisaccades. When financial consequences were contingent on target location, locations without reward or penalty consequences lost the benefits seen in noncontingent trials, whereas locations with consequences maintained their gains. We conclude that unlike monkeys, humans show reward effects not on visual selection but on the value of actions. The human saccadic system has both the capacity to enhance responses to multiple locations simultaneously, and the flexibility to focus motivational enhancement only on locations with financial consequences. Reward is more effective than penalty, and both interact with the additional attentional demands of the antisaccade task. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.
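
    The "largely linear" combination reported above amounts to an additive model in which the measured response is the sum of spontaneous and evoked components, with a negligible interaction term. A toy regression illustrating how that claim can be tested is sketched below; the data are simulated and the variable names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 200
        spontaneous = rng.standard_normal(n)               # pre-stimulus V1 signal (a.u.)
        stimulus = rng.integers(0, 2, n).astype(float)     # 0 = blank trial, 1 = stimulus trial
        observed = spontaneous + 1.0 * stimulus + 0.3 * rng.standard_normal(n)

        # Linear model with an interaction term: a purely additive (superposition)
        # account predicts the interaction coefficient to be near zero.
        X = np.column_stack([np.ones(n), spontaneous, stimulus, spontaneous * stimulus])
        beta = np.linalg.lstsq(X, observed, rcond=None)[0]
        names = ["intercept", "spontaneous", "stimulus", "interaction"]
        print(dict(zip(names, beta.round(2))))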

  19. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.

  20. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing "target feature" but not in "distracter feature"-processing regions. TMS-induced BOLD signals increase in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in face-responsive fusiform area (FFA) when faces were attended to. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  1. Theory of Visual Attention (TVA) applied to mice in the 5-choice serial reaction time task.

    PubMed

    Fitzpatrick, C M; Caballero-Puntiverio, M; Gether, U; Habekost, T; Bundesen, C; Vangkilde, S; Woldbye, D P D; Andreasen, J T; Petersen, A

    2017-03-01

    The 5-choice serial reaction time task (5-CSRTT) is widely used to measure rodent attentional functions. In humans, many attention studies in healthy and clinical populations have used testing based on Bundesen's Theory of Visual Attention (TVA) to estimate visual processing speeds and other parameters of attentional capacity. We aimed to bridge these research fields by modifying the 5-CSRTT's design and by mathematically modelling data to derive attentional parameters analogous to human TVA-based measures. C57BL/6 mice were tested in two 1-h sessions on consecutive days with a version of the 5-CSRTT where stimulus duration (SD) probe length was varied based on information from previous TVA studies. Thereafter, a scopolamine hydrobromide (HBr; 0.125 or 0.25 mg/kg) pharmacological challenge was undertaken, using a Latin square design. Mean score values were modelled using a new three-parameter version of TVA to obtain estimates of visual processing speeds, visual thresholds and motor response baselines in each mouse. The parameter estimates for each animal were reliable across sessions, showing that the data were stable enough to support analysis on an individual level. Scopolamine HBr dose-dependently reduced 5-CSRTT attentional performance while also increasing reward collection latency at the highest dose. Upon TVA modelling, scopolamine HBr significantly reduced visual processing speed at both doses, while having less pronounced effects on visual thresholds and motor response baselines. This study shows for the first time how 5-CSRTT performance in mice can be mathematically modelled to yield estimates of attentional capacity that are directly comparable to estimates from human studies.
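
    In TVA-based analyses, mean accuracy at each stimulus duration is typically modelled as an exponential growth function with a visual threshold t0 (the longest duration yielding no encoding), a processing rate v, and, in the three-parameter version described above, a motor response baseline. The sketch below fits such a curve with SciPy; the exact functional form, starting values, and data are illustrative assumptions rather than the authors' specification.

        import numpy as np
        from scipy.optimize import curve_fit

        def tva_score(sd, v, t0, baseline):
            """Expected accuracy as a function of stimulus duration sd (seconds)."""
            effective = np.clip(sd - t0, 0, None)           # no encoding below the threshold t0
            return baseline + (1 - baseline) * (1 - np.exp(-v * effective))

        # Hypothetical stimulus durations (s) and mean accuracies for one animal
        durations = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.0])
        accuracy = np.array([0.12, 0.18, 0.35, 0.55, 0.72, 0.82, 0.86])

        params, _ = curve_fit(tva_score, durations, accuracy,
                              p0=[5.0, 0.02, 0.1],
                              bounds=([0, 0, 0], [np.inf, 0.5, 1]))
        v_hat, t0_hat, baseline_hat = params
        print(f"v = {v_hat:.2f}/s, t0 = {t0_hat * 1000:.0f} ms, baseline = {baseline_hat:.2f}")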

  2. Binocular and Monocular Depth Cues in Online Feedback Control of 3-D Pointing Movement

    PubMed Central

    Hu, Bo; Knill, David C.

    2012-01-01

    Previous work has shown that humans continuously use visual feedback of the hand to control goal-directed movements online. In most studies, visual error signals were predominantly in the image plane and thus were available in an observer’s retinal image. We investigate how humans use visual feedback about finger depth provided by binocular and monocular depth cues to control pointing movements. When binocularly viewing a scene in which the hand movement was made in free space, subjects were about 60 ms slower in responding to perturbations in depth than in the image plane. When monocularly viewing a scene designed to maximize the available monocular cues to finger depth (motion, changing size and cast shadows), subjects showed no response to perturbations in depth. Thus, binocular cues from the finger are critical to effective online control of hand movements in depth. An optimal feedback controller that takes into account the low peripheral stereoacuity and the inherent ambiguity in cast shadows can explain the difference in response time in the binocular conditions and the lack of response in the monocular conditions. PMID:21724567

  3. Differential Classical Conditioning Selectively Heightens Response Gain of Neural Population Activity in Human Visual Cortex

    PubMed Central

    Song, Inkyung; Keil, Andreas

    2015-01-01

    Neutral cues, after being reliably paired with noxious events, prompt defensive engagement and amplified sensory responses. To examine the neurophysiology underlying these adaptive changes, we quantified the contrast-response function of visual cortical population activity during differential aversive conditioning. Steady-state visual evoked potentials (ssVEPs) were recorded while participants discriminated the orientation of rapidly flickering grating stimuli. During each trial, luminance contrast of the gratings was slowly increased and then decreased. Right-tilted gratings (CS+) were paired with loud white noise but left-tilted gratings (CS−) were not. The contrast-following waveform envelope of ssVEPs showed selective amplification of the CS+ only during the high-contrast stage of the viewing epoch. Findings support the notion that motivational relevance, learned in a time frame of minutes, affects vision through a response gain mechanism. PMID:24981277
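
    Contrast-response functions like the one measured here are commonly parameterized with a Naka-Rushton (hyperbolic-ratio) function, in which "response gain" corresponds to scaling the response ceiling Rmax, whereas "contrast gain" shifts the semi-saturation contrast c50. The sketch below illustrates that distinction under those standard (assumed) definitions; all parameter values are made up and the parameterization itself is not stated in this record.

        import numpy as np

        def naka_rushton(c, r_max, c50, n, baseline=0.0):
            """Hyperbolic-ratio contrast-response function."""
            return baseline + r_max * c**n / (c**n + c50**n)

        contrast = np.linspace(0.01, 1.0, 100)    # luminance contrast, 0-1

        cs_minus = naka_rushton(contrast, r_max=1.0, c50=0.3, n=2.0)               # unpaired grating
        # Response gain: the curve is scaled multiplicatively, so amplification is
        # largest at high contrast, matching the selective high-contrast CS+ effect.
        cs_plus_response_gain = naka_rushton(contrast, r_max=1.4, c50=0.3, n=2.0)
        # Contrast gain, for comparison: the curve shifts leftward along the contrast axis.
        cs_plus_contrast_gain = naka_rushton(contrast, r_max=1.0, c50=0.2, n=2.0)

        print(cs_plus_response_gain[-1] / cs_minus[-1])                            # ~1.4x at full contrast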

  4. Using video playbacks to study visual communication in a marine fish, Salaria pavo.

    PubMed

    Gonçalves; Oliveira; Körner; Poschadel; Schlupp

    2000-09-01

    Video playbacks have been successfully applied to the study of visual communication in several groups of animals. However, this technique is controversial as video monitors are designed with the human visual system in mind. Differences between the visual capabilities of humans and other animals will lead to perceptually different interpretations of video images. We simultaneously presented males and females of the peacock blenny, Salaria pavo, with a live conspecific male and an online video image of the same individual. Video images failed to elicit appropriate responses. Males were aggressive towards the live male but not towards video images of the same male. Similarly, females courted only the live male and spent more time near this stimulus. In contrast, females of the gynogenetic poecilid Poecilia formosa showed an equal preference for a live and video image of a P. mexicana male, suggesting a response to live animals as strong as to video images. We discuss differences between the species that may explain their opposite reaction to video images. Copyright 2000 The Association for the Study of Animal Behaviour.

  5. Coarse-Scale Biases for Spirals and Orientation in Human Visual Cortex

    PubMed Central

    Heeger, David J.

    2013-01-01

    Multivariate decoding analyses are widely applied to functional magnetic resonance imaging (fMRI) data, but there is controversy over their interpretation. Orientation decoding in primary visual cortex (V1) reflects coarse-scale biases, including an over-representation of radial orientations. But fMRI responses to clockwise and counter-clockwise spirals can also be decoded. Because these stimuli are matched for radial orientation, while differing in local orientation, it has been argued that fine-scale columnar selectivity for orientation contributes to orientation decoding. We measured fMRI responses in human V1 to both oriented gratings and spirals. Responses to oriented gratings exhibited a complex topography, including a radial bias that was most pronounced in the peripheral representation, and a near-vertical bias that was most pronounced near the foveal representation. Responses to clockwise and counter-clockwise spirals also exhibited coarse-scale organization, at the scale of entire visual quadrants. The preference of each voxel for clockwise or counter-clockwise spirals was predicted from the preferences of that voxel for orientation and spatial position (i.e., within the retinotopic map). Our results demonstrate a bias for local stimulus orientation that has a coarse spatial scale, is robust across stimulus classes (spirals and gratings), and suffices to explain decoding from fMRI responses in V1. PMID:24336733
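
    The multivariate decoding referred to above usually amounts to training a linear classifier on voxel response patterns and testing it with cross-validation. The sketch below shows that generic procedure on simulated data in which a weak, consistent per-voxel bias distinguishes the two spiral directions; the dimensions, bias strength, and use of scikit-learn's logistic regression are assumptions for illustration only.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_trials, n_voxels = 120, 300
        labels = np.repeat([0, 1], n_trials // 2)        # 0 = clockwise, 1 = counter-clockwise

        # Simulated voxel patterns: a weak coarse-scale bias plus independent noise
        bias = 0.3 * rng.standard_normal(n_voxels)
        patterns = rng.standard_normal((n_trials, n_voxels)) + np.outer(labels, bias)

        clf = LogisticRegression(max_iter=1000)
        scores = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validated accuracy
        print(f"decoding accuracy: {scores.mean():.2f}")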

  6. Nonlinear dynamics of cortical responses to color in the human cVEP.

    PubMed

    Nunez, Valerie; Shapley, Robert M; Gordon, James

    2017-09-01

    The main finding of this paper is that the human visual cortex responds in a very nonlinear manner to the color contrast of pure color patterns. We examined human cortical responses to color checkerboard patterns at many color contrasts, measuring the chromatic visual evoked potential (cVEP) with a dense electrode array. Cortical topography of the cVEPs showed that they were localized near the posterior electrode at position Oz, indicating that the primary cortex (V1) was the major source of responses. The choice of fine spatial patterns as stimuli caused the cVEP response to be driven by double-opponent neurons in V1. The cVEP waveform revealed nonlinear color signal processing in the V1 cortex. The cVEP time-to-peak decreased and the waveform's shape was markedly narrower with increasing cone contrast. Comparison of the linear dynamics of retinal and lateral geniculate nucleus responses with the nonlinear dynamics of the cortical cVEP indicated that the nonlinear dynamics originated in the V1 cortex. The nature of the nonlinearity is a kind of automatic gain control that adjusts cortical dynamics to be faster when color contrast is greater.

  7. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex

    PubMed Central

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-01-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress made in our understanding of categorical information processing such as for objects and faces, little is known, however, about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2 × 2 × 2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole brain level, selective neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions with a face-selective right posterior STS area that also responded selectively to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that responded also to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions. PMID:23142071

  8. Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: an fMRI study.

    PubMed

    Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J

    2009-10-01

    Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

  9. The Anatomical and Functional Organization of the Human Visual Pulvinar

    PubMed Central

    Pinsk, Mark A.; Kastner, Sabine

    2015-01-01

    The pulvinar is the largest nucleus in the primate thalamus and contains extensive, reciprocal connections with visual cortex. Although the anatomical and functional organization of the pulvinar has been extensively studied in old and new world monkeys, little is known about the organization of the human pulvinar. Using high-resolution functional magnetic resonance imaging at 3 T, we identified two visual field maps within the ventral pulvinar, referred to as vPul1 and vPul2. Both maps contain an inversion of contralateral visual space with the upper visual field represented ventrally and the lower visual field represented dorsally. vPul1 and vPul2 border each other at the vertical meridian and share a representation of foveal space with iso-eccentricity lines extending across areal borders. Additional, coarse representations of contralateral visual space were identified within ventral medial and dorsal lateral portions of the pulvinar. Connectivity analyses on functional and diffusion imaging data revealed a strong distinction in thalamocortical connectivity between the dorsal and ventral pulvinar. The two maps in the ventral pulvinar were most strongly connected with early and extrastriate visual areas. Given the shared eccentricity representation and similarity in cortical connectivity, we propose that these two maps form a distinct visual field map cluster and perform related functions. The dorsal pulvinar was most strongly connected with parietal and frontal areas. The functional and anatomical organization observed within the human pulvinar was similar to the organization of the pulvinar in other primate species. SIGNIFICANCE STATEMENT The anatomical organization and basic response properties of the visual pulvinar have been extensively studied in nonhuman primates. Yet, relatively little is known about the functional and anatomical organization of the human pulvinar. Using neuroimaging, we found multiple representations of visual space within the ventral human pulvinar and extensive topographically organized connectivity with visual cortex. This organization is similar to other nonhuman primates and provides additional support that the general organization of the pulvinar is consistent across the primate phylogenetic tree. These results suggest that the human pulvinar, like other primates, is well positioned to regulate corticocortical communication. PMID:26156987

  10. The economics of motion perception and invariants of visual sensitivity.

    PubMed

    Gepshtein, Sergei; Tyukin, Ivan; Kubovy, Michael

    2007-06-21

    Neural systems face the challenge of optimizing their performance with limited resources, just as economic systems do. Here, we use tools of neoclassical economic theory to explore how a frugal visual system should use a limited number of neurons to optimize perception of motion. The theory prescribes that vision should allocate its resources to different conditions of stimulation according to the degree of balance between measurement uncertainties and stimulus uncertainties. We find that human vision approximately follows the optimal prescription. The equilibrium theory explains why human visual sensitivity is distributed the way it is and why qualitatively different regimes of apparent motion are observed at different speeds. The theory offers a new normative framework for understanding the mechanisms of visual sensitivity at the threshold of visibility and above the threshold and predicts large-scale changes in visual sensitivity in response to changes in the statistics of stimulation and system goals.

  11. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    PubMed

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggest that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  12. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion-sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  13. Patterned light flash evoked short latency activity in the visual system of visually normal and in amblyopic subjects.

    PubMed

    Sjöström, A; Abrahamsson, M

    1994-04-01

    In a previous experimental study on the anaesthetized cat, it was shown that a short-latency (35-40 ms) cortical potential changed polarity due to the presence or absence of a pattern in the flash stimulus. The results suggested one pathway of neuronal activation in the cortex to a pattern that was within the level of resolution and another to patterns that were not. It was implied that a similar difference in impulse transmission to pattern and non-pattern stimuli may be recorded in humans. The present paper describes recordings of the short-latency visual evoked response to varying light flash checkerboard pattern stimuli of high intensity in visually normal and amblyopic children and adults. When stimulating the normal eye, a visual evoked response potential with a peak latency between 35 and 40 ms showed a polarity change to patterned compared to non-patterned stimulation. The visual evoked response resolution limit could be correlated with a visual acuity of 0.5 and below. In amblyopic eyes, the shift in polarity was recorded at the acuity limit level. The latency of the pattern-dependent potential was increased in patients with amblyopia compared to normal subjects, but was not directly related to the degree of amblyopia. It is concluded that the short-latency visual evoked response, which mainly represents retino-geniculo-cortical activation, may be used to estimate visual resolution below the 0.5 acuity level. (ABSTRACT TRUNCATED AT 250 WORDS)

  14. Latent binocular function in amblyopia.

    PubMed

    Chadnova, Eva; Reynaud, Alexandre; Clavagnier, Simon; Hess, Robert F

    2017-11-01

    Recently, psychophysical studies have shown that humans with amblyopia do have binocular function that is not normally revealed due to dominant suppressive interactions under normal viewing conditions. Here we use magnetoencephalography (MEG) combined with dichoptic visual stimulation to investigate the underlying binocular function in humans with amblyopia for stimuli that, because of their temporal properties, would be expected to bypass suppressive effects and to reveal any underlying binocular function. We recorded contrast response functions in visual cortical area V1 of amblyopes and normal observers using a steady-state visually evoked response (SSVER) protocol. We used stimuli frequency-tagged at 4 Hz and 6 Hz, which allowed identification of the responses from each eye and were of a sufficiently high temporal frequency (>3 Hz) to bypass suppression. To characterize binocular function, we compared dichoptic masking between the two eyes in normal and amblyopic participants as well as interocular phase differences in the two groups. We observed that the primary visual cortex responds less to the stimulation of the amblyopic eye compared to the fellow eye. The pattern of interaction in the amblyopic visual system, however, was not significantly different between the amblyopic and fellow eyes, although the amblyopic suppressive interactions were lower than those observed in the binocular system of our normal observers. Furthermore, we identified an interocular processing delay of approximately 20 ms in our amblyopic group. To conclude, when suppression is greatly reduced, as is the case with our stimulation above 3 Hz, the amblyopic visual system exhibits a lack of binocular interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.
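
    For frequency-tagged responses like these, an interocular delay of the kind reported (approximately 20 ms) can be estimated by comparing the response phase at each eye's tag frequency: the delay is the phase difference divided by 2*pi times the tag frequency. The sketch below applies that conversion to synthetic signals; the sampling rate, recording length, and waveforms are assumed for illustration and this is not the authors' MEG pipeline.

      import numpy as np

      def tagged_phase(signal, fs, f_tag):
          """Phase (radians) of the Fourier component at the tag frequency."""
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          spectrum = np.fft.rfft(signal)
          return np.angle(spectrum[np.argmin(np.abs(freqs - f_tag))])

      fs, f_tag, delay_true = 600.0, 4.0, 0.020                 # Hz, Hz, seconds (assumed values)
      t = np.arange(0, 2.0, 1.0 / fs)
      fellow = np.sin(2 * np.pi * f_tag * t)                    # response driven by the fellow eye
      amblyope = np.sin(2 * np.pi * f_tag * (t - delay_true))   # same response delayed by 20 ms

      dphi = tagged_phase(fellow, fs, f_tag) - tagged_phase(amblyope, fs, f_tag)
      delay_est = dphi / (2 * np.pi * f_tag)                    # phase difference -> time delay
      print(f"estimated interocular delay: {delay_est * 1000:.1f} ms")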

  15. Visual Search Efficiency is Greater for Human Faces Compared to Animal Faces

    PubMed Central

    Simpson, Elizabeth A.; Mertins, Haley L.; Yee, Krysten; Fullerton, Alison; Jakobsen, Krisztina V.

    2015-01-01

    The Animate Monitoring Hypothesis proposes that humans and animals were the most important categories of visual stimuli for ancestral humans to monitor, as they presented important challenges and opportunities for survival and reproduction; however, it remains unknown whether animal faces are located as efficiently as human faces. We tested this hypothesis by examining whether human, primate, and mammal faces elicit similarly efficient searches, or whether human faces are privileged. In the first three experiments, participants located a target (human, primate, or mammal face) among distractors (non-face objects). We found fixations on human faces were faster and more accurate than primate faces, even when controlling for search category specificity. A final experiment revealed that, even when task-irrelevant, human faces slowed searches for non-faces, suggesting some bottom-up processing may be responsible for the human face search efficiency advantage. PMID:24962122

  16. Electrophysiological indices of surround suppression in humans

    PubMed Central

    Vanegas, M. Isabel; Blangero, Annabelle

    2014-01-01

    Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464
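
    The central measurement in this paradigm is the amplitude of the EEG at the foreground flicker frequency for each surround condition. A minimal, lock-in-style readout on synthetic data is sketched below; the 25 Hz tag, the assumed suppression law used to generate the toy data, and the noise level are illustrative assumptions rather than details of the reported experiment.

      import numpy as np

      def lockin_amplitude(eeg, t, f_tag):
          """Single-frequency Fourier amplitude (lock-in style) at the tag frequency."""
          phasor = np.exp(-2j * np.pi * f_tag * t)
          return 2.0 * np.abs(np.mean(eeg * phasor))

      fs, f_tag = 500.0, 25.0                       # sampling rate and foreground flicker rate (assumed)
      t = np.arange(0, 4.0, 1.0 / fs)
      rng = np.random.default_rng(0)

      # Toy data: the foreground response shrinks monotonically with surround contrast
      for surround_contrast in (0.0, 0.25, 0.5, 1.0):
          gain = 1.0 / (1.0 + 2.0 * surround_contrast)      # assumed suppression law, illustration only
          eeg = gain * np.sin(2 * np.pi * f_tag * t) + 0.3 * rng.standard_normal(t.size)
          print(f"surround contrast {surround_contrast:.2f}: "
                f"foreground SSVEP amplitude ~ {lockin_amplitude(eeg, t, f_tag):.2f} (a.u.)")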

  17. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation, which occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system, which can be used as a 'mapping device' and also as a 'multi-channel spectrophotometer'. In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating left and right primary visual cortex independently by showing sector-shaped checkerboards sequentially over the contralateral visual field resulted in corresponding changes in the hemodynamics observed by 'mapping' measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation with each other. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that the optical method can provide data for 3D analysis of human brain functions.

  18. The effect of human engagement depicted in contextual photographs on the visual attention patterns of adults with traumatic brain injury.

    PubMed

    Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen

    2017-09-01

    Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera- (i.e., depicted human figure looking toward camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to accurately and objectively measure visual fixations. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and fixate more on objects in which a human figure was task-engaged than when a human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task-engagement appears to have a guiding effect on visual attention that may be of benefit to SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age and gender matched peers without neurological impairments indicates that these two groups find similar photograph regions to be worthy of visual fixation. Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task-engagement may be a beneficial component of contextual photographs. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Integration of visual and motion cues for simulator requirements and ride quality investigation. [computerized simulation of aircraft landing, visual perception of aircraft pilots

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1975-01-01

    Preliminary tests and evaluation of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, acceleration stimuli effects on the semicircular canals, neurons affecting eye movements, and vestibular tests.

  20. Neural responses to salient visual stimuli.

    PubMed Central

    Morris, J S; Friston, K J; Dolan, R J

    1997-01-01

    The neural mechanisms involved in the selective processing of salient or behaviourally important stimuli are uncertain. We used an aversive conditioning paradigm in human volunteer subjects to manipulate the salience of visual stimuli (emotionally expressive faces) presented during positron emission tomography (PET) neuroimaging. Increases in salience, and conflicts between the innate and acquired value of the stimuli, produced augmented activation of the pulvinar nucleus of the right thalamus. Furthermore, this pulvinar activity correlated positively with responses in structures hypothesized to mediate value in the brain: the right amygdala and basal forebrain (including the cholinergic nucleus basalis of Meynert). The results provide evidence that the pulvinar nucleus of the thalamus plays a crucial modulatory role in selective visual processing, and that changes in perceptual salience are mediated by value-dependent plasticity in pulvinar responses. PMID:9178546

  1. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the open question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  2. Decoding brain responses to pixelized images in the primary visual cortex: implications for visual cortical prostheses

    PubMed Central

    Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming

    2015-01-01

    Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only “see” pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. There were 100 voxels in the brain activation pattern that were selected from the primary visual cortex, and voxel size was 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test if these 18 different brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that the specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
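
    The multi-voxel pattern analysis described here amounts to training a linear support vector machine on 100-voxel response patterns and asking whether cross-validated classification of the 18 image conditions exceeds the 1/18 chance level. A schematic sketch with simulated patterns follows (using scikit-learn); the number of repetitions per image, the noise level, and the cross-validation scheme are assumptions for illustration.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_images, n_repeats, n_voxels = 18, 10, 100     # 18 pixelized images, 100-voxel patterns (repeats assumed)

      # Simulated data: each image evokes a distinct mean pattern plus trial-to-trial noise
      prototypes = rng.standard_normal((n_images, n_voxels))
      X = np.repeat(prototypes, n_repeats, axis=0) + 0.8 * rng.standard_normal((n_images * n_repeats, n_voxels))
      y = np.repeat(np.arange(n_images), n_repeats)

      clf = LinearSVC(C=1.0, max_iter=10000)
      scores = cross_val_score(clf, X, y, cv=5)       # 5-fold cross-validated decoding accuracy
      print(f"mean decoding accuracy: {scores.mean():.2f} (chance = {1 / n_images:.2f})")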

  3. A Role for MST Neurons in Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Perrone, J. A.

    1994-01-01

    A template model of human visual self-motion perception, which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions. We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: (1) sensitivity to non-preferred flow: each detector is tuned to a specific combination of flow components, and its response is systematically reduced by the addition of non-preferred flow; and (2) position invariance: the detectors maintain their apparent preference for particular flow components over large regions of their receptive fields. It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model, however, demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.
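
    The template idea can be sketched as a bank of heading detectors, each storing the radial flow field expected for one heading and responding in proportion to its match with the input flow, with the estimated heading read out from the most active detector. The grid sizes, the unit-normalized radial templates, and the dot-product matching rule below are simplifications for illustration, not the model itself.

      import numpy as np

      # Sample image locations (deg) on a coarse grid
      xs, ys = np.meshgrid(np.linspace(-20, 20, 9), np.linspace(-20, 20, 9))

      def radial_flow(heading_x, heading_y):
          """Unit-normalized radial flow field for pure forward translation
          with the focus of expansion at (heading_x, heading_y)."""
          u, v = xs - heading_x, ys - heading_y
          norm = np.hypot(u, v) + 1e-9
          return np.stack([u / norm, v / norm])

      # Bank of heading detectors: one radial template per candidate heading on a grid
      candidates = [(hx, hy) for hx in np.linspace(-15, 15, 7) for hy in np.linspace(-15, 15, 7)]
      templates = [radial_flow(hx, hy) for hx, hy in candidates]

      # Input: flow produced by a true heading of (5, -5) deg, plus noise
      rng = np.random.default_rng(1)
      flow = radial_flow(5.0, -5.0) + 0.2 * rng.standard_normal((2,) + xs.shape)

      responses = [np.sum(tpl * flow) for tpl in templates]    # dot-product match per detector
      hx, hy = candidates[int(np.argmax(responses))]
      print(f"estimated heading: ({hx:.0f}, {hy:.0f}) deg")    # recovers (5, -5) for modest noise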

  4. Dynamics of normalization underlying masking in human visual cortex.

    PubMed

    Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M

    2012-02-22

    Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady-state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies, and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
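
    The reported contrast-contrast invariance follows naturally from a divisive gain control in which the test drive is normalized by a pool that includes the mask. A static toy version, R = c_test^n / (c_test^n + c_mask^n + sigma^n), is shown below; the exponent, the semi-saturation constant, and the omission of the ~30 ms temporal integration are simplifying assumptions, not the fitted model.

      def normalized_response(c_test, c_mask, n=2.0, sigma=0.02):
          """Static divisive gain control: test drive divided by a pool that includes the mask."""
          return c_test**n / (c_test**n + c_mask**n + sigma**n)

      # Contrast-contrast invariance: scaling both contrasts by the same factor barely
      # changes the response once both are well above sigma.
      for scale in (1.0, 2.0, 4.0):
          c_test, c_mask = 0.10 * scale, 0.20 * scale
          print(f"c_test={c_test:.2f}, c_mask={c_mask:.2f} -> R={normalized_response(c_test, c_mask):.3f}")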

  5. Arterial spin labeling fMRI measurements of decreased blood flow in primary visual cortex correlates with decreased visual function in human glaucoma.

    PubMed

    Duncan, Robert O; Sample, Pamela A; Bowd, Christopher; Weinreb, Robert N; Zangwill, Linda M

    2012-05-01

    Altered metabolic activity has been identified as a potential contributing factor to the neurodegeneration associated with primary open angle glaucoma (POAG). Consequently, we sought to determine whether there is a relationship between the loss of visual function in human glaucoma and resting blood perfusion within primary visual cortex (V1). Arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI) was conducted in 10 participants with POAG. Resting cerebral blood flow (CBF) was measured from dorsal and ventral V1. Behavioral measurements of visual function were obtained using standard automated perimetry (SAP), short-wavelength automated perimetry (SWAP), and frequency-doubling technology perimetry (FDT). Measurements of CBF were compared to differences in visual function for the superior and inferior hemifield. Differences in CBF between ventral and dorsal V1 were correlated with differences in visual function for the superior versus inferior visual field. A statistical bootstrapping analysis indicated that the observed correlations between fMRI responses and measurements of visual function for SAP (r=0.49), SWAP (r=0.63), and FDT (r=0.43) were statistically significant (all p<0.05). Resting blood perfusion in human V1 is correlated with the loss of visual function in POAG. Altered CBF may be a contributing factor to glaucomatous optic neuropathy, or it may be an indication of post-retinal glaucomatous neurodegeneration caused by damage to the retinal ganglion cells. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Attentional Preference and Experience: II. An Exploratory Longitudinal Study of the Effects of Visual Familiarity and Responsiveness.

    ERIC Educational Resources Information Center

    Uzgiris, Ina C.; Hunt, J. McV.

    The human infant is now considered capable of active informational interaction with the environment. This study tested certain hypotheses concerning the nature of that interaction. These hypotheses, developed partly from Piaget's work, are (1) that repeated visual encounters with a stimulus pattern lead first to attentional preference for that…

  7. Steady-state visually evoked potential correlates of human body perception.

    PubMed

    Giabbiconi, Claire-Marie; Jurilj, Verena; Gruber, Thomas; Vocks, Silja

    2016-11-01

    In cognitive neuroscience, interest in the neuronal basis underlying the processing of human bodies is steadily increasing. Based on functional magnetic resonance imaging studies, it is assumed that the processing of pictures of human bodies is anchored in a network of specialized brain areas comprising the extrastriate and the fusiform body area (EBA, FBA). An alternative way to examine the dynamics within these networks is electroencephalography, more specifically so-called steady-state visually evoked potentials (SSVEPs). In SSVEP tasks, a visual stimulus is presented repetitively at a predefined flickering rate and typically elicits a continuous oscillatory brain response at this frequency. This brain response is characterized by an excellent signal-to-noise ratio, a major advantage for source reconstructions. The main goal of the present study was to demonstrate the feasibility of this method for studying human body perception. To that end, we presented pictures of bodies and contrasted the resulting SSVEPs to two control conditions, i.e., non-objects and pictures of everyday objects (chairs). We found specific SSVEP amplitude differences between bodies and both control conditions. Source reconstructions localized the SSVEP generators to a network of temporal, occipital and parietal areas. Interestingly, only body perception resulted in activity differences in middle temporal and lateral occipitotemporal areas, most likely reflecting the EBA/FBA.

  8. Human postural responses to motion of real and virtual visual environments under different support base conditions.

    PubMed

    Mergner, T; Schweigart, G; Maurer, C; Blümle, A

    2005-12-01

    The role of visual orientation cues for human control of upright stance is still not well understood. We, therefore, investigated stance control during motion of a visual scene as stimulus, varying the stimulus parameters and the contribution from other senses (vestibular and leg proprioceptive cues present or absent). Eight normal subjects and three patients with chronic bilateral loss of vestibular function participated. They stood on a motion platform inside a cabin with an optokinetic pattern on its interior walls. The cabin was sinusoidally rotated in anterior-posterior (a-p) direction with the horizontal rotation axis through the ankle joints (f = 0.05-0.4 Hz; A_max = 0.25-4 degrees; v_max = 0.08-10 degrees/s). The subjects' centre of mass (COM) angular position was calculated from opto-electronically measured body sway parameters. The platform was either kept stationary or moved by coupling its position 1:1 to a-p hip position ('body sway referenced', BSR, platform condition), by which proprioceptive feedback of ankle joint angle became inactivated. The visual stimulus evoked in-phase COM excursions (visual responses) in all subjects. (1) In normal subjects on a stationary platform, the visual responses showed saturation with both increasing velocity and displacement of the visual stimulus. The saturation showed up abruptly when visually evoked COM velocity and displacement reached approximately 0.1 degrees/s and 0.1 degrees, respectively. (2) In normal subjects on a BSR platform (proprioceptive feedback disabled), the visual responses showed similar saturation characteristics, but at clearly higher COM velocity and displacement values (approximately 1 degree/s and 1 degree, respectively). (3) In patients on a stationary platform (no vestibular cues), the visual responses were basically similar to those of the normal subjects, apart from somewhat higher gain values and less-pronounced saturation effects. (4) In patients on a BSR platform (no vestibular and proprioceptive cues, presumably only somatosensory graviceptive and visual cues), the visual responses showed an abnormal increase in gain with increasing stimulus frequency in addition to a displacement saturation. On the normal subjects we performed additional experiments in which we varied the gain of the visual response by using a 'virtual reality' visual stimulus or by applying small lateral platform tilts. This did not affect the saturation characteristics of the visual response to a considerable degree. We compared the present results to previous psychophysical findings on motion perception, noting similarities of the saturation characteristics in (1) with leg proprioceptive detection thresholds of approximately 0.1 degrees/s and 0.1 degrees and those in (2) with vestibular detection thresholds of 1 degree/s and 1 degree, respectively. From the psychophysical data one might hypothesise that a proprioceptive postural mechanism limits the visually evoked body excursions if these excursions exceed 0.1 degrees/s and 0.1 degrees in condition (1) and that a vestibular mechanism is doing so at 1 degree/s and 1 degree in (2). To better understand this, we performed computer simulations using a posture control model with multiple sensory feedbacks. We had recently designed the model to describe postural responses to body pull and platform tilt stimuli. Here, we added a visual input and adjusted its gain to fit the simulated data to the experimental data. The saturation characteristics of the visual responses of the normals were well mimicked by the simulations. They were caused by central thresholds of proprioceptive, vestibular and somatosensory signals in the model, which, however, differed from the psychophysical thresholds. Yet, we demonstrate in a theoretical approach that for condition (1) the model can be made monomodal proprioceptive with the psychophysical 0.1 degrees/s and 0.1 degrees thresholds, and for (2) monomodal vestibular with the psychophysical 1 degree/s and 1 degree thresholds, and still shows the corresponding saturation characteristics (whereas our original model covers both conditions without adjustments). The model simulations also predicted the almost normal visual responses of patients on a stationary platform and their clearly abnormal responses on a BSR platform.

  9. Comparing capacity coefficient and dual task assessment of visual multitasking workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie M.

    Capacity coefficient analysis could offer a theoretically grounded alternative approach to subjective measures and dual task assessment of cognitive workload. Workload capacity or workload efficiency is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system given a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
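
    Concretely, the OR-type capacity coefficient compares the integrated hazard of response times under multiple simultaneous tasks with the sum of the single-task integrated hazards, where the integrated hazard is H(t) = -log S(t) and S(t) is the empirical survivor function. A minimal estimator on simulated response times is sketched below; the shifted-exponential response-time distributions and all numerical values are assumptions for illustration, not data from the modified Multi-attribute Task Battery.

      import numpy as np

      def integrated_hazard(rts, t_grid):
          """H(t) = -log S(t), with S(t) the empirical survivor function of the response times."""
          rts = np.asarray(rts)
          survivor = np.array([(rts > t).mean() for t in t_grid])
          survivor = np.clip(survivor, 1e-6, 1.0)          # avoid log(0) in the right tail
          return -np.log(survivor)

      def capacity_coefficient(rts_dual, rts_a, rts_b, t_grid):
          """OR-type capacity coefficient: C(t) = H_dual(t) / (H_a(t) + H_b(t))."""
          return integrated_hazard(rts_dual, t_grid) / (
              integrated_hazard(rts_a, t_grid) + integrated_hazard(rts_b, t_grid))

      rng = np.random.default_rng(2)
      rts_a = 0.35 + rng.exponential(0.15, 500)            # single-task RTs in seconds (simulated)
      rts_b = 0.35 + rng.exponential(0.15, 500)
      rts_dual = 0.35 + rng.exponential(0.11, 500)         # dual-task RTs (assumed distribution)

      t_grid = np.linspace(0.4, 0.9, 6)
      for t, c in zip(t_grid, capacity_coefficient(rts_dual, rts_a, rts_b, t_grid)):
          print(f"t = {t:.2f} s: C(t) = {c:.2f}")          # C(t) < 1 indicates limited capacity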

  10. Hemisphere-Dependent Attentional Modulation of Human Parietal Visual Field Representations

    PubMed Central

    Silver, Michael A.

    2015-01-01

    Posterior parietal cortex contains several areas defined by topographically organized maps of the contralateral visual field. However, recent studies suggest that ipsilateral stimuli can elicit larger responses in the right than left hemisphere within these areas, depending on task demands. Here we determined the effects of spatial attention on the set of visual field locations (the population receptive field [pRF]) that evoked a response for each voxel in human topographic parietal cortex. A two-dimensional Gaussian was used to model the pRF in each voxel, and we measured the effects of attention on not only the center (preferred visual field location) but also the size (visual field extent) of the pRF. In both hemispheres, larger pRFs were associated with attending to the mapping stimulus compared with attending to a central fixation point. In the left hemisphere, attending to the stimulus also resulted in more peripheral preferred locations of contralateral representations, compared with attending fixation. These effects of attention on both pRF size and preferred location preserved contralateral representations in the left hemisphere. In contrast, attentional modulation of pRF size but not preferred location significantly increased representation of the ipsilateral (right) visual hemifield in right parietal cortex. Thus, attention effects in topographic parietal cortex exhibit hemispheric asymmetries similar to those seen in hemispatial neglect. Our findings suggest potential mechanisms underlying the behavioral deficits associated with this disorder. PMID:25589746
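
    In the pRF approach used here, each voxel's response to the mapping stimulus is modeled as the overlap between the stimulus aperture and a two-dimensional Gaussian whose fitted center gives the preferred visual-field location and whose standard deviation gives the pRF size. The sketch below shows only the forward prediction step on a toy bar-sweep stimulus (no hemodynamic convolution and no fitting loop); the grid size, field extent, and pRF parameters are assumed values.

      import numpy as np

      def prf_prediction(apertures, x0, y0, sigma, extent=10.0):
          """Predicted response time course: overlap of each binary aperture with a 2D Gaussian pRF.

          apertures : array (n_timepoints, n_pix, n_pix) of 0/1 stimulus masks
          x0, y0    : pRF center in degrees of visual angle
          sigma     : pRF size (Gaussian standard deviation, degrees)
          """
          n_pix = apertures.shape[-1]
          xs, ys = np.meshgrid(np.linspace(-extent, extent, n_pix),
                               np.linspace(-extent, extent, n_pix))
          prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
          return apertures.reshape(len(apertures), -1) @ prf.ravel()

      # Toy mapping stimulus: a vertical bar sweeping left to right across a 40x40 grid
      n_pix, n_steps = 40, 8
      apertures = np.zeros((n_steps, n_pix, n_pix))
      for i in range(n_steps):
          apertures[i, :, i * 5:i * 5 + 5] = 1.0

      pred = prf_prediction(apertures, x0=3.0, y0=0.0, sigma=2.0)
      print("predicted time course:", np.round(pred, 1))   # peaks when the bar covers the pRF center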

  11. Robust selectivity to two-object images in human visual cortex

    PubMed Central

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105

  12. Simultaneous chromatic and luminance human electroretinogram responses

    PubMed Central

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-01-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
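
    The logic of the compound stimulus is that the chromatic component modulates at a base frequency f while the luminance component modulates at 2f, so that in the ERG spectrum the first harmonic can be assigned to the chromatic pathway and the second harmonic to the luminance pathway. A toy demonstration of that frequency assignment on a synthetic waveform is given below; the frequencies, amplitudes, and phases are illustrative assumptions, not the recorded data.

      import numpy as np

      fs, f_chrom = 1000.0, 4.0            # sampling rate and chromatic modulation frequency (assumed)
      f_lum = 2 * f_chrom                  # luminance modulation runs at twice the chromatic frequency
      t = np.arange(0, 2.0, 1.0 / fs)

      # Toy ERG: a chromatic-pathway response at f plus a luminance-pathway response at 2f
      erg = 1.0 * np.sin(2 * np.pi * f_chrom * t) + 0.6 * np.sin(2 * np.pi * f_lum * t + 0.5)

      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      amps = np.abs(np.fft.rfft(erg)) * 2.0 / t.size
      for label, f in (("1st harmonic (chromatic)", f_chrom), ("2nd harmonic (luminance)", f_lum)):
          print(f"{label}: {amps[np.argmin(np.abs(freqs - f))]:.2f}")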

  13. Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex.

    PubMed

    Jansen, Michael; Li, Xiaobing; Lashgari, Reza; Kremkow, Jens; Bereshpolova, Yulia; Swadlow, Harvey A; Zaidi, Qasim; Alonso, Jose-Manuel

    2015-10-01

    Local field potentials (LFPs) have become an important measure of neuronal population activity in the brain and could provide robust signals to guide the implant of visual cortical prosthesis in the future. However, it remains unclear whether LFPs can detect weak cortical responses (e.g., cortical responses to equiluminant color) and whether they have enough visual spatial resolution to distinguish different chromatic and achromatic stimulus patterns. By recording from awake behaving macaques in primary visual cortex, here we demonstrate that LFPs respond robustly to pure chromatic stimuli and exhibit ∼2.5 times lower spatial resolution for chromatic than achromatic stimulus patterns, a value that resembles the ratio of achromatic/chromatic resolution measured with psychophysical experiments in humans. We also show that, although the spatial resolution of LFP decays with visual eccentricity as is also the case for single neurons, LFPs have higher spatial resolution and show weaker response suppression to low spatial frequencies than spiking multiunit activity. These results indicate that LFP recordings are an excellent approach to measure spatial resolution from local populations of neurons in visual cortex including those responsive to color. © The Author 2014. Published by Oxford University Press.

  14. Visual- and Vestibular-Autonomic Influence on Short-Term Cardiovascular Regulatory Mechanisms

    NASA Technical Reports Server (NTRS)

    Mullen, Thomas J.; Ramsdell, Craig D.

    1999-01-01

    This synergy project was a one-year effort conducted cooperatively by members of the NSBRI Cardiovascular Alterations and Neurovestibular Adaptation Teams in collaboration with NASA Johnson Space Center (JSC) colleagues. The objective of this study was to evaluate visual autonomic interactions on short-term cardiovascular regulatory mechanisms. Based on established visual-vestibular and vestibular-autonomic shared neural pathways, we hypothesized that visually induced changes in orientation will trigger autonomic cardiovascular reflexes. A second objective was to compare baroreflex changes during postural changes as measured with the new Cardiovascular System Identification (CSI) technique with those measured using a neck barocuff. While the neck barocuff stimulates only the carotid baroreceptors, CSI provides a measure of overall baroreflex responsiveness. This study involved a repeated measures design with 16 healthy human subjects (8 M, 8 F) to examine cardiovascular regulatory responses during actual and virtual head-upright tilts. Baroreflex sensitivity was first evaluated with subjects in supine and upright positions during actual tilt-table testing using both neck barocuff and CSI methods. The responses to actual tilts during this first session were then compared to responses during visually induced tilt and/or rotation obtained during a second session.

  15. [Sensory loss and brain reorganization].

    PubMed

    Fortin, Madeleine; Voss, Patrice; Lassonde, Maryse; Lepore, Franco

    2007-11-01

    It is without a doubt that humans are first and foremost visual beings. Even though the other sensory modalities provide us with valuable information, it is vision that generally offers the most reliable and detailed information concerning our immediate surroundings. It is therefore not surprising that nearly a third of the human brain processes, in one way or another, visual information. But what happens when the visual information no longer reaches these brain regions responsible for processing it? Indeed, numerous medical conditions such as congenital glaucoma, retinitis pigmentosa and retinal detachment, to name a few, can disrupt the visual system and lead to blindness. So, do the brain areas responsible for processing visual stimuli simply shut down and become non-functional? Do they become dead weight and simply stop contributing to cognitive and sensory processes? Current data suggest that this is not the case. Quite the contrary, it would seem that congenitally blind individuals benefit from the recruitment of these areas by other sensory modalities to carry out non-visual tasks. In fact, our laboratory has been studying blindness and its consequences on both the brain and behaviour for many years now. We have shown that blind individuals demonstrate exceptional hearing abilities. This finding holds true for stimuli originating from both near and far space. It also holds true, under certain circumstances, for those who lost their sight later in life, beyond a period generally believed to limit the brain changes following the loss of sight. In the case of the early blind, we have shown their ability to localize sounds is strongly correlated with activity in the occipital cortex (the location of visual processing), demonstrating that these areas are functionally engaged by the task. Therefore, it would seem that the plastic nature of the human brain allows them to make new use of the cerebral areas normally dedicated to visual processing.

  16. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

    Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849

  17. Cortical Representations of Symbols, Objects, and Faces Are Pruned Back during Early Childhood

    PubMed Central

    Pinel, Philippe; Dehaene, Stanislas; Pelphrey, Kevin A.

    2011-01-01

    Regions of human ventral extrastriate visual cortex develop specializations for natural categories (e.g., faces) and cultural artifacts (e.g., words). In adults, category-based specializations manifest as greater neural responses in visual regions of the brain (e.g., fusiform gyrus) to some categories over others. However, few studies have examined how these specializations originate in the brains of children. Moreover, it is as yet unknown whether the development of visual specializations hinges on “increases” in the response to the preferred categories, “decreases” in the responses to nonpreferred categories, or “both.” This question is relevant to a long-standing debate concerning whether neural development is driven by building up or pruning back representations. To explore these questions, we measured patterns of visual activity in 4-year-old children for 4 categories (faces, letters, numbers, and shoes) using functional magnetic resonance imaging. We report 2 key findings regarding the development of visual categories in the brain: 1) the categories “faces” and “symbols” doubly dissociate in the fusiform gyrus before children can read and 2) the development of category-specific responses in young children depends on cortical responses to nonpreferred categories that decrease as preferred category knowledge is acquired. PMID:20457691

  18. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
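
    The front end of such a model can be sketched as a bank of linear feature sensors, for example Gabor-like filters differing in spatial frequency, orientation, and phase, whose responses feed a decision rule. In the sketch below a simple maximum-response rule stands in for the ideal Bayesian classifier, and the filter parameters, image size, and noise level are assumptions for illustration rather than the model's actual specification.

      import numpy as np

      def gabor(size, freq, theta, phase, sigma):
          """A linear 'feature sensor': a Gabor patch with frequency (cycles/pixel),
          orientation (radians), phase (radians) and Gaussian envelope sigma (pixels)."""
          half = size // 2
          ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
          xr = xs * np.cos(theta) + ys * np.sin(theta)
          env = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
          return env * np.cos(2 * np.pi * freq * xr + phase)

      # A small sensor bank differing in frequency, orientation and phase
      bank = [gabor(33, f, th, ph, sigma=6)
              for f in (0.05, 0.1, 0.2)
              for th in (0, np.pi / 4, np.pi / 2)
              for ph in (0, np.pi / 2)]

      # Detection of a faint grating in noise via the maximum sensor response
      rng = np.random.default_rng(3)
      target = 0.2 * gabor(33, 0.1, np.pi / 2, 0, sigma=6)
      for label, image in (("noise only", np.zeros((33, 33))), ("noise + target", target)):
          image = image + 0.05 * rng.standard_normal((33, 33))
          responses = [abs(np.sum(s * image)) for s in bank]
          # the matched sensor dominates when the target is present
          print(f"{label}: max |sensor response| = {max(responses):.1f}")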

  19. Common neural substrates for visual working memory and attention.

    PubMed

    Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J

    2007-06-01

    Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula blood oxygen-level-dependent activation additively increased with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention require to a high degree access to common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.

  20. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  1. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/ visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python etc) and efficient use of legacy code.

  2. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.

  3. Acuity-independent effects of visual deprivation on human visual cortex

    PubMed Central

    Hou, Chuan; Pettet, Mark W.; Norcia, Anthony M.

    2014-01-01

    Visual development depends on sensory input during an early developmental critical period. Deviation of the pointing direction of the two eyes (strabismus) or chronic optical blur (anisometropia) separately and together can disrupt the formation of normal binocular interactions and the development of spatial processing, leading to a loss of stereopsis and visual acuity known as amblyopia. To shed new light on how these two different forms of visual deprivation affect the development of visual cortex, we used event-related potentials (ERPs) to study the temporal evolution of visual responses in patients who had experienced either strabismus or anisometropia early in life. To make a specific statement about the locus of deprivation effects, we took advantage of a stimulation paradigm in which we could measure deprivation effects that arise either before or after a configuration-specific response to illusory contours (ICs). Extraction of ICs is known to first occur in extrastriate visual areas. Our ERP measurements indicate that deprivation via strabismus affects both the early part of the evoked response that occurs before ICs are formed as well as the later IC-selective response. Importantly, these effects are found in the normal-acuity nonamblyopic eyes of strabismic amblyopes and in both eyes of strabismic patients without amblyopia. The nonamblyopic eyes of anisometropic amblyopes, by contrast, are normal. Our results indicate that beyond the well-known effects of strabismus on the development of normal binocularity, it also affects the early stages of monocular feature processing in an acuity-independent fashion. PMID:25024230

  4. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

    In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different nature and reducing the variability of response. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but little research has studied the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance from the source is affected by the presence of visual information and that subjects can store in their memory a representation of the environment that later improves the perception of distance.

  5. Act quickly, decide later: long-latency visual processing underlies perceptual decisions but not reflexive behavior.

    PubMed

    Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F

    2011-12-01

    Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene-segmentation-related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive but erroneous prosaccades, though not antisaccades, can be triggered by earlier visual processes. In other words: the brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.

  6. Pilot response to peripheral vision cues during instrument flying tasks.

    DOT National Transportation Integrated Search

    1968-02-01

    In an attempt to more closely associate the visual aspects of instrument flying with that of contact flight, a study was made of human response to peripheral vision cues relating to aircraft roll attitude. Pilots, ranging from 52 to 12,000 flying hou...

  7. Haptic perception and body representation in lateral and medial occipito-temporal cortices.

    PubMed

    Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M

    2011-04-01

    Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. The effect of early visual deprivation on the neural bases of multisensory processing.

    PubMed

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2015-06-01

    Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Prior Knowledge about Objects Determines Neural Color Representation in Human Visual Cortex.

    PubMed

    Vandenbroucke, A R E; Fahrenfort, J J; Meuwese, J D I; Scholte, H S; Lamme, V A F

    2016-04-01

    To create subjective experience, our brain must translate physical stimulus input by incorporating prior knowledge and expectations. For example, we perceive color and not wavelength information, and this in part depends on our past experience with colored objects (Hansen et al. 2006; Mitterer and de Ruiter 2008). Here, we investigated the influence of object knowledge on the neural substrates underlying subjective color vision. In a functional magnetic resonance imaging experiment, human subjects viewed a color that lay midway between red and green (ambiguous with respect to its distance from red and green) presented on either typical red (e.g., tomato), typical green (e.g., clover), or semantically meaningless (nonsense) objects. Using decoding techniques, we could predict whether subjects viewed the ambiguous color on typical red or typical green objects based on the neural responses to veridical red and green. This shift of neural response for the ambiguous color did not occur for nonsense objects. The modulation of neural responses was observed in visual areas (V3, V4, VO1, lateral occipital complex) involved in color and object processing, as well as frontal areas. This demonstrates that object memory influences wavelength information relatively early in the human visual system to produce subjective color vision. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Are neural correlates of visual consciousness retinotopic?

    PubMed

    ffytche, Dominic H; Pins, Delphine

    2003-11-14

    Some visual neurons code what we see, their defining characteristic being a response profile which mirrors conscious percepts rather than veridical sensory attributes. One issue yet to be resolved is whether, within a given cortical area, conscious visual perception relates to diffuse activity across the entire population of such cells or focal activity within the sub-population mapping the location of the perceived stimulus. Here we investigate the issue in the human brain with fMRI, using a threshold stimulation technique to dissociate perceptual from non-perceptual activity. Our results point to a retinotopic organisation of perceptual activity in early visual areas, with independent perceptual activations for different regions of visual space.

  11. Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.

    PubMed

    Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David

    2016-03-21

    Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol-designed to induce activity in V1, without modulation from visual awareness-to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Sharpening vision by adapting to flicker.

    PubMed

    Arnold, Derek H; Williams, Jeremy D; Phipps, Natasha E; Goodale, Melvyn A

    2016-11-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision-allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because "blur" signals are mitigated.

  13. Sharpening vision by adapting to flicker

    PubMed Central

    Arnold, Derek H.; Williams, Jeremy D.; Phipps, Natasha E.; Goodale, Melvyn A.

    2016-01-01

    Human vision is surprisingly malleable. A static stimulus can seem to move after prolonged exposure to movement (the motion aftereffect), and exposure to tilted lines can make vertical lines seem oppositely tilted (the tilt aftereffect). The paradigm used to induce such distortions (adaptation) can provide powerful insights into the computations underlying human visual experience. Previously spatial form and stimulus dynamics were thought to be encoded independently, but here we show that adaptation to stimulus dynamics can sharpen form perception. We find that fast flicker adaptation (FFAd) shifts the tuning of face perception to higher spatial frequencies, enhances the acuity of spatial vision—allowing people to localize inputs with greater precision and to read finer scaled text, and it selectively reduces sensitivity to coarse-scale form signals. These findings are consistent with two interrelated influences: FFAd reduces the responsiveness of magnocellular neurons (which are important for encoding dynamics, but can have poor spatial resolution), and magnocellular responses contribute coarse spatial scale information when the visual system synthesizes form signals. Consequently, when magnocellular responses are mitigated via FFAd, human form perception is transiently sharpened because “blur” signals are mitigated. PMID:27791115

  14. Visuals and Visualisation of Human Body Systems

    ERIC Educational Resources Information Center

    Mathai, Sindhu; Ramadas, Jayashree

    2009-01-01

    This paper explores the role of diagrams and text in middle school students' understanding and visualisation of human body systems. We develop a common framework based on structure and function to assess students' responses across diagram and verbal modes. Visualisation is defined in terms of understanding transformations on structure and relating…

  15. Modeling human perception and estimation of kinematic responses during aircraft landing

    NASA Technical Reports Server (NTRS)

    Schmidt, David K.; Silk, Anthony B.

    1988-01-01

    The thrust of this research is to determine estimation accuracy of aircraft responses based on observed cues. By developing the geometric relationships between the outside visual scene and the kinematics during landing, visual and kinesthetic cues available to the pilot were modeled. Both foveal and peripheral vision were examined. The objective was to first determine estimation accuracy in a variety of flight conditions, and second to ascertain which parameters are most important and lead to the best achievable accuracy in estimating the actual vehicle response. It was found that altitude estimation was very sensitive to the field of view (FOV). For this model, the motion cue of perceived vertical acceleration was shown to be less important than the visual cues. The inclusion of runway geometry in the visual scene increased estimation accuracy in most cases. Finally, it was shown that for this model, if the pilot has an incorrect internal model of the system kinematics, the choice of observations thought to be 'optimal' may in fact be suboptimal.

  16. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  17. Explaining the Colavita visual dominance effect.

    PubMed

    Spence, Charles

    2009-01-01

    The last couple of years have seen a resurgence of interest in the Colavita visual dominance effect. In the basic experimental paradigm, a random series of auditory, visual, and audiovisual stimuli are presented to participants who are instructed to make one response whenever they see a visual target and another response whenever they hear an auditory target. Many studies have now shown that participants sometimes fail to respond to auditory targets when they are presented at the same time as visual targets (i.e., on the bimodal trials), despite the fact that they have no problems in responding to the auditory and visual stimuli when they are presented individually. The existence of the Colavita visual dominance effect provides an intriguing contrast with the results of the many other recent studies showing the superiority of multisensory (over unisensory) information processing in humans. Various accounts have been put forward over the years in order to try and explain the effect, including the suggestion that it reflects nothing more than an underlying bias to attend to the visual modality. Here, the empirical literature on the Colavita visual dominance effect is reviewed and some of the key factors modulating the effect highlighted. The available research has now provided evidence against all previous accounts of the Colavita effect. A novel explanation of the Colavita effect is therefore put forward here, one that is based on the latest findings highlighting the asymmetrical effect that auditory and visual stimuli exert on people's responses to stimuli presented in the other modality.

  18. Perceptual asymmetry in texture perception.

    PubMed

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.

  19. Pattern reversal responses in man and cat: a comparison.

    PubMed

    Schuurmans, R P; Berninger, T

    1984-01-01

    In 42 enucleated and arterially perfused cat eyes, graded potentials were recorded from the retina (ERG) and from the optic nerve (ONR) in response to checker-board stimuli, reversing at a low temporal frequency in a square wave mode. The ERG and ONR responses show an almost perfect duplication of the response to each reversal of the pattern and exhibit, in contrast to luminance responses, striking similarities in response characteristics such as amplitude, wave shape and time course. Furthermore, the amplitude versus check size plots coincide in both responses. In cat, pattern reversal responses can be recorded from 74 to 9 min of arc, correlating to the cat's visual resolution. In man, almost identical responses can be recorded for the pattern ERG. However, in accordance with the difference in visual resolution in man and cat, a parallel shift for the human pattern reversal ERG response to higher spatial frequencies is observed.

  20. Perceptually lossless fractal image compression

    NASA Astrophysics Data System (ADS)

    Lin, Huawu; Venetsanopoulos, Anastasios N.

    1996-02-01

    According to the collage theorem, the encoding distortion for fractal image compression is directly related to the metric used in the encoding process. In this paper, we introduce a perceptually meaningful distortion measure based on the human visual system's nonlinear response to luminance and on visual masking effects. Blackwell's psychophysical raw data on contrast threshold are first interpolated as a function of background luminance and visual angle, and are then used as an error upper bound for perceptually lossless image compression. For a variety of images, experimental results show that the algorithm produces a compression ratio of 8:1 to 10:1 without introducing visual artifacts.
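    The acceptance test implied by this abstract can be sketched as a comparison of the block-encoding error against a luminance-dependent visibility threshold. The threshold function, block size, and values below are illustrative placeholders (not Blackwell's interpolated data), so this is only a minimal sketch of the idea:

    ```python
    import numpy as np

    def contrast_threshold(background_luminance):
        """Illustrative visibility threshold that rises with background luminance;
        a crude stand-in for interpolated psychophysical contrast-threshold data."""
        return 0.01 * background_luminance + 1.0

    def perceptually_acceptable(range_block, collage_block):
        """Accept an encoding if every pixel error stays below the local
        visibility threshold, i.e. the distortion should be invisible."""
        threshold = contrast_threshold(range_block.mean())
        return np.max(np.abs(range_block - collage_block)) <= threshold

    # toy usage: an 8x8 range block and its affine-mapped domain (collage) block
    rng = np.random.default_rng(0)
    range_block = 100 + 5 * rng.standard_normal((8, 8))
    collage_block = range_block + rng.uniform(-1, 1, (8, 8))
    print(perceptually_acceptable(range_block, collage_block))
    ```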

  1. Attention to Color Sharpens Neural Population Tuning via Feedback Processing in the Human Visual Cortex Hierarchy.

    PubMed

    Bartsch, Mandy V; Loewe, Kristian; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Tsotsos, John K; Hopf, Jens-Max

    2017-10-25

    Attention can facilitate the selection of elementary object features such as color, orientation, or motion. This is referred to as feature-based attention and it is commonly attributed to a modulation of the gain and tuning of feature-selective units in visual cortex. Although gain mechanisms are well characterized, little is known about the cortical processes underlying the sharpening of feature selectivity. Here, we show with high-resolution magnetoencephalography in human observers (men and women) that sharpened selectivity for a particular color arises from feedback processing in the human visual cortex hierarchy. To assess color selectivity, we analyze the response to a color probe that varies in color distance from an attended color target. We find that attention causes an initial gain enhancement in anterior ventral extrastriate cortex that is coarsely selective for the target color and transitions within ∼100 ms into a sharper tuned profile in more posterior ventral occipital cortex. We conclude that attention sharpens selectivity over time by attenuating the response at lower levels of the cortical hierarchy to color values neighboring the target in color space. These observations support computational models proposing that attention tunes feature selectivity in visual cortex through backward-propagating attenuation of units less tuned to the target. SIGNIFICANCE STATEMENT Whether searching for your car, a particular item of clothing, or just obeying traffic lights, in everyday life, we must select items based on color. But how does attention allow us to select a specific color? Here, we use high spatiotemporal resolution neuromagnetic recordings to examine how color selectivity emerges in the human brain. We find that color selectivity evolves as a coarse to fine process from higher to lower levels within the visual cortex hierarchy. Our observations support computational models proposing that feature selectivity increases over time by attenuating the responses of less-selective cells in lower-level brain areas. These data emphasize that color perception involves multiple areas across a hierarchy of regions, interacting with each other in a complex, recursive manner. Copyright © 2017 the authors 0270-6474/17/3710346-12$15.00/0.

  2. Do bees like Van Gogh's Sunflowers?

    NASA Astrophysics Data System (ADS)

    Chittka, Lars; Walker, Julian

    2006-06-01

    Flower colours have evolved over 100 million years to address the colour vision of their bee pollinators. In a much more rapid process, cultural (and horticultural) evolution has produced images of flowers that stimulate aesthetic responses in human observers. The colour vision and analysis of visual patterns differ in several respects between humans and bees. Here, a behavioural ecologist and an installation artist present bumblebees with reproductions of paintings highly appreciated in Western society, such as Van Gogh's Sunflowers. We use this unconventional approach in the hope of raising awareness of between-species differences in visual perception, and of provoking thinking about the implications of biology in human aesthetics and the relationship between object representation and its biological connotations.

  3. Attraction of position preference by spatial attention throughout human visual cortex.

    PubMed

    Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O

    2014-10-01

    Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Changes in Women’s Facial Skin Color over the Ovulatory Cycle are Not Detectable by the Human Visual System

    PubMed Central

    Burriss, Robert P.; Troscianko, Jolyon; Lovell, P. George; Fulford, Anthony J. C.; Stevens, Martin; Quigley, Rachael; Payne, Jenny; Saxton, Tamsin K.; Rowland, Hannah M.

    2015-01-01

    Human ovulation is not advertised, as it is in several primate species, by conspicuous sexual swellings. However, there is increasing evidence that the attractiveness of women’s body odor, voice, and facial appearance peak during the fertile phase of their ovulatory cycle. Cycle effects on facial attractiveness may be underpinned by changes in facial skin color, but it is not clear if skin color varies cyclically in humans or if any changes are detectable. To test these questions we photographed women daily for at least one cycle. Changes in facial skin redness and luminance were then quantified by mapping the digital images to human long, medium, and shortwave visual receptors. We find cyclic variation in skin redness, but not luminance. Redness decreases rapidly after menstrual onset, increases in the days before ovulation, and remains high through the luteal phase. However, we also show that this variation is unlikely to be detectable by the human visual system. We conclude that changes in skin color are not responsible for the effects of the ovulatory cycle on women’s attractiveness. PMID:26134671

  5. Changes in Women's Facial Skin Color over the Ovulatory Cycle are Not Detectable by the Human Visual System.

    PubMed

    Burriss, Robert P; Troscianko, Jolyon; Lovell, P George; Fulford, Anthony J C; Stevens, Martin; Quigley, Rachael; Payne, Jenny; Saxton, Tamsin K; Rowland, Hannah M

    2015-01-01

    Human ovulation is not advertised, as it is in several primate species, by conspicuous sexual swellings. However, there is increasing evidence that the attractiveness of women's body odor, voice, and facial appearance peak during the fertile phase of their ovulatory cycle. Cycle effects on facial attractiveness may be underpinned by changes in facial skin color, but it is not clear if skin color varies cyclically in humans or if any changes are detectable. To test these questions we photographed women daily for at least one cycle. Changes in facial skin redness and luminance were then quantified by mapping the digital images to human long, medium, and shortwave visual receptors. We find cyclic variation in skin redness, but not luminance. Redness decreases rapidly after menstrual onset, increases in the days before ovulation, and remains high through the luteal phase. However, we also show that this variation is unlikely to be detectable by the human visual system. We conclude that changes in skin color are not responsible for the effects of the ovulatory cycle on women's attractiveness.

  6. Attention improves encoding of task-relevant features in the human visual cortex.

    PubMed

    Jehee, Janneke F M; Brady, Devin K; Tong, Frank

    2011-06-01

    When spatial attention is directed toward a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer's task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in visual cortical areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature and not when the contrast of the grating had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks, but color-selective responses were enhanced only when color was task relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information.

  7. Addition of visual noise boosts evoked potential-based brain-computer interface.

    PubMed

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

    Although noise has a proven beneficial role in brain function, the stochastic resonance effect has not previously been exploited in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, accompanied by suppression of high harmonics. Offline results showed a bell-shaped, resonance-like dependence on noise level, and online performance improvements of 7-36% were achieved when identical visual noise was used for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCI performance in addressing human needs.
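    The bell-shaped, resonance-like dependence on noise level described above is the classic signature of stochastic resonance. The following toy sketch (not the paper's SSMVEP pipeline; the threshold nonlinearity, frequencies, and noise levels are assumed for illustration) shows how power at the stimulation frequency first rises and then falls as noise is added to a subthreshold periodic drive:

    ```python
    import numpy as np

    def sr_output_power(noise_sd, f_stim=10.0, fs=250.0, duration=20.0, threshold=1.0):
        """Toy stochastic-resonance demo: a subthreshold periodic drive plus noise
        passes through a hard threshold (a crude stand-in for a nonlinear sensory
        system). Returns power at the stimulation frequency relative to the
        median of the output spectrum."""
        rng = np.random.default_rng(0)
        t = np.arange(0, duration, 1.0 / fs)
        drive = 0.8 * np.sin(2 * np.pi * f_stim * t)           # subthreshold signal
        output = (drive + noise_sd * rng.standard_normal(t.size) > threshold).astype(float)
        spectrum = np.abs(np.fft.rfft(output - output.mean())) ** 2
        freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
        target = np.argmin(np.abs(freqs - f_stim))
        return spectrum[target] / (np.median(spectrum) + 1e-12)

    for sd in (0.05, 0.3, 1.0, 3.0):                            # weak -> strong noise
        print(sd, round(sr_output_power(sd), 1))                # peaks at moderate noise
    ```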

  8. Relationship between BOLD amplitude and pattern classification of orientation-selective activity in the human visual cortex.

    PubMed

    Tong, Frank; Harrison, Stephenie A; Dewey, John A; Kamitani, Yukiyasu

    2012-11-15

    Orientation-selective responses can be decoded from fMRI activity patterns in the human visual cortex, using multivariate pattern analysis (MVPA). To what extent do these feature-selective activity patterns depend on the strength and quality of the sensory input, and might the reliability of these activity patterns be predicted by the gross amplitude of the stimulus-driven BOLD response? Observers viewed oriented gratings that varied in luminance contrast (4, 20 or 100%) or spatial frequency (0.25, 1.0 or 4.0 cpd). As predicted, activity patterns in early visual areas led to better discrimination of orientations presented at high than low contrast, with greater effects of contrast found in area V1 than in V3. A second experiment revealed generally better decoding of orientations at low or moderate as compared to high spatial frequencies. Interestingly however, V1 exhibited a relative advantage at discriminating high spatial frequency orientations, consistent with the finer scale of representation in the primary visual cortex. In both experiments, the reliability of these orientation-selective activity patterns was well predicted by the average BOLD amplitude in each region of interest, as indicated by correlation analyses, as well as decoding applied to a simple model of voxel responses to simulated orientation columns. Moreover, individual differences in decoding accuracy could be predicted by the signal-to-noise ratio of an individual's BOLD response. Our results indicate that decoding accuracy can be well predicted by incorporating the amplitude of the BOLD response into simple simulation models of cortical selectivity; such models could prove useful in future applications of fMRI pattern classification. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Relationship between BOLD amplitude and pattern classification of orientation-selective activity in the human visual cortex

    PubMed Central

    Tong, Frank; Harrison, Stephenie A.; Dewey, John A.; Kamitani, Yukiyasu

    2012-01-01

    Orientation-selective responses can be decoded from fMRI activity patterns in the human visual cortex, using multivariate pattern analysis (MVPA). To what extent do these feature-selective activity patterns depend on the strength and quality of the sensory input, and might the reliability of these activity patterns be predicted by the gross amplitude of the stimulus-driven BOLD response? Observers viewed oriented gratings that varied in luminance contrast (4, 20 or 100%) or spatial frequency (0.25, 1.0 or 4.0 cpd). As predicted, activity patterns in early visual areas led to better discrimination of orientations presented at high than low contrast, with greater effects of contrast found in area V1 than in V3. A second experiment revealed generally better decoding of orientations at low or moderate as compared to high spatial frequencies. Interestingly however, V1 exhibited a relative advantage at discriminating high spatial frequency orientations, consistent with the finer scale of representation in the primary visual cortex. In both experiments, the reliability of these orientation-selective activity patterns was well predicted by the average BOLD amplitude in each region of interest, as indicated by correlation analyses, as well as decoding applied to a simple model of voxel responses to simulated orientation columns. Moreover, individual differences in decoding accuracy could be predicted by the signal-to-noise ratio of an individual's BOLD response. Our results indicate that decoding accuracy can be well predicted by incorporating the amplitude of the BOLD response into simple simulation models of cortical selectivity; such models could prove useful in future applications of fMRI pattern classification. PMID:22917989
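    The decoding analysis and the simple simulation model of voxel responses described in the two records above can be illustrated with a toy example: voxels with random orientation preferences respond with a tuning curve scaled by an overall BOLD amplitude, and a nearest-centroid classifier decodes orientation from the resulting patterns. The tuning shape, noise level, and data sizes are illustrative assumptions, not the authors' model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_voxels, n_trials_per_ori = 100, 40
    orientations = np.arange(0, 180, 22.5)          # 8 stimulus orientations (deg)
    preferred = rng.uniform(0, 180, n_voxels)       # each voxel's preferred orientation

    def voxel_pattern(theta, bold_amplitude, noise_sd=1.0):
        """Simulated voxel response: orientation tuning scaled by the overall BOLD
        amplitude, plus additive noise (all parameters are illustrative)."""
        tuning = np.cos(np.deg2rad(2 * (theta - preferred)))  # 180-deg periodic tuning
        return bold_amplitude * tuning + noise_sd * rng.standard_normal(n_voxels)

    def decode_accuracy(bold_amplitude, n_test=200):
        # training centroids: mean pattern per orientation
        train = {th: np.mean([voxel_pattern(th, bold_amplitude)
                              for _ in range(n_trials_per_ori)], axis=0)
                 for th in orientations}
        correct = 0
        for _ in range(n_test):
            th = rng.choice(orientations)
            pattern = voxel_pattern(th, bold_amplitude)
            guess = min(train, key=lambda t: np.linalg.norm(pattern - train[t]))
            correct += (guess == th)
        return correct / n_test

    for amp in (0.2, 0.5, 1.0):   # larger BOLD amplitude -> more reliable patterns
        print(amp, decode_accuracy(amp))
    ```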

  10. Octopuses (Enteroctopus dofleini) recognize individual humans.

    PubMed

    Anderson, Roland C; Mather, Jennifer A; Monette, Mathieu Q; Zimsen, Stephanie R M

    2010-01-01

    This study exposed 8 Enteroctopus dofleini separately to 2 unfamiliar individual humans over a 2-week period under differing circumstances. One person consistently fed the octopuses and the other touched them with a bristly stick. Each human recorded octopus body patterns, behaviors, and respiration rates directly after each treatment. At the end of 2 weeks, a body pattern (a dark Eyebar) and 2 behaviors (reaching arms toward or away from the tester and funnel direction) were significantly different in response to the 2 humans. The respiration rate of the 4 larger octopuses changed significantly in response to the 2 treatments; however, there was no significant difference in the 4 smaller octopuses' respiration. Octopuses' ability to recognize humans enlarges our knowledge of the perceptual ability of this nonhuman animal, which depends heavily on learning in response to visual information. Any training paradigm should take such individual recognition into consideration as it could significantly alter the octopuses' responses.

  11. The role of the human pulvinar in visual attention and action: evidence from temporal-order judgment, saccade decision, and antisaccade tasks.

    PubMed

    Arend, Isabel; Machado, Liana; Ward, Robert; McGrath, Michelle; Ro, Tony; Rafal, Robert D

    2008-01-01

    The pulvinar nucleus of the thalamus has been considered as a key structure for visual attention functions (Grieve, K.L. et al. (2000). Trends Neurosci., 23: 35-39; Shipp, S. (2003). Philos. Trans. R. Soc. Lond. B Biol. Sci., 358(1438): 1605-1624). During the past several years, we have studied the role of the human pulvinar in visual attention and oculomotor behaviour by testing a small group of patients with unilateral pulvinar lesions. Here we summarize some of these findings, and present new evidence for the role of this structure in both eye movements and visual attention through two versions of a temporal-order judgment task and an antisaccade task. Pulvinar damage induces an ipsilesional bias in perceptual temporal-order judgments and in saccadic decision, and also increases the latency of antisaccades away from contralesional targets. The demonstration that pulvinar damage affects both attention and oculomotor behaviour highlights the role of this structure in the integration of visual and oculomotor signals and, more generally, its role in flexibly linking visual stimuli with context-specific motor responses.

  12. Lightness computation by the human visual system

    NASA Astrophysics Data System (ADS)

    Rudd, Michael E.

    2017-05-01

    A model of achromatic color computation by the human visual system is presented, which is shown to account in an exact quantitative way for a large body of appearance matching data collected with simple visual displays. The model equations are closely related to those of the original Retinex model of Land and McCann. However, the present model differs in important ways from Land and McCann's theory in that it invokes additional biological and perceptual mechanisms, including contrast gain control, different inherent neural gains for incremental and decremental luminance steps, and two types of top-down influence on the perceptual weights applied to local luminance steps in the display: edge classification and attentional windowing of spatial integration. Arguments are presented to support the claim that these various visual processes must be instantiated by a particular underlying neural architecture. By pointing to correspondences between the architecture of the model and findings from visual neurophysiology, this paper suggests that edge classification involves a top-down gating of neural edge responses in early visual cortex (cortical areas V1 and/or V2) while spatial integration windowing occurs in cortical area V4 or beyond.
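    The core computation described above, weighted summation of local log-luminance steps with separate gains for incremental and decremental edges, can be sketched in a few lines. The gain values and the one-dimensional path representation are illustrative assumptions, not the paper's fitted parameters:

    ```python
    import numpy as np

    def edge_integration_lightness(luminances, w_increment=1.0, w_decrement=1.5):
        """Minimal edge-integration sketch: log-luminance steps along a path from
        the background to the target are weighted and summed. Separate gains for
        incremental and decremental steps follow the idea in the abstract; the
        specific gain values here are illustrative only."""
        log_steps = np.diff(np.log(luminances))
        weights = np.where(log_steps >= 0, w_increment, w_decrement)
        return np.sum(weights * log_steps)

    # path of luminances (cd/m^2) from background, across an annulus, to the target disk
    print(edge_integration_lightness([100.0, 30.0, 60.0]))
    ```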

  13. Human striatal activation during adjustment of the response criterion in visual word recognition.

    PubMed

    Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred

    2011-02-01

    Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.

  14. Using a System Identification Approach to Investigate Subtask Control during Human Locomotion

    PubMed Central

    Logan, David; Kiemel, Tim; Jeka, John J.

    2017-01-01

    Here we apply a control theoretic view of movement to the behavior of human locomotion with the goal of using perturbations to learn about subtask control. Controlling one's speed and maintaining upright posture are two critical subtasks, or underlying functions, of human locomotion. How the nervous system simultaneously controls these two subtasks was investigated in this study. Continuous visual and mechanical perturbations were applied concurrently to subjects (n = 20) as probes to investigate these two subtasks during treadmill walking. Novel application of harmonic transfer function (HTF) analysis to human motor behavior was used, and these HTFs were converted to the time-domain based representation of phase-dependent impulse response functions (ϕIRFs). These ϕIRFs were used to identify the mapping from perturbation inputs to kinematic and electromyographic (EMG) outputs throughout the phases of the gait cycle. Mechanical perturbations caused an initial, passive change in trunk orientation and, at some phases of stimulus presentation, a corrective trunk EMG and orientation response. Visual perturbations elicited a trunk EMG response prior to a trunk orientation response, which was subsequently followed by an anterior-posterior displacement response. This finding supports the notion that there is a temporal hierarchy of functional subtasks during locomotion in which the control of upper-body posture precedes other subtasks. Moreover, the novel analysis we apply has the potential to probe a broad range of rhythmic behaviors to better understand their neural control. PMID:28123365
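    The basic input-output identification step underlying this kind of analysis can be illustrated with an ordinary (time-invariant) least-squares impulse-response estimate; the gait-phase dependence captured by harmonic transfer functions is deliberately omitted here, and all signals and sizes are synthetic:

    ```python
    import numpy as np

    def estimate_impulse_response(stimulus, response, n_lags=50):
        """Least-squares FIR estimate of the mapping from a continuous perturbation
        (e.g., visual-scene velocity) to a motor output (e.g., trunk sway). This
        sketch ignores the gait-phase dependence captured by the study's harmonic
        transfer functions; it only illustrates the input-output identification step."""
        n = len(stimulus)
        X = np.zeros((n, n_lags))                # design matrix of causal lags 0..n_lags-1
        for k in range(n_lags):
            X[k:, k] = stimulus[:n - k]
        irf, *_ = np.linalg.lstsq(X, response, rcond=None)
        return irf

    # toy data: white-noise perturbation filtered by a known kernel, plus noise
    rng = np.random.default_rng(2)
    stim = rng.standard_normal(5000)
    true_irf = np.exp(-np.arange(50) / 10.0)
    resp = np.convolve(stim, true_irf)[:5000] + 0.1 * rng.standard_normal(5000)
    est = estimate_impulse_response(stim, resp)
    print(np.corrcoef(est, true_irf)[0, 1])      # should be close to 1
    ```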

  15. Evidence for unlimited capacity processing of simple features in visual cortex

    PubMed Central

    White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.

    2017-01-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964

  16. Long-Lasting Crossmodal Cortical Reorganization Triggered by Brief Postnatal Visual Deprivation.

    PubMed

    Collignon, Olivier; Dormal, Giulia; de Heering, Adelaide; Lepore, Franco; Lewis, Terri L; Maurer, Daphne

    2015-09-21

    Animal and human studies have demonstrated that transient visual deprivation early in life, even for a very short period, permanently alters the response properties of neurons in the visual cortex and leads to corresponding behavioral visual deficits. While it is acknowledged that early-onset and longstanding blindness leads the occipital cortex to respond to non-visual stimulation, it remains unknown whether a short and transient period of postnatal visual deprivation is sufficient to trigger crossmodal reorganization that persists after years of visual experience. In the present study, we characterized brain responses to auditory stimuli in 11 adults who had been deprived of all patterned vision at birth by congenital cataracts in both eyes until they were treated at 9 to 238 days of age. When compared to controls with typical visual experience, the cataract-reversal group showed enhanced auditory-driven activity in focal visual regions. A combination of dynamic causal modeling with Bayesian model selection indicated that this auditory-driven activity in the occipital cortex was better explained by direct cortico-cortical connections with the primary auditory cortex than by subcortical connections. Thus, a short and transient period of visual deprivation early in life leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Neurophysiological Estimates of Human Performance Capabilities in Aerospace Systems

    DTIC Science & Technology

    1975-01-27

    effects on the visual system (in lateral geniculate bodies and optic cortex) depending on the frequency of auditory stimulation. ...of spatial positions. Correct responses were rewarded with food. EEG activity was recorded in the hippocampus, hypothalamus and lateral geniculate ...movement or an object movement reduce transmission of visual information through the lateral geniculate nucleus. This may be a mechanism for saccadic...

  18. Double Dissociation of Conditioning and Declarative Knowledge Relative to the Amygdala and Hippocampus in Humans

    NASA Astrophysics Data System (ADS)

    Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.

    1995-08-01

    A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.

  19. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains.

    PubMed

    Sklar, A E; Sarter, N B

    1999-12-01

    Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.

  20. Noninvasive imaging of human skin hemodynamics using a digital red-green-blue camera

    NASA Astrophysics Data System (ADS)

    Nishidate, Izumi; Tanaka, Noriyuki; Kawase, Tatsuya; Maeda, Takaaki; Yuasa, Tomonori; Aizu, Yoshihisa; Yuasa, Tetsuya; Niizeki, Kyuichi

    2011-08-01

    In order to visualize human skin hemodynamics, we investigated a method specifically developed for the visualization of concentrations of oxygenated blood, deoxygenated blood, and melanin in skin tissue from digital RGB color images. Images of total blood concentration and oxygen saturation can also be reconstructed from the results of oxygenated and deoxygenated blood. Experiments using tissue-like agar gel phantoms demonstrated the ability of the developed method to quantitatively visualize the transition from oxygenated to deoxygenated blood in the dermis. In vivo imaging of the chromophore concentrations and tissue oxygen saturation in the skin of the human hand was performed for 14 subjects during upper limb occlusion at 50 and 250 mm Hg. The response of the total blood concentration in the skin acquired by this method and forearm volume changes obtained from a conventional strain-gauge plethysmograph were comparable during upper arm occlusion at pressures of both 50 and 250 mm Hg. These results indicate the possibility of visualizing the hemodynamics of subsurface skin tissue.
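    The reconstruction step described above amounts to unmixing chromophore concentrations from log-attenuation in the three color channels. A minimal sketch, assuming a purely linear (modified Beer-Lambert-like) mapping with a placeholder absorptivity matrix rather than the paper's tissue-optics model:

    ```python
    import numpy as np

    # Placeholder "effective absorptivity" matrix mapping chromophore concentrations
    # (oxy-Hb, deoxy-Hb, melanin) to log-attenuation in the R, G, B channels.
    # The paper derives its mapping from tissue optics; these numbers are illustrative.
    A = np.array([[0.2, 0.8, 0.5],
                  [0.6, 0.9, 0.7],
                  [0.9, 1.0, 1.1]])

    def estimate_chromophores(rgb, rgb_reference):
        """Least-squares estimate of chromophore concentrations from the log ratio
        of a skin pixel to a reference measurement (hypothetical calibration)."""
        attenuation = -np.log(np.asarray(rgb, float) / np.asarray(rgb_reference, float))
        conc, *_ = np.linalg.lstsq(A, attenuation, rcond=None)
        oxy, deoxy, melanin = conc
        total_blood = oxy + deoxy
        so2 = oxy / total_blood if total_blood > 0 else np.nan
        return {"oxy": oxy, "deoxy": deoxy, "melanin": melanin,
                "total_blood": total_blood, "SO2": so2}

    print(estimate_chromophores(rgb=[120, 80, 70], rgb_reference=[200, 190, 180]))
    ```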

  1. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
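    The linking hypothesis described above, a learned weighted sum of mean firing rates, can be sketched as a ridge-regularised linear readout. The simulated firing rates, population size, and regularisation below are illustrative assumptions, not the recorded IT data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_images, n_neurons = 400, 500                    # illustrative sizes only
    labels = rng.integers(0, 2, n_images)             # object identity for one binary test
    selectivity = rng.normal(0.0, 0.3, n_neurons)     # each neuron's (weak) object preference
    # mean firing rates in a 100 ms window: baseline + identity-dependent modulation + noise
    rates = 5.0 + np.outer(2.0 * labels - 1.0, selectivity) \
            + rng.normal(0.0, 1.0, (n_images, n_neurons))

    def learn_weights(X_rates, y_labels, ridge=1.0):
        """Ridge-regularised least-squares fit of a weighted sum of firing rates
        (plus a bias term) to the object labels -- a simple stand-in for the
        'learned weighted sum' linking hypothesis described in the abstract."""
        X = np.column_stack([X_rates, np.ones(len(X_rates))])
        y = 2.0 * y_labels - 1.0                      # code the two objects as -1 / +1
        return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)

    def predict(X_rates, w):
        X = np.column_stack([X_rates, np.ones(len(X_rates))])
        return (X @ w > 0).astype(int)

    w = learn_weights(rates[:300], labels[:300])      # learn the readout on 300 images
    print("held-out accuracy:", np.mean(predict(rates[300:], w) == labels[300:]))
    ```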

  2. Exploration of complex visual feature spaces for object perception

    PubMed Central

    Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.

    2014-01-01

    The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
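
    The paper's key methodological idea is closed-loop stimulus selection: each new stimulus is chosen, in real time, to push the measured ROI response upward within a fixed feature space. The toy sketch below illustrates that general closed-loop logic with a simple greedy rule and a simulated response function; it is not the specific search algorithm used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D feature space of candidate stimuli; the "true" regional
# preference peaks at an unknown location in this space.
candidates = rng.uniform(-1, 1, size=(200, 2))
peak = np.array([0.4, -0.2])

def measure_bold(stim):
    # Stand-in for a real-time ROI estimate: response falls off with distance
    # from the (unknown) preferred feature combination, plus noise.
    return np.exp(-np.sum((stim - peak) ** 2) / 0.2) + rng.normal(0, 0.05)

# Greedy closed-loop search: show the candidate nearest to the best response
# seen so far, jittered so the search keeps exploring the space.
shown, responses = [], []
current = candidates[rng.integers(len(candidates))]
for trial in range(30):
    r = measure_bold(current)
    shown.append(current)
    responses.append(r)
    best = shown[int(np.argmax(responses))]
    dists = np.linalg.norm(candidates - (best + rng.normal(0, 0.1, 2)), axis=1)
    current = candidates[int(np.argmin(dists))]

print("best feature location found:", shown[int(np.argmax(responses))])
```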

  3. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.

    PubMed

    Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D

    2018-06-08

    Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
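
    Time-resolved decoding of sensor-level data amounts to training and cross-validating a classifier independently at every time sample, producing an accuracy-versus-time curve whose first above-chance point gives the earliest discrimination latency. A minimal sketch on synthetic MEG-like data (channel counts, sampling, and effect sizes below are invented, not the study's recordings):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic sensor-level epochs: trials x sensors x time samples, with a weak
# class signal appearing on a few sensors from sample 90 onward.
n_trials, n_sensors, n_times = 200, 64, 120
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[y == 1, :10, 90:] += 0.4

# Time-resolved decoding: fit and cross-validate a classifier independently
# at every time sample, giving an accuracy-versus-time curve.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

onset = int(np.argmax(accuracy > 0.6))
print(f"decoding first exceeds 60% correct at sample index {onset}")
```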

  4. Visual impairment in FOXG1-mutated individuals and mice.

    PubMed

    Boggio, E M; Pancrazi, L; Gennaro, M; Lo Rizzo, C; Mari, F; Meloni, I; Ariani, F; Panighini, A; Novelli, E; Biagioni, M; Strettoi, E; Hayek, J; Rufa, A; Pizzorusso, T; Renieri, A; Costa, M

    2016-06-02

    The Forkhead Box G1 gene (FOXG1 in humans, Foxg1 in mice) encodes a DNA-binding transcription factor essential for the development of the telencephalon in the mammalian forebrain. Mutations in FOXG1 have been reported in the onset of Rett Syndrome, a disorder also associated with sequence alterations of MECP2 and CDKL5. While visual alterations are not classical hallmarks of Rett syndrome, an increasing body of evidence shows visual impairment in patients and in MeCP2 and CDKL5 animal models. Herein we focused on the functional role of FOXG1 in the visual system of animal models (Foxg1(+/Cre) mice) and of a cohort of subjects carrying FOXG1 mutations or deletions. Visual physiology of Foxg1(+/Cre) mice was assessed by visually evoked potentials, which revealed a significant reduction in response amplitude and visual acuity with respect to wild-type littermates. Morphological investigation showed abnormalities in the organization of excitatory/inhibitory circuits in the visual cortex. No alterations were observed in retinal structure. By examining a cohort of FOXG1-mutated individuals with a panel of neuro-ophthalmological assessments, we found that all of them exhibited visual alterations compatible with high-level visual dysfunctions. In conclusion, our data show that Foxg1 haploinsufficiency results in an impairment of mouse and human visual cortical function. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

    The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  6. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized when spatial and temporal information were combined. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected from their ECoG responses in real time within 500 ms of stimulus onset.
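
    Broadband gamma decoding of this kind typically reduces each trial to band-limited power features per channel, which are then passed to a linear classifier. The sketch below illustrates that pipeline on synthetic "ECoG" epochs; the band limits, filter, and classifier are plausible generic choices, not the authors' exact real-time implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
fs = 1000  # sampling rate in Hz (illustrative)

# Synthetic ECoG epochs (trials x channels x samples): two stimulus classes,
# one of which carries extra high-gamma power on a few channels.
n_trials, n_chan, n_samp = 120, 32, 500
y = rng.integers(0, 2, n_trials)
X = rng.normal(size=(n_trials, n_chan, n_samp))
t = np.arange(n_samp) / fs
X[y == 1, :4, :] += 0.8 * np.sin(2 * np.pi * 110 * t)

# Broadband gamma feature: band-pass 70-170 Hz, then log power per channel.
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, X, axis=-1)
features = np.log(np.mean(filtered ** 2, axis=-1))

clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, features, y, cv=5).mean())
```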

  7. Spatiotemporal dynamics underlying object completion in human ventral visual cortex.

    PubMed

    Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2014-08-06

    Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials from 113 visually selective electrodes in epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly in the inferior occipital and fusiform gyri, remained selective even when only 9%-25% of the object area was shown. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Neuroesthetics and healthcare design.

    PubMed

    Nanda, Upali; Pati, Debajyoti; McCurry, Katie

    2009-01-01

    While there is a growing consciousness about the importance of visually pleasing environments in healthcare design, little is known about the key underlying mechanisms that enable aesthetics to play an instrumental role in the caregiving process. Hence it is often one of the first items to be value engineered. Aesthetics has (rightfully) been given preferential consideration in pleasure settings such as museums and recreational facilities, but in healthcare settings it is often considered expendable. Should it be? In this paper the authors share evidence that visual stimuli undergo an aesthetic evaluation process in the human brain by default, even when not prompted; that responses to visual stimuli may be immediate and emotional; and that aesthetics can be a source of pleasure, a fundamental perceptual reward that can help mitigate the stress of a healthcare environment. The authors also provide examples of studies that address the role of specific visual elements and visual principles in aesthetic evaluations and emotional responses. Finally, they discuss the implications of these findings for the design of art and architecture in healthcare.

  9. Selective attention determines emotional responses to novel visual stimuli.

    PubMed

    Raymond, Jane E; Fenske, Mark J; Tavassoli, Nader T

    2003-11-01

    Distinct complex brain systems support selective attention and emotion, but connections between them suggest that human behavior should reflect reciprocal interactions of these systems. Although there is ample evidence that emotional stimuli modulate attentional processes, it is not known whether attention influences emotional behavior. Here we show that evaluation of the emotional tone (cheery/dreary) of complex but meaningless visual patterns can be modulated by the prior attentional state (attending vs. ignoring) used to process each pattern in a visual selection task. Previously ignored patterns were evaluated more negatively than either previously attended or novel patterns. Furthermore, this emotional devaluation of distracting stimuli was robust across different emotional contexts and response scales. Finding that negative affective responses are specifically generated for ignored stimuli points to a new functional role for attention and elaborates the link between attention and emotion. This finding also casts doubt on the conventional marketing wisdom that any exposure is good exposure.

  10. Subthalamic nucleus detects unnatural android movement.

    PubMed

    Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi

    2017-12-19

    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.

  11. Developmental trajectory of neural specialization for letter and number visual processing.

    PubMed

    Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M

    2018-05-01

    Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.

  12. Priming with real motion biases visual cortical response to bistable apparent motion

    PubMed Central

    Zhang, Qing-fang; Wen, Yunqing; Zhang, Deng; She, Liang; Wu, Jian-young; Dan, Yang; Poo, Mu-ming

    2012-01-01

    Apparent motion quartet is an ambiguous stimulus that elicits bistable perception, with the perceived motion alternating between two orthogonal paths. In human psychophysical experiments, the probability of perceiving motion in each path is greatly enhanced by a brief exposure to real motion along that path. To examine the neural mechanism underlying this priming effect, we used voltage-sensitive dye (VSD) imaging to measure the spatiotemporal activity in the primary visual cortex (V1) of awake mice. We found that a brief real motion stimulus transiently biased the cortical response to subsequent apparent motion toward the spatiotemporal pattern representing the real motion. Furthermore, intracellular recording from V1 neurons in anesthetized mice showed a similar increase in subthreshold depolarization in the neurons representing the path of real motion. Such short-term plasticity in early visual circuits may contribute to the priming effect in bistable visual perception. PMID:23188797

  13. Stem Cell Therapy for Treatment of Ocular Disorders

    PubMed Central

    Sivan, Padma Priya; Syed, Sakinah; Mok, Pooi-Ling; Higuchi, Akon; Murugan, Kadarkarai; Alarfaj, Abdullah A.; Munusamy, Murugan A.; Awang Hamat, Rukman; Umezawa, Akihiro; Kumar, Suresh

    2016-01-01

    Sustenance of visual function is the ultimate focus of ophthalmologists. Failure of complete recovery of visual function and the complications that follow conventional treatments have shifted the search toward a new form of therapy using stem cells. Stem cell progenitors play a major role in replenishing degenerated cells, despite being present in low numbers and in a largely quiescent state in the body. Unlike in other tissues and cells, regeneration of new optic cells responsible for visual function is rarely observed. Understanding the transcription factors and genes responsible for optic cell development will assist scientists in formulating strategies to activate and direct stem cell renewal and differentiation. We review the processes of human eye development and address the strategies that have been exploited in an effort to regain visual function in preclinical and clinical settings. An update of clinical findings from patients receiving stem cell treatment is also presented. PMID:27293447

  14. Visualizing Human Migration Through Space and Time

    NASA Astrophysics Data System (ADS)

    Zambotti, G.; Guan, W.; Gest, J.

    2015-07-01

    Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.
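
    The core rendering step described above, connecting origin and destination centroids with flow-weighted arcs, can be illustrated outside ArcGIS Server with a few lines of Python. The centroids and flow counts below are invented placeholders; the real platform derives them from administrative polygons and the uploaded migration tables.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical (lon, lat) centroids for three regions and a small flow table.
centroids = {
    "A": (-100.0, 40.0),
    "B": (10.0, 51.0),
    "C": (105.0, 35.0),
}
flows = [("A", "B", 1.2e6), ("C", "A", 3.4e6), ("B", "C", 0.6e6)]

def arc(p, q, bend=0.2, n=50):
    """Quadratic Bezier arc between two points, bowed sideways for legibility."""
    p, q = np.array(p), np.array(q)
    mid = (p + q) / 2
    normal = np.array([-(q - p)[1], (q - p)[0]])
    ctrl = mid + bend * normal
    t = np.linspace(0, 1, n)[:, None]
    return (1 - t) ** 2 * p + 2 * (1 - t) * t * ctrl + t ** 2 * q

fig, ax = plt.subplots()
for origin, dest, count in flows:
    xy = arc(centroids[origin], centroids[dest])
    # Line weight encodes the flow magnitude, as in the described symbology.
    ax.plot(xy[:, 0], xy[:, 1], linewidth=0.5 + 2 * count / 1e6)
for name, (x, y) in centroids.items():
    ax.plot(x, y, "ko")
    ax.annotate(name, (x, y))
ax.set_xlabel("longitude")
ax.set_ylabel("latitude")
plt.savefig("flows.png")
```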

  15. Intelligent Entity Behavior Within Synthetic Environments. Chapter 3

    NASA Technical Reports Server (NTRS)

    Kruk, R. V.; Howells, P. B.; Siksik, D. N.

    2007-01-01

    This paper describes some elements in the development of realistic performance and behavior in the synthetic entities (players) which support Modeling and Simulation (M&S) applications, particularly military training. Modern human-in-the-loop (virtual) training systems incorporate sophisticated synthetic environments, which provide: 1. The operational environment, including, for example, terrain databases; 2. Physical entity parameters which define performance in engineered systems, such as aircraft aerodynamics; 3. Platform/system characteristics such as acoustic, IR and radar signatures; 4. Behavioral entity parameters which define interactive performance, including knowledge/reasoning about terrain and tactics; and 5. Doctrine, which combines knowledge and tactics into behavior rule sets. The resolution and fidelity of these model/database elements can vary substantially, but as synthetic environments are designed to be composable, attributes may easily be added (e.g., adding a new radar to an aircraft) or enhanced (e.g., amending or replacing missile seeker head/Electronic Counter Measures (ECM) models to improve the realism of their interaction). To a human in the loop with synthetic entities, their observed veridicality is assessed via engagement responses (e.g., the effect of countermeasures upon a closing missile), as seen on systems displays, and via visual (image) behavior. The realism of visual models in a simulation (level of detail as well as motion fidelity) remains a challenge, particularly for the realistic articulation of elements such as vehicle antennae and turrets or, with human figures, posture, joint articulation, and response to uneven ground. Currently, the adequacy of visual representation depends more on the quality and resolution of the physical models driving those entities than on graphics processing power per se. Synthetic entities in M&S applications traditionally have represented engineered systems (e.g., aircraft) with human-in-the-loop performance characteristics (e.g., visual acuity) included in the system behavioral specification. As well, performance-affecting human parameters such as experience level, fatigue and stress are coming into wider use (via AI approaches) to incorporate more uncertainty in response type as well as performance (e.g., where an opposing entity might go and what it might do, as well as how well it might perform).

  16. The social environment influences the behavioural responses of beef cattle to handling.

    PubMed

    Grignard; Boissy; Boivin; Garel; Le Neindre P

    2000-05-05

    In cattle, a gregarious species, the social group influences individual stress responses to fear-eliciting situations. As handling can be stressful for farm animals, it can be hypothesised that social partners modify individual responses to handling. The present experiment investigated the effect of the presence or absence of social partners on the behavioural reactions of beef calves in a handling test. At the age of 10 months, 38 calves from two breeds (Salers and Limousine) were individually subjected to the docility test, once while in visual contact with four familiar peers and once in the absence of peers, following a crossover design. The docility test procedure included physical separation from peers (30 s; period 1), exposure to a stationary human (30 s; period 2), and handling by a human (30 s to 2.5 min, depending on handling success; period 3). In the absence of a human (period 1), calves in visual contact with their peers spent more time motionless than when peers were totally absent (P<0.001). The social environment also influenced the duration of handling (period 3); the human required more time to successfully handle calves when peers were present (P<0.05). In conclusion, the presence of peers affects individual calves' reactions to the docility test.

  17. The nature of visual self-recognition.

    PubMed

    Suddendorf, Thomas; Butler, David L

    2013-03-01

    Visual self-recognition is often controversially cited as an indicator of self-awareness and assessed with the mirror-mark test. Great apes and humans, unlike small apes and monkeys, have repeatedly passed mirror tests, suggesting that the underlying brain processes are homologous and evolved 14-18 million years ago. However, neuroscientific, developmental, and clinical dissociations show that the medium used for self-recognition (mirror vs photograph vs video) significantly alters behavioral and brain responses, likely due to perceptual differences among the different media and prior experience. On the basis of this evidence and evolutionary considerations, we argue that the visual self-recognition skills evident in humans and great apes are a byproduct of a general capacity to collate representations, and need not index other aspects of self-awareness. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Dependence of chromatic responses in V1 on visual field eccentricity and spatial frequency: an fMRI study.

    PubMed

    D'Souza, Dany V; Auer, Tibor; Frahm, Jens; Strasburger, Hans; Lee, Barry B

    2016-03-01

    Psychophysical sensitivity to red-green chromatic modulation decreases with visual eccentricity, compared to sensitivity to luminance modulation, even after appropriate stimulus scaling. This is likely to occur at a central, rather than a retinal, site. Blood-oxygenation-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) responses to circular gratings designed to separately stimulate different afferent channels [red-green, luminance, and short-wavelength (S)-cone] were recorded as a function of visual eccentricity (±10 deg) and spatial frequency (SF) in human primary visual cortex (V1) and further visual areas (V2v, V3v). In V1, the SF tuning of BOLD fMRI responses became coarser with eccentricity. For red-green and luminance gratings, similar SF tuning curves were found at all eccentricities. The pattern for S-cone modulation differed, with SF tuning changing more slowly with eccentricity than for the other two modalities. This may be due to the different retinal distribution with eccentricity of this receptor type. A similar pattern held in V2v and V3v. This would suggest that transformation or spatial filtering of the chromatic (red-green) signal occurs beyond these areas.

  19. Effect of levodopa and carbidopa in human amblyopia.

    PubMed

    Pandey, P K; Chaudhuri, Zia; Kumar, Maneesh; Satyabala, K; Sharma, Pankaj

    2002-01-01

    To assess the role of 3 weeks of continuous therapy with levodopa and carbidopa in the management of human amblyopia in children and adults, 88 amblyopic eyes of 82 subjects were included in this double-masked, randomized, prospective clinical trial. A levodopa-carbidopa combination was given to both adults and children in two different dosage schedules. The response was monitored as improvement in visual acuity, contrast sensitivity, and visually evoked potentials. Both adults and children receiving the higher dosage of levodopa and carbidopa showed a better response to treatment. However, the effect did not last beyond 9 weeks after stopping treatment. Although levodopa and carbidopa therapy may not ameliorate amblyopia on its own on a long-term basis, it may nonetheless be considered an important adjunct to conventional therapy, because improving visual acuity in the amblyopic eye may improve patient compliance with occlusion. Thus, it offers promise of improving the functional outcome in these cases. However, longer follow-up trials are needed to substantiate these conclusions.

  20. Categorical clustering of the neural representation of color.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2013-09-25

    Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons.
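
    The forward-model analysis referred to here treats each voxel's response as a weighted sum of a handful of smooth color channels, estimates those weights from training data, and then inverts the model to recover a low-dimensional neural color space from new responses. A minimal sketch with synthetic data (the channel count, basis shape, and noise level are illustrative assumptions, not the study's exact settings):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hue channels: half-wave-rectified sinusoids raised to a power, evaluated
# at the 12 stimulus hues (a common choice for this kind of basis).
n_channels, n_colors, n_voxels = 6, 12, 200
hues = np.linspace(0, 2 * np.pi, n_colors, endpoint=False)
centers = np.linspace(0, 2 * np.pi, n_channels, endpoint=False)
C = np.maximum(0, np.cos(hues[:, None] - centers[None, :])) ** 5

# Synthetic data: voxel responses = channel responses x weights + noise.
W_true = rng.normal(size=(n_channels, n_voxels))
B_train = C @ W_true + rng.normal(0, 0.5, (n_colors, n_voxels))
B_test = C @ W_true + rng.normal(0, 0.5, (n_colors, n_voxels))

# Step 1: estimate channel-to-voxel weights by least squares.
W_hat, *_ = np.linalg.lstsq(C, B_train, rcond=None)
# Step 2: invert the model to recover the low-dimensional "neural color space"
# (channel responses per stimulus color) from held-out voxel responses.
channel_resp, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
print("recovered channel-response matrix shape:", channel_resp.T.shape)
```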

  1. Hunger-Dependent Enhancement of Food Cue Responses in Mouse Postrhinal Cortex and Lateral Amygdala.

    PubMed

    Burgess, Christian R; Ramesh, Rohan N; Sugden, Arthur U; Levandowski, Kirsten M; Minnig, Margaret A; Fenselau, Henning; Lowell, Bradford B; Andermann, Mark L

    2016-09-07

    The needs of the body can direct behavioral and neural processing toward motivationally relevant sensory cues. For example, human imaging studies have consistently found specific cortical areas with biased responses to food-associated visual cues in hungry subjects, but not in sated subjects. To obtain a cellular-level understanding of these hunger-dependent cortical response biases, we performed chronic two-photon calcium imaging in postrhinal association cortex (POR) and primary visual cortex (V1) of behaving mice. As in humans, neurons in mouse POR, but not V1, exhibited biases toward food-associated cues that were abolished by satiety. This emergent bias was mirrored by the innervation pattern of amygdalo-cortical feedback axons. Strikingly, these axons exhibited even stronger food cue biases and sensitivity to hunger state and trial history. These findings highlight a direct pathway by which the lateral amygdala may contribute to state-dependent cortical processing of motivationally relevant sensory cues. Published by Elsevier Inc.

  2. Categorical Clustering of the Neural Representation of Color

    PubMed Central

    Heeger, David J.

    2013-01-01

    Cortical activity was measured with functional magnetic resonance imaging (fMRI) while human subjects viewed 12 stimulus colors and performed either a color-naming or diverted attention task. A forward model was used to extract lower dimensional neural color spaces from the high-dimensional fMRI responses. The neural color spaces in two visual areas, human ventral V4 (V4v) and VO1, exhibited clustering (greater similarity between activity patterns evoked by stimulus colors within a perceptual category, compared to between-category colors) for the color-naming task, but not for the diverted attention task. Response amplitudes and signal-to-noise ratios were higher in most visual cortical areas for color naming compared to diverted attention. But only in V4v and VO1 did the cortical representation of color change to a categorical color space. A model is presented that induces such a categorical representation by changing the response gains of subpopulations of color-selective neurons. PMID:24068814

  3. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
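
    The distinction between image-based and retina-based spatial frequency comes down to a viewing-geometry conversion: cycles/image are fixed properties of the picture, whereas cycles/degree depend on the visual angle the picture subtends. A small illustrative calculation (the sizes and distances below are arbitrary, not stimuli from the study):

```python
import math

def cycles_per_degree(cycles_per_image, image_width_cm, viewing_distance_cm):
    """Convert an image-based spatial frequency to a retina-based one.

    The same image always contains the same cycles/image, but its
    cycles/degree scale with the visual angle it subtends.
    """
    visual_angle_deg = 2 * math.degrees(
        math.atan(image_width_cm / (2 * viewing_distance_cm)))
    return cycles_per_image / visual_angle_deg

# An image carrying 8 cycles/image viewed at two different distances:
for distance_cm in (30.0, 120.0):
    cpd = cycles_per_degree(8, 20.0, distance_cm)
    print(f"{distance_cm:.0f} cm -> {cpd:.2f} cycles/degree")
```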

  4. Prestimulus neural oscillations inhibit visual perception via modulation of response gain.

    PubMed

    Chaumon, Maximilien; Busch, Niko A

    2014-11-01

    The ongoing state of the brain radically affects how it processes sensory information. How does this ongoing brain activity interact with the processing of external stimuli? Spontaneous oscillations in the alpha range are thought to inhibit sensory processing, but little is known about the psychophysical mechanisms of this inhibition. We recorded ongoing brain activity with EEG while human observers performed a visual detection task with stimuli of different contrast intensities. To move beyond qualitative description, we formally compared psychometric functions obtained under different levels of ongoing alpha power and evaluated the inhibitory effect of ongoing alpha oscillations in terms of contrast or response gain models. This procedure opens the way to understanding the actual functional mechanisms by which ongoing brain activity affects visual performance. We found that strong prestimulus occipital alpha oscillations-but not more anterior mu oscillations-reduce performance most strongly for stimuli of the highest intensities tested. This inhibitory effect is best explained by a divisive reduction of response gain. Ongoing occipital alpha oscillations thus reflect changes in the visual system's input/output transformation that are independent of the sensory input to the system. They selectively scale the system's response, rather than change its sensitivity to sensory information.
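
    The contrast-gain versus response-gain distinction tested here can be made concrete with a standard Naka-Rushton contrast-response function: contrast gain shifts the curve along the contrast axis, whereas response gain divisively scales its output, producing the largest losses at the highest contrasts. A short illustrative comparison (parameter values are arbitrary, not fitted values from the study):

```python
import numpy as np

def naka_rushton(c, rmax=1.0, c50=0.2, n=2.0, baseline=0.0):
    """Contrast-response function commonly used to fit psychometric/neural data."""
    return baseline + rmax * c ** n / (c ** n + c50 ** n)

contrast = np.logspace(-2, 0, 7)
base = naka_rushton(contrast)

# Contrast-gain account: the curve shifts rightward (effective contrast is
# divided), so mid-range contrasts are affected most.
contrast_gain = naka_rushton(contrast, c50=0.2 * 1.5)
# Response-gain account: the whole output is divisively scaled, so the largest
# absolute loss occurs at the highest contrasts, as reported in the study.
response_gain = naka_rushton(contrast) / 1.5

for c, b, cg, rg in zip(contrast, base, contrast_gain, response_gain):
    print(f"c={c:.2f}  base={b:.2f}  contrast-gain={cg:.2f}  response-gain={rg:.2f}")
```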

  5. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man

    PubMed Central

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295

  6. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.

    PubMed

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.

  7. Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex

    PubMed Central

    Jeong, Su Keun

    2016-01-01

    The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642

  8. Subconscious Visual Cues during Movement Execution Allow Correct Online Choice Reactions

    PubMed Central

    Leukel, Christian; Lundbye-Jensen, Jesper; Christensen, Mark Schram; Gollhofer, Albert; Nielsen, Jens Bo; Taube, Wolfgang

    2012-01-01

    Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked the question whether visual information, which is not consciously perceived, could influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or had to abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues, which they were not conscious about. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task. PMID:23049749

  9. Food's visually perceived fat content affects discrimination speed in an orthogonal spatial task.

    PubMed

    Harrar, Vanessa; Toepel, Ulrike; Murray, Micah M; Spence, Charles

    2011-10-01

    Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.

  10. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  11. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. A Computational Model of Afterimage Rotation in the Peripheral Drift Illusion Based on Retinal ON/OFF Responses

    PubMed Central

    Hayashi, Yuichiro; Ishii, Shin; Urakubo, Hidetoshi

    2014-01-01

    Human observers perceive illusory rotations after the disappearance of circularly repeating patches containing dark-to-light luminance gradients. This afterimage rotation is a very powerful phenomenon, but little is known about the mechanisms underlying it. Here, we use a computational model to show that the afterimage rotation can be explained by a combination of fast light adaptation and the physiological architecture of the early visual system, consisting of ON- and OFF-type visual pathways. In this retinal ON/OFF model, the afterimage rotation appeared as a rotation of focus lines of retinal ON/OFF responses. Focus lines rotated clockwise on a light background, but counterclockwise on a dark background. These findings were consistent with the results of psychophysical experiments, which were also performed by us. Additionally, the velocity of the afterimage rotation was comparable with that observed in our psychophysical experiments. These results suggest that the early visual system (including the retina) is responsible for the generation of the afterimage rotation, and that this illusory rotation may be systematically misinterpreted by our high-level visual system. PMID:25517906
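
    The model's two ingredients, fast light adaptation and rectifying ON/OFF pathways, can be caricatured with a leaky-integrator adaptation stage followed by half-wave rectification. The sketch below is only a toy version of that idea with arbitrary time constants, not the published model.

```python
import numpy as np

def on_off_responses(luminance, dt=1e-3, tau=0.05):
    """Rectified ON/OFF signals after fast luminance adaptation.

    A leaky integrator tracks the recent mean luminance; ON responses are the
    rectified positive deviation from it, OFF responses the rectified negative
    deviation. Parameters are illustrative, not fitted values from the paper.
    """
    adapt = np.empty_like(luminance)
    adapt[0] = luminance[0]
    for i in range(1, len(luminance)):
        adapt[i] = adapt[i - 1] + dt / tau * (luminance[i] - adapt[i - 1])
    deviation = luminance - adapt
    return np.maximum(deviation, 0), np.maximum(-deviation, 0)

# A light patch appearing on a grey background and later disappearing produces
# a transient ON response at onset and a transient OFF response at offset.
t = np.arange(0, 1.0, 1e-3)
stimulus = np.full_like(t, 0.5)            # grey background
stimulus[(t >= 0.2) & (t < 0.6)] = 0.8     # light patch on for 400 ms
on, off = on_off_responses(stimulus)
print("peak ON:", round(float(on.max()), 3), "peak OFF:", round(float(off.max()), 3))
```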

  13. Vocal and visual stimulation, congruence and lateralization affect brain oscillations in interspecies emotional positive and negative interactions.

    PubMed

    Balconi, Michela; Vanutelli, Maria Elide

    2016-01-01

    The present research explored the effect of cross-modal integration of emotional cues (auditory and visual, AV) compared with visual-only (V) emotional cues when observing interspecies interactions. Brain activity was monitored while subjects processed AV and V situations representing an emotional (positive or negative) interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also manipulated. Electroencephalography (EEG) brain oscillations (from delta to beta) were analyzed, and cortical source localization (by standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Frequency-band analysis (mainly of low-frequency delta and theta) showed a significant increase in brain activity in response to negative compared to positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with an increased effect for AV compared with V. Finally, the delta band supported a lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contributions of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in responses to interspecies emotional interactions are discussed in light of a "negative lateralized effect."

  14. Asymmetric top-down modulation of ascending visual pathways in pigeons.

    PubMed

    Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur

    2016-03-01

    Cerebral asymmetries are a ubiquitous phenomenon evident in many species, incl. humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.

  15. Early detection and visualization of human adenovirus serotype 5-viral vectors carrying foot-and-mouth disease virus or luciferase transgenes in cell lines and bovine tissues

    USDA-ARS?s Scientific Manuscript database

    Recombinant replication-defective human adenovirus type 5 (Ad5) vaccines containing capsid-coding regions from foot-and-mouth disease virus (FMDV) have been demonstrated to induce effective immune responses and provide homologous protective immunity against FMDV in cattle. However, basic mechanisms ...

  16. The human oculomotor response to simultaneous visual and physical movements at two different frequencies

    NASA Technical Reports Server (NTRS)

    Wall, C.; Assad, A.; Aharon, G.; Dimitri, P. S.; Harris, L. R.

    2001-01-01

    In order to investigate interactions in the visual and vestibular systems' oculomotor response to linear movement, we developed a two-frequency stimulation technique. Thirteen subjects lay on their backs and were oscillated sinusoidally along their z-axes at frequencies between 0.31 and 0.81 Hz. During the oscillation, subjects viewed a large, high-contrast visual pattern oscillating in the same direction as the physical motion but at a different, non-harmonically related frequency. The evoked eye movements were measured by video-oculography and spectrally analysed. We found significant signal levels at the sum and difference frequencies as well as at other frequencies not present in either stimulus. The emergence of new frequencies indicates non-linear processing consistent with an agreement-detector system that we have previously proposed.
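
    The spectral signature of a non-linear visual-vestibular interaction is easy to demonstrate: a multiplicative combination of two sinusoids produces components at their sum and difference frequencies, which a purely linear summation does not. A short illustration (the frequencies and interaction gain below are arbitrary, chosen only to fall near the stimulus range used):

```python
import numpy as np

fs = 250.0                       # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)     # 60 s record
f_vest, f_vis = 0.30, 0.55       # two non-harmonically related frequencies (Hz)

vestibular = np.sin(2 * np.pi * f_vest * t)
visual = np.sin(2 * np.pi * f_vis * t)

# A purely linear combination contains only the two input frequencies; a
# multiplicative (non-linear) interaction adds components at the sum and
# difference frequencies, which is what the spectral analysis looks for.
nonlinear = vestibular + visual + 0.3 * vestibular * visual

freqs = np.fft.rfftfreq(len(t), 1 / fs)
spectrum = np.abs(np.fft.rfft(nonlinear)) / len(t)
for f in (f_vest, f_vis, f_vis - f_vest, f_vis + f_vest):
    idx = int(np.argmin(np.abs(freqs - f)))
    print(f"{f:.2f} Hz: amplitude {spectrum[idx]:.3f}")
```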

  17. Attentional load modulates responses of human primary visual cortex to invisible stimuli.

    PubMed

    Bahrami, Bahador; Lavie, Nilli; Rees, Geraint

    2007-03-20

    Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

  18. Neurochemical responses to chromatic and achromatic stimuli in the human visual cortex.

    PubMed

    Bednařík, Petr; Tkáč, Ivan; Giove, Federico; Eberly, Lynn E; Deelchand, Dinesh K; Barreto, Felipe R; Mangia, Silvia

    2018-02-01

    In the present study, we aimed at determining the metabolic responses of the human visual cortex during the presentation of chromatic and achromatic stimuli, known to preferentially activate two separate clusters of neuronal populations (called "blobs" and "interblobs") with distinct sensitivity to color or luminance features. Since blobs and interblobs have different cytochrome-oxidase (COX) content and micro-vascularization level (i.e., different capacities for glucose oxidation), different functional metabolic responses during chromatic vs. achromatic stimuli may be expected. The stimuli were optimized to evoke a similar load of neuronal activation as measured by the blood oxygenation level dependent (BOLD) contrast. Metabolic responses were assessed using functional 1H MRS at 7 T in 12 subjects. During both chromatic and achromatic stimuli, we observed the typical increases in glutamate and lactate concentration, and decreases in aspartate and glucose concentration, that are indicative of increased glucose oxidation. However, within the detection sensitivity limits, we did not observe any difference between metabolic responses elicited by chromatic and achromatic stimuli. We conclude that the higher energy demands of activated blobs and interblobs are supported by similar increases in oxidative metabolism despite the different capacities of these neuronal populations.

  19. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

    Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses to identify candidate multisensory interactions. The second step eliminates the possibility that simple double addition of the sensory responses could be misinterpreted as an interaction. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
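    As a rough sketch of the first step of such an additive test (the second, control step is omitted here), one can compare the mean bimodal response against the sum of the mean unisensory responses. The trial counts, response magnitudes, and variable names below are hypothetical, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical single-trial response magnitudes (e.g., high-gamma power) for
    # visual-only (V), tactile-only (T), and bimodal (VT) trials at one electrode.
    v = rng.normal(1.0, 0.2, 200)
    t = rng.normal(0.8, 0.2, 200)
    vt = rng.normal(1.4, 0.2, 200)

    # Step 1 of an additive test: does the bimodal response depart from the sum of
    # the unisensory responses?  Negative values indicate suppressive integration.
    interaction = vt.mean() - (v.mean() + t.mean())
    print(f"mean interaction (VT - (V + T)): {interaction:.2f}")
    ```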

  20. Sensitive periods for the functional specialization of the neural system for human face processing.

    PubMed

    Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide

    2013-10-15

    The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.

  1. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

  2. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
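    The maximum-likelihood (reliability-weighted) combination rule referenced here has a simple closed form: each cue is weighted by its inverse variance, and the fused estimate has lower variance than either cue alone. The sketch below is a generic illustration of that rule with made-up auditory and visual location estimates, not the study's fitting procedure.

    ```python
    import numpy as np

    def mle_integrate(mu_a, sigma_a, mu_v, sigma_v):
        """Reliability-weighted (maximum-likelihood) fusion of an auditory and a
        visual location estimate. Reliability is the inverse variance; the fused
        estimate is the reliability-weighted mean, and its variance is the inverse
        of the summed reliabilities."""
        r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2      # reliabilities
        w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)    # normalized weights
        mu_av = w_a * mu_a + w_v * mu_v                    # fused location
        sigma_av = np.sqrt(1.0 / (r_a + r_v))              # fused uncertainty (< either cue)
        return mu_av, sigma_av

    # Example: a sharp auditory cue (sigma 2 deg) and a blurry visual cue (sigma 4 deg)
    print(mle_integrate(mu_a=5.0, sigma_a=2.0, mu_v=0.0, sigma_v=4.0))
    ```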

  3. Early suppression effect in human primary visual cortex during Kanizsa illusion processing: A magnetoencephalographic evidence.

    PubMed

    Chernyshev, Boris V; Pronko, Platon K; Stroganova, Tatiana A

    2016-01-01

    Detection of illusory contours (ICs) such as Kanizsa figures is known to depend primarily upon the lateral occipital complex. Yet there is no universal agreement on the role of the primary visual cortex in this process; some existing evidence hints that an early stage of the visual response in V1 may involve relative suppression of responses to Kanizsa figures compared with controls. Iso-oriented luminance borders, which are responsible for Kanizsa illusion, may evoke surround suppression in V1 and adjacent areas leading to the reduction in the initial response to Kanizsa figures. We attempted to test the existence, as well as to find localization and timing of the early suppression effect produced by Kanizsa figures in adult nonclinical human participants. We used two sizes of visual stimuli (4.5 and 9.0°) in order to probe the effect at two different levels of eccentricity; the stimuli were presented centrally in passive viewing conditions. We recorded magnetoencephalogram, which is more sensitive than electroencephalogram to activity originating from V1 and V2 areas. We restricted our analysis to the medial occipital area and the occipital pole, and to a 40-120 ms time window after the stimulus onset. By applying threshold-free cluster enhancement technique in combination with permutation statistics, we were able to detect the inverted IC effect: a relative suppression of the response to the Kanizsa figures compared with the control stimuli. The current finding is highly compatible with the explanation involving surround suppression evoked by iso-oriented collinear borders. The effect may be related to the principle of sparse coding, according to which V1 suppresses representations of inner parts of collinear assemblies as being informationally redundant. Such a mechanism is likely to be an important preliminary step preceding object contour detection.

  4. Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.

    PubMed

    Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J

    2017-02-01

    Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, there is also great fluctuation of vividness within individuals, such that only looking at differences between people necessarily obscures the picture. In this study, we show that variation in moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery is similar to the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors 0270-6474/17/371367-07$15.00/0.

  5. Elevating Endogenous GABA Levels with GAT-1 Blockade Modulates Evoked but Not Induced Responses in Human Visual Cortex

    PubMed Central

    Muthukumaraswamy, Suresh D; Myers, Jim F M; Wilson, Sue J; Nutt, David J; Hamandi, Khalid; Lingford-Hughes, Anne; Singh, Krish D

    2013-01-01

    The electroencephalographic/magnetoencephalographic (EEG/MEG) signal is generated primarily by the summation of the postsynaptic currents of cortical principal cells. At a microcircuit level, these glutamatergic principal cells are reciprocally connected to GABAergic interneurons. Here we investigated the relative sensitivity of visual evoked and induced responses to altered levels of endogenous GABAergic inhibition. To do this, we pharmacologically manipulated the GABA system using tiagabine, which blocks the synaptic GABA transporter 1, and so increases endogenous GABA levels. In a single-blinded and placebo-controlled crossover study of 15 healthy participants, we administered either 15 mg of tiagabine or a placebo. We recorded whole-head MEG, while participants viewed a visual grating stimulus, before, 1, 3 and 5 h post tiagabine ingestion. Using beamformer source localization, we reconstructed responses from early visual cortices. Our results showed no change in either stimulus-induced gamma-band amplitude increases or stimulus-induced alpha amplitude decreases. However, the same data showed a 45% reduction in the evoked response component at ∼80 ms. These data demonstrate that, in early visual cortex the evoked response shows a greater sensitivity compared with induced oscillations to pharmacologically increased endogenous GABA levels. We suggest that previous studies correlating GABA concentrations as measured by magnetic resonance spectroscopy to gamma oscillation frequency may reflect underlying variations such as interneuron/inhibitory synapse density rather than functional synaptic GABA concentrations. PMID:23361120

  6. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLPFC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended 'what' pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions, as well as prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages has an important role for communication: Together with similar findings in the auditory modality the findings imply that task-dependent modulation of the sensory thalami is a general mechanism to optimize speech recognition. Copyright © 2018. Published by Elsevier Inc.

  8. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomenon was observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.
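    As a toy rendering of the gain-plus-saturation behavior described above (not the authors' control-system model, which is not specified in this abstract), sway can be treated as rising linearly with stimulus amplitude, with a visual gain that may exceed 1 as observed, until a vestibular-mediated ceiling is reached. All parameter values below are hypothetical.

    ```python
    import numpy as np

    def visually_induced_sway(stim_amplitude, visual_gain=2.8, saturation_level=2.0):
        """Sway amplitude grows linearly with visual stimulus amplitude (the gain can
        exceed 1) until a vestibular-mediated saturation level caps the response."""
        return np.minimum(visual_gain * stim_amplitude, saturation_level)

    for amp in (0.25, 0.5, 1.0, 2.0):                   # stimulus amplitude (deg)
        print(amp, visually_induced_sway(amp))          # sway saturates for larger stimuli
    ```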

  9. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  10. Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829

  11. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    PubMed

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.
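    The race-model benchmark implied here can be sketched directly: the redundant-target reaction time predicted by two independent single-feature races is the minimum of the two single-feature reaction times on each trial. The distributions below are hypothetical placeholders, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical single-feature RT distributions (ms) for a color (C) and an
    # orientation (O) singleton target.
    rt_c = rng.normal(550, 80, n)
    rt_o = rng.normal(570, 90, n)

    # Race-model prediction for the redundant CO target: whichever single-feature
    # process finishes first determines the response on that trial.
    rt_race = np.minimum(rt_c, rt_o)
    print(f"mean RT: C={rt_c.mean():.0f}  O={rt_o.mean():.0f}  race prediction={rt_race.mean():.0f} ms")
    # An observed CO mean RT reliably below the race prediction would implicate
    # conjunctively tuned (CO) cells, as argued in the abstract.
    ```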

  12. The uncertain response in humans and animals

    NASA Technical Reports Server (NTRS)

    Smith, J. D.; Shields, W. E.; Schull, J.; Washburn, D. A.; Rumbaugh, D. M. (Principal Investigator)

    1997-01-01

    There has been no comparative psychological study of uncertainty processes. Accordingly, the present experiments asked whether animals, like humans, escape adaptively when they are uncertain. Human and animal observers were given two primary responses in a visual discrimination task, and the opportunity to escape from some trials into easier ones. In one psychophysical task (using a threshold paradigm), humans escaped selectively the difficult trials that left them uncertain of the stimulus. Two rhesus monkeys (Macaca mulatta) also showed this pattern. In a second psychophysical task (using the method of constant stimuli), some humans showed this pattern but one escaped infrequently and nonoptimally. Monkeys showed equivalent individual differences. The data suggest that escapes by humans and monkeys are interesting cognitive analogs and may reflect controlled decisional processes prompted by the perceptual ambiguity at threshold.

  13. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models.

    PubMed

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regards to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.

  14. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  15. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
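    A minimal Monte Carlo sketch of the maximum-of-noisy-representations decision rule described above, assuming a localization response and unit-variance Gaussian noise per display item; the d' value and trial counts are arbitrary. It reproduces the qualitative set-size effect without any capacity-limited serial mechanism.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def search_accuracy(set_size, d_prime, n_trials=20_000):
        """Monte Carlo accuracy for a localization search under the max rule:
        every item contributes unit-variance Gaussian noise, the target adds d_prime
        to its mean, and the observer picks the item with the largest value."""
        target = rng.normal(d_prime, 1.0, n_trials)                  # target representation
        distractors = rng.normal(0.0, 1.0, (n_trials, set_size - 1)) # distractor representations
        correct = target > distractors.max(axis=1)                   # max-rule decision
        return correct.mean()

    for n in (2, 4, 8, 16):
        print(n, round(search_accuracy(n, d_prime=2.0), 3))          # accuracy falls with set size
    ```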

  16. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    PubMed

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.

  17. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in a middle temporal area by simulating the receptive field of neurons in V1 for the motion perception of the human visual system. Motivated by the biological evidence for the visual motion perception, a VQA method is proposed in this paper, which comprises the motion perception quality index and the spatial index. To be more specific, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from the difference of Gaussian filter bank, which produces the motion perception quality index, and the gradient similarity measure is used to evaluate the spatial distortion of the video sequence to get the spatial quality index. The experimental results of the LIVE, CSIQ, and IVP video databases demonstrate that the random forests regression technique trained by the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method has higher consistency with subjective perception and higher generalization capability.
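    The spatial index described above rests on a gradient-similarity comparison between reference and distorted frames. The sketch below shows one common form of such a measure (a normalized product of gradient-magnitude maps); the constant, the Sobel filter choice, and pooling by the mean are assumptions for illustration rather than the paper's exact formulation.

    ```python
    import numpy as np
    from scipy import ndimage

    def gradient_similarity(ref, dist, c=0.01):
        """Spatial quality index: mean similarity between the gradient-magnitude
        maps of a reference frame and a distorted frame (values near 1 = similar)."""
        g_ref = np.hypot(ndimage.sobel(ref, axis=0), ndimage.sobel(ref, axis=1))
        g_dist = np.hypot(ndimage.sobel(dist, axis=0), ndimage.sobel(dist, axis=1))
        sim = (2 * g_ref * g_dist + c) / (g_ref**2 + g_dist**2 + c)
        return sim.mean()

    rng = np.random.default_rng(0)
    frame = rng.random((64, 64))                          # stand-in reference frame
    degraded = ndimage.gaussian_filter(frame, sigma=1.0)  # simulated blur distortion
    print(round(gradient_similarity(frame, frame), 3),
          round(gradient_similarity(frame, degraded), 3))
    ```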

  19. Categorical discrimination of human body parts by magnetoencephalography

    PubMed Central

    Nakamura, Misaki; Yanagisawa, Takufumi; Okamura, Yumiko; Fukuma, Ryohei; Hirata, Masayuki; Araki, Toshihiko; Kamitani, Yukiyasu; Yorifuji, Shiro

    2015-01-01

    Humans recognize body parts in categories. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, when presented either as whole or as isolated parts. These responses occur approximately 190 ms after body images are visualized. The extent to which body-sensitive responses show specificity for different body part categories remains to be largely clarified. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Nine subjects underwent measurements of their brain activities by magnetoencephalography (MEG) while viewing 14 images of feet, hands, mouths, and objects. We decoded categories of the presented images from the MEG signals using a support vector machine (SVM) and calculated their accuracy by 10-fold cross-validation. For each subject, a response that appeared to be a body-sensitive response was observed and the MEG signals corresponding to the three types of body categories were classified based on the signals in the occipitotemporal cortex. The accuracy in decoding body-part categories (with a peak at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. According to the time course and location, the responses are suggested to be body-sensitive and to include information regarding the body-part category. Finally, this non-invasive method can decode category information of a visual object with high temporal and spatial resolution and this result may have a significant impact in the field of brain–machine interface research. PMID:26582986

  20. Categorical discrimination of human body parts by magnetoencephalography.

    PubMed

    Nakamura, Misaki; Yanagisawa, Takufumi; Okamura, Yumiko; Fukuma, Ryohei; Hirata, Masayuki; Araki, Toshihiko; Kamitani, Yukiyasu; Yorifuji, Shiro

    2015-01-01

    Humans recognize body parts in categories. Previous studies have shown that responses in the fusiform body area (FBA) and extrastriate body area (EBA) are evoked by the perception of the human body, when presented either as whole or as isolated parts. These responses occur approximately 190 ms after body images are visualized. The extent to which body-sensitive responses show specificity for different body part categories remains to be largely clarified. We used a decoding method to quantify neural responses associated with the perception of different categories of body parts. Nine subjects underwent measurements of their brain activities by magnetoencephalography (MEG) while viewing 14 images of feet, hands, mouths, and objects. We decoded categories of the presented images from the MEG signals using a support vector machine (SVM) and calculated their accuracy by 10-fold cross-validation. For each subject, a response that appeared to be a body-sensitive response was observed and the MEG signals corresponding to the three types of body categories were classified based on the signals in the occipitotemporal cortex. The accuracy in decoding body-part categories (with a peak at approximately 48%) was above chance (33.3%) and significantly higher than that for random categories. According to the time course and location, the responses are suggested to be body-sensitive and to include information regarding the body-part category. Finally, this non-invasive method can decode category information of a visual object with high temporal and spatial resolution and this result may have a significant impact in the field of brain-machine interface research.
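    The decoding pipeline described in this record (linear SVM, 10-fold cross-validation, three body-part categories with a 33.3% chance level) can be sketched with standard tooling; the synthetic feature matrix below merely stands in for trial-wise MEG sensor data, and the injected category signal is hypothetical.

    ```python
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic stand-in for trial-wise MEG features: 300 trials x 100 sensors,
    # three body-part categories (foot / hand / mouth).
    X = rng.normal(size=(300, 100))
    y = np.repeat([0, 1, 2], 100)
    X[y == 1, :10] += 0.5          # inject weak category-specific signal
    X[y == 2, 10:20] += 0.5

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
    ```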

  1. Visual Aversive Learning Compromises Sensory Discrimination.

    PubMed

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.

  2. Improvement in visual search with practice: mapping learning-related changes in neurocognitive stages of processing.

    PubMed

    Clark, Kait; Appelbaum, L Gregory; van den Berg, Berry; Mitroff, Stephen R; Woldorff, Marty G

    2015-04-01

    Practice can improve performance on visual search tasks; the neural mechanisms underlying such improvements, however, are not clear. Response time typically shortens with practice, but which components of the stimulus-response processing chain facilitate this behavioral change? Improved search performance could result from enhancements in various cognitive processing stages, including (1) sensory processing, (2) attentional allocation, (3) target discrimination, (4) motor-response preparation, and/or (5) response execution. We measured event-related potentials (ERPs) as human participants completed a five-day visual-search protocol in which they reported the orientation of a color popout target within an array of ellipses. We assessed changes in behavioral performance and in ERP components associated with various stages of processing. After practice, response time decreased in all participants (while accuracy remained consistent), and electrophysiological measures revealed modulation of several ERP components. First, amplitudes of the early sensory-evoked N1 component at 150 ms increased bilaterally, indicating enhanced visual sensory processing of the array. Second, the negative-polarity posterior-contralateral component (N2pc, 170-250 ms) was earlier and larger, demonstrating enhanced attentional orienting. Third, the amplitude of the sustained posterior contralateral negativity component (SPCN, 300-400 ms) decreased, indicating facilitated target discrimination. Finally, faster motor-response preparation and execution were observed after practice, as indicated by latency changes in both the stimulus-locked and response-locked lateralized readiness potentials (LRPs). These electrophysiological results delineate the functional plasticity in key mechanisms underlying visual search with high temporal resolution and illustrate how practice influences various cognitive and neural processing stages leading to enhanced behavioral performance. Copyright © 2015 the authors 0270-6474/15/355351-09$15.00/0.

  3. Two different mechanisms support selective attention at different phases of training.

    PubMed

    Itthipuripat, Sirawaj; Cha, Kexin; Byers, Anna; Serences, John T

    2017-06-01

    Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes.
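    Why the behavioral data alone cannot distinguish the two mechanisms follows from the signal detection framework itself: sensitivity d' is the ratio of the attention-related signal separation to the trial-to-trial noise, so amplifying the response (gain) and shrinking the noise move d' identically. The toy numbers below simply illustrate that equivalence; they are not the study's model fits.

    ```python
    def d_prime(delta_mu, sigma):
        """Signal detection sensitivity: separation of the signal and noise
        distributions in units of their common standard deviation."""
        return delta_mu / sigma

    baseline = d_prime(delta_mu=1.0, sigma=1.0)
    gain = d_prime(delta_mu=1.5, sigma=1.0)          # attention amplifies the evoked response
    denoise = d_prime(delta_mu=1.0, sigma=1.0 / 1.5) # attention reduces trial-to-trial noise
    print(round(baseline, 2), round(gain, 2), round(denoise, 2))
    # Gain and noise reduction produce the same d' improvement, hence the need for
    # neural measurements to tell them apart.
    ```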

  4. A visual horizon affects steering responses during flight in fruit flies.

    PubMed

    Caballero, Jorge; Mazo, Chantell; Rodriguez-Pinto, Ivan; Theobald, Jamie C

    2015-09-01

    To navigate well through three-dimensional environments, animals must in some way gauge the distances to objects and features around them. Humans use a variety of visual cues to do this, but insects, with their small size and rigid eyes, are constrained to a more limited range of possible depth cues. For example, insects attend to relative image motion when they move, but cannot change the optical power of their eyes to estimate distance. On clear days, the horizon is one of the most salient visual features in nature, offering clues about orientation, altitude and, for humans, distance to objects. We set out to determine whether flying fruit flies treat moving features as farther off when they are near the horizon. Tethered flies respond strongly to moving images they perceive as close. We measured the strength of steering responses while independently varying the elevation of moving stimuli and the elevation of a virtual horizon. We found responses to vertical bars are increased by negative elevations of their bases relative to the horizon, closely correlated with the inverse of apparent distance. In other words, a bar that dips far below the horizon elicits a strong response, consistent with using the horizon as a depth cue. Wide-field motion also had an enhanced effect below the horizon, but this was only prevalent when flies were additionally motivated with hunger. These responses may help flies tune behaviors to nearby objects and features when they are too far off for motion parallax. © 2015. Published by The Company of Biologists Ltd.

  5. Two different mechanisms support selective attention at different phases of training

    PubMed Central

    Cha, Kexin; Byers, Anna; Serences, John T.

    2017-01-01

    Selective attention supports the prioritized processing of relevant sensory information to facilitate goal-directed behavior. Studies in human subjects demonstrate that attentional gain of cortical responses can sufficiently account for attention-related improvements in behavior. On the other hand, studies using highly trained nonhuman primates suggest that reductions in neural noise can better explain attentional facilitation of behavior. Given the importance of selective information processing in nearly all domains of cognition, we sought to reconcile these competing accounts by testing the hypothesis that extensive behavioral training alters the neural mechanisms that support selective attention. We tested this hypothesis using electroencephalography (EEG) to measure stimulus-evoked visual responses from human subjects while they performed a selective spatial attention task over the course of ~1 month. Early in training, spatial attention led to an increase in the gain of stimulus-evoked visual responses. Gain was apparent within ~100 ms of stimulus onset, and a quantitative model based on signal detection theory (SDT) successfully linked the magnitude of this gain modulation to attention-related improvements in behavior. However, after extensive training, this early attentional gain was eliminated even though there were still substantial attention-related improvements in behavior. Accordingly, the SDT-based model required noise reduction to account for the link between the stimulus-evoked visual responses and attentional modulations of behavior. These findings suggest that training can lead to fundamental changes in the way attention alters the early cortical responses that support selective information processing. Moreover, these data facilitate the translation of results across different species and across experimental procedures that employ different behavioral training regimes. PMID:28654635

  6. Innervation of the human cricopharyngeal muscle by the recurrent laryngeal nerve and external branch of the superior laryngeal nerve.

    PubMed

    Uludag, Mehmet; Aygun, Nurcihan; Isgor, Adnan

    2017-06-01

    The major component of the upper esophageal sphincter is the cricopharyngeal muscle (CPM). We assessed the contribution of the laryngeal nerves to motor innervation of the CPM. We performed an intraoperative electromyographic study of 27 patients. The recurrent laryngeal nerve (RLN), vagus nerve, external branch of the superior laryngeal nerve (EBSLN), and pharyngeal plexus (PP) were stimulated. Responses were evaluated by visual observation of CPM contractions and electromyographic examination via insertion of needle electrodes into the CPM. In total, 46 CPMs (24 right, 22 left) were evaluated. PP stimulation produced both positive visual contractions and electromyographic (EMG) responses in 42 CPMs (2080 ± 1583 μV). EBSLN stimulation produced visual contractions of 28 CPMs and positive EMG responses in 35 CPMs (686 ± 630 μV). Stimulation of 45 RLNs produced visible contractions of 37 CPMs and positive EMG activity in 41 CPMs (337 ± 280 μV). Stimulation of 42 vagal nerves resulted in visible contractions of 36 CPMs and positive EMG responses in 37 CPMs (292 ± 229 μV). Motor activity was noted in 32 CPMs by both RLN and EBSLN stimulation, 9 CPMs by RLN stimulation, and 3 CPMs by EBSLN stimulation; 2 CPMs exhibited no response. This is the first study to show that the EBSLN contributes to motor innervation of the human CPM. The RLN, the EBSLN, and both nerves jointly innervated 90%, 75%, and 70% of the CPMs ipsilaterally, respectively.

  7. Responses of single cells in cat visual cortex to prolonged stimulus movement: neural correlates of visual aftereffects.

    PubMed

    Vautin, R G; Berkley, M A

    1977-09-01

    1. The activity of single cortical cells in area 17 of anesthetized and unanesthetized cats was recorded in response to prolonged stimulation with moving stimuli. 2. Under the appropriate conditions, all cells observed showed a progressive response decrement during the stimulation period, regardless of cell classification, i.e., simple, complex, or hypercomplex. 3. The observed response decrement was shown to be largely cortical in origin and could be adequately described with an exponential function of the form R(t) = R_f + (R_1 - R_f)·exp(-t/T). Time constants derived from such calculations yielded values ranging from 1.92 to 12.45 s under conditions of optimal stimulation. 4. Most cells showed poststimulation effects, usually a brief period of reduced responsiveness that recovered exponentially. Recovery was essentially complete in about 5-35 s. 5. The degree to which stimuli were effective at inducing response was shown to have significant effects on the magnitude of the response decrement. 6. Several cells showed neural patterns of response and recovery that suggested the operation of intracortical inhibitory mechanisms. 7. A simple two-process model that adequately describes the behavior of all the studied cells is presented. 8. Because the properties of the cells studied correlate well with human psychophysical measures of contour and movement adaptation and recovery, a causal relationship to similar neural mechanisms in humans is suggested.
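    The decay function above can be fit directly to firing rates recorded during prolonged stimulation to recover the time constant T. The sketch below does this with synthetic rates whose parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def adaptation(t, r_f, r_1, tau):
        """Exponential response decrement: R(t) = R_f + (R_1 - R_f) * exp(-t / tau),
        where tau plays the role of the time constant T in the abstract."""
        return r_f + (r_1 - r_f) * np.exp(-t / tau)

    # Hypothetical firing rates (spikes/s) sampled during 30 s of prolonged stimulation
    t = np.linspace(0, 30, 31)
    rng = np.random.default_rng(1)
    rates = adaptation(t, r_f=10.0, r_1=40.0, tau=5.0) + rng.normal(0, 1.5, t.size)

    (r_f, r_1, tau), _ = curve_fit(adaptation, t, rates, p0=(5.0, 30.0, 3.0))
    print(f"fitted time constant T = {tau:.2f} s")
    ```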

  8. Determinants of motion response anisotropies in human early visual cortex: the role of configuration and eccentricity.

    PubMed

    Maloney, Ryan T; Watson, Tamara L; Clifford, Colin W G

    2014-10-15

    Anisotropies in the cortical representation of various stimulus parameters can reveal the fundamental mechanisms by which sensory properties are analysed and coded by the brain. One example is the preference for motion radial to the point of fixation (i.e. centripetal or centrifugal) exhibited in mammalian visual cortex. In two experiments, this study used functional magnetic resonance imaging (fMRI) to explore the determinants of these radial biases for motion in functionally-defined areas of human early visual cortex, and in particular their dependence upon eccentricity which has been indicated in recent reports. In one experiment, the cortical response to wide-field random dot kinematograms forming 16 different complex motion patterns (including centrifugal, centripetal, rotational and spiral motion) was measured. The response was analysed according to preferred eccentricity within four different eccentricity ranges. Response anisotropies were characterised by enhanced activity for centripetal or centrifugal patterns that changed systematically with eccentricity in visual areas V1-V3 and hV4 (but not V3A/B or V5/MT+). Responses evolved from a preference for centrifugal over centripetal patterns close to the fovea, to a preference for centripetal over centrifugal at the most peripheral region stimulated, in agreement with previous work. These effects were strongest in V2 and V3. In a second experiment, the stimuli were restricted to within narrow annuli either close to the fovea (0.75-1.88°) or further in the periphery (4.82-6.28°), in a way that preserved the local motion information available in the first experiment. In this configuration a preference for radial motion (centripetal or centrifugal) persisted but the dependence upon eccentricity disappeared. Again this was clearest in V2 and V3. A novel interpretation of the dependence upon eccentricity of motion anisotropies in early visual cortex is offered that takes into account the spatiotemporal "predictability" of the moving pattern. Such stimulus predictability, and its relationship to models of predictive coding, has found considerable support in recent years in accounting for a number of other perceptual and neural phenomena. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Spatial scale and distribution of neurovascular signals underlying decoding of orientation and eye of origin from fMRI data

    PubMed Central

    Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida

    2016-01-01

    Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637
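
    The binning logic described above can be illustrated with a toy simulation. The code below is not the authors' analysis pipeline; it assumes a purely large-scale radial bias (each simulated voxel responds slightly more to the orientation matching its polar angle) and then compares cross-validated decoding of two orientations after averaging voxels into polar-angle bins versus randomly assembled bins of the same sizes.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_voxels, n_trials = 400, 120

    # Each voxel has a visual-field polar angle; a radial bias means its response
    # is slightly larger when the stimulus orientation matches that angle.
    polar_angle = rng.uniform(0, 180, n_voxels)             # degrees, collapsed to 0-180
    stim_orientation = rng.choice([45.0, 135.0], n_trials)  # two stimulus classes

    def voxel_responses(orientations):
        bias = np.cos(np.deg2rad(2 * (orientations[:, None] - polar_angle[None, :])))
        return 0.3 * bias + rng.normal(0, 1.0, (orientations.size, n_voxels))

    X = voxel_responses(stim_orientation)
    y = (stim_orientation == 45.0).astype(int)

    def binned_accuracy(bin_labels):
        n_bins = bin_labels.max() + 1
        X_bins = np.column_stack([X[:, bin_labels == b].mean(axis=1) for b in range(n_bins)])
        return cross_val_score(LogisticRegression(), X_bins, y, cv=5).mean()

    angle_bins = (polar_angle // 22.5).astype(int)          # bin voxels by polar angle
    random_bins = rng.permutation(angle_bins)               # same bin sizes, random assignment

    print("decoding, polar-angle bins:", binned_accuracy(angle_bins))
    print("decoding, random bins:     ", binned_accuracy(random_bins))
    ```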

  12. Neuroimaging of amblyopia and binocular vision: a review

    PubMed Central

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, a shift that has prompted a growing number of studies examining the binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them. PMID:25147511

  13. Neuroimaging of amblyopia and binocular vision: a review.

    PubMed

    Joly, Olivier; Frankó, Edit

    2014-01-01

    Amblyopia is a cerebral visual impairment considered to derive from abnormal visual experience (e.g., strabismus, anisometropia). Amblyopia, first considered a monocular disorder, is now often seen as a primarily binocular disorder, a shift that has prompted a growing number of studies examining the binocular deficits in these patients. The neural mechanisms of amblyopia are not completely understood even though they have been investigated with electrophysiological recordings in animal models and more recently with neuroimaging techniques in humans. In this review, we summarize the current knowledge about the brain regions that underlie the visual deficits associated with amblyopia with a focus on binocular vision using functional magnetic resonance imaging. The first studies focused on abnormal responses in the primary and secondary visual areas whereas recent evidence shows that there are also deficits at higher levels of the visual pathways within the parieto-occipital and temporal cortices. These higher level areas are part of the cortical network involved in 3D vision from binocular cues. Therefore, reduced responses in these areas could be related to the impaired binocular vision in amblyopic patients. Promising new binocular treatments might at least partially correct the activation in these areas. Future neuroimaging experiments could help to characterize the brain response changes associated with these treatments and help devise them.

  14. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT ("face patches") did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. Significance statement: We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. Copyright © 2015 the authors 0270-6474/15/3513402-17$15.00/0.
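
    The "simple learned weighted sum" linking hypothesis amounts to training a linear readout on population firing rates and taking its cross-validated accuracy as the predicted behavioral performance. The sketch below illustrates that idea on simulated data; the population size, rate statistics, and the use of scikit-learn's logistic regression are assumptions for illustration, not the recording or analysis details of the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n_neurons, n_images = 200, 400

    # Simulated mean firing rates (e.g., spike counts in a 100-ms window) for two
    # object categories; category identity is carried by a distributed rate pattern.
    category = rng.integers(0, 2, n_images)
    signal_axis = rng.normal(0, 1, n_neurons)
    rates = rng.normal(5, 1.5, (n_images, n_neurons)) + 0.4 * np.outer(category, signal_axis)

    # "Simple learned weighted sum": a linear readout trained on the population rates;
    # its cross-validated accuracy is the predicted recognition performance.
    readout = LogisticRegression(max_iter=1000)
    predicted_performance = cross_val_score(readout, rates, category, cv=5).mean()
    print(f"predicted recognition accuracy: {predicted_performance:.2f}")
    ```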

  15. Perceptual learning modifies untrained pursuit eye movements.

    PubMed

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.

  16. Perceptual learning modifies untrained pursuit eye movements

    PubMed Central

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. PMID:25002412

  17. Attention improves encoding of task-relevant features in the human visual cortex

    PubMed Central

    Jehee, Janneke F.M.; Brady, Devin K.; Tong, Frank

    2011-01-01

    When spatial attention is directed towards a particular stimulus, increased activity is commonly observed in corresponding locations of the visual cortex. Does this attentional increase in activity indicate improved processing of all features contained within the attended stimulus, or might spatial attention selectively enhance the features relevant to the observer’s task? We used fMRI decoding methods to measure the strength of orientation-selective activity patterns in the human visual cortex while subjects performed either an orientation or contrast discrimination task, involving one of two laterally presented gratings. Greater overall BOLD activation with spatial attention was observed in areas V1-V4 for both tasks. However, multivariate pattern analysis revealed that orientation-selective responses were enhanced by attention only when orientation was the task-relevant feature, and not when the grating’s contrast had to be attended. In a second experiment, observers discriminated the orientation or color of a specific lateral grating. Here, orientation-selective responses were enhanced in both tasks but color-selective responses were enhanced only when color was task-relevant. In both experiments, task-specific enhancement of feature-selective activity was not confined to the attended stimulus location, but instead spread to other locations in the visual field, suggesting the concurrent involvement of a global feature-based attentional mechanism. These results suggest that attention can be remarkably selective in its ability to enhance particular task-relevant features, and further reveal that increases in overall BOLD amplitude are not necessarily accompanied by improved processing of stimulus information. PMID:21632942

  18. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
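
    At its core, the sweep ssVEP measure reduces to reading out the EEG amplitude at the stimulation frequency (here 3 Hz) at each coherence step and finding the step at which that amplitude rises out of the noise. The fragment below is a schematic reconstruction with made-up numbers; the sampling rate, epoch length, response size, and the 3x-noise criterion are all assumptions rather than the recording parameters or threshold algorithm used in the study.

    ```python
    import numpy as np

    fs = 500.0                       # EEG sampling rate (Hz), assumed
    f_stim = 3.0                     # face/scramble alternation rate (Hz)
    epoch_dur = 2.0                  # seconds per coherence step, assumed
    t = np.arange(0, epoch_dur, 1 / fs)

    def first_harmonic_amplitude(eeg, fs, f_stim):
        """Amplitude of the response at the stimulation frequency via the FFT."""
        spectrum = np.fft.rfft(eeg) / eeg.size * 2
        freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
        return np.abs(spectrum[np.argmin(np.abs(freqs - f_stim))])

    # Synthetic sweep: the 3 Hz face-specific response emerges abruptly above ~30% coherence.
    coherence_steps = np.arange(5, 105, 5)        # percent phase coherence of the face image
    rng = np.random.default_rng(3)
    amplitudes = []
    for c in coherence_steps:
        signal = (1.5 if c >= 35 else 0.0) * np.sin(2 * np.pi * f_stim * t)
        eeg = signal + rng.normal(0, 1.0, t.size)  # noise only below the emergence point
        amplitudes.append(first_harmonic_amplitude(eeg, fs, f_stim))

    # Crude threshold estimate: first step whose amplitude exceeds 3x the low-coherence noise level.
    threshold_idx = np.argmax(np.array(amplitudes) > 3 * np.median(amplitudes[:4]))
    print(f"estimated detection threshold: {coherence_steps[threshold_idx]}% phase coherence")
    ```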

  19. Eccentricity mapping of the human visual cortex to evaluate temporal dynamics of functional T1ρ mapping.

    PubMed

    Heo, Hye-Young; Wemmie, John A; Johnson, Casey P; Thedens, Daniel R; Magnotta, Vincent A

    2015-07-01

    Recent experiments suggest that T1 relaxation in the rotating frame (T1ρ) is sensitive to metabolism and can detect localized activity-dependent changes in the human visual cortex. Current functional magnetic resonance imaging (fMRI) methods have poor temporal resolution due to delays in the hemodynamic response resulting from neurovascular coupling. Because T1ρ is sensitive to factors that can be derived from tissue metabolism, such as pH and glucose concentration via proton exchange, we hypothesized that activity-evoked T1ρ changes in visual cortex may occur before the hemodynamic response measured by blood oxygenation level-dependent (BOLD) and arterial spin labeling (ASL) contrast. To test this hypothesis, functional imaging was performed using T1ρ, BOLD, and ASL in human participants viewing an expanding ring stimulus. We calculated eccentricity phase maps across the occipital cortex for each functional signal and compared the temporal dynamics of T1ρ versus BOLD and ASL. The results suggest that T1ρ changes precede changes in the two blood flow-dependent measures. These observations indicate that T1ρ detects a signal distinct from traditional fMRI contrast methods. In addition, these findings support previous evidence that T1ρ is sensitive to factors other than blood flow, volume, or oxygenation. Furthermore, they suggest that tissue metabolism may be driving activity-evoked T1ρ changes.

  20. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a human-operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. By using the system, accurate indoor 3D visual data is reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons and so forth.

  1. The grammar of visual narrative: Neural evidence for constituent structure in sequential image comprehension.

    PubMed

    Cohn, Neil; Jackendoff, Ray; Holcomb, Phillip J; Kuperberg, Gina R

    2014-11-01

    Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives. We recorded neural responses as participants viewed sequences of visual images (comic strips) in which blank images either disrupted individual narrative constituents or fell at natural constituent boundaries. A disruption of either the first or the second narrative constituent produced a left-lateralized anterior negativity effect between 500 and 700 ms. Disruption of the second constituent also elicited a posteriorly-distributed positivity (P600) effect. These neural responses are similar to those associated with structural violations in language and music. These findings provide evidence that comprehenders use a narrative structure to comprehend visual sequences and that the brain engages similar neurocognitive mechanisms to build structure across multiple domains. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. The grammar of visual narrative: Neural evidence for constituent structure in sequential image comprehension

    PubMed Central

    Cohn, Neil; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.

    2014-01-01

    Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives. We recorded neural responses as participants viewed sequences of visual images (comic strips) in which blank images either disrupted individual narrative constituents or fell at natural constituent boundaries. A disruption of either the first or the second narrative constituent produced a left-lateralized anterior negativity effect between 500 and 700 ms. Disruption of the second constituent also elicited a posteriorly-distributed positivity (P600) effect. These neural responses are similar to those associated with structural violations in language and music. These findings provide evidence that comprehenders use a narrative structure to comprehend visual sequences and that the brain engages similar neurocognitive mechanisms to build structure across multiple domains. PMID:25241329

  3. The touchscreen operant platform for assessing executive function in rats and mice

    PubMed Central

    Mar, Adam C.; Horner, Alexa E.; Nilsson, Simon R.O.; Alsiö, Johan; Kent, Brianne A.; Kim, Chi Hun; Holmes, Andrew; Saksida, Lisa M.; Bussey, Timothy J.

    2014-01-01

    Summary This protocol details a subset of assays developed within the touchscreen platform to measure aspects of executive function in rodents. Three main procedures are included: Extinction, measuring the rate and extent of curtailing a response that was previously, but is no longer, associated with reward; Reversal Learning, measuring the rate and extent of switching a response toward a visual stimulus that was previously not, but has become, associated with reward (and away from a visual stimulus that was previously, but is no longer, rewarded); and the 5-Choice Serial Reaction Time (5-CSRT) task, gauging the ability to selectively detect and appropriately respond to briefly presented, spatially unpredictable visual stimuli. These methods were designed to assess both complementary and overlapping constructs including selective and divided visual attention, inhibitory control, flexibility, impulsivity and compulsivity. The procedures comprise part of a wider touchscreen test battery assessing cognition in rodents with high potential for translation to human studies. PMID:24051960

  4. The Time Course of Segmentation and Cue-Selectivity in the Human Visual Cortex

    PubMed Central

    Appelbaum, Lawrence G.; Ales, Justin M.; Norcia, Anthony M.

    2012-01-01

    Texture discontinuities are a fundamental cue by which the visual system segments objects from their background. The neural mechanisms supporting texture-based segmentation are therefore critical to visual perception and cognition. In the present experiment we employ an EEG source-imaging approach in order to study the time course of texture-based segmentation in the human brain. Visual Evoked Potentials were recorded to four types of stimuli in which periodic temporal modulation of a central 3° figure region could either support figure-ground segmentation, or have identical local texture modulations but not produce changes in global image segmentation. The image discontinuities were defined either by orientation or phase differences across image regions. Evoked responses to these four stimuli were analyzed both at the scalp and on the cortical surface in retinotopic and functional regions-of-interest (ROIs) defined separately using fMRI on a subject-by-subject basis. Texture segmentation (tsVEP: segmenting versus non-segmenting) and cue-specific (csVEP: orientation versus phase) responses exhibited distinctive patterns of activity. Alternations between uniform and segmented images produced highly asymmetric responses that were larger after transitions from the uniform to the segmented state. Texture modulations that signaled the appearance of a figure evoked a pattern of increased activity starting at ∼143 ms that was larger in V1 and LOC ROIs, relative to identical modulations that didn't signal figure-ground segmentation. This segmentation-related activity occurred after an initial response phase that did not depend on the global segmentation structure of the image. The two cue types evoked similar tsVEPs up to 230 ms when they differed in the V4 and LOC ROIs. The evolution of the response proceeded largely in the feed-forward direction, with only weak evidence for feedback-related activity. PMID:22479566

  5. Evidence for auditory-visual processing specific to biological motion.

    PubMed

    Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F

    2012-01-01

    Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.

  6. "Visual" Cortex of Congenitally Blind Adults Responds to Syntactic Movement.

    PubMed

    Lane, Connor; Kanjlia, Shipra; Omaki, Akira; Bedny, Marina

    2015-09-16

    Human cortex is comprised of specialized networks that support functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity offer unique insights into this question. In congenitally blind individuals, "visual" cortex responds to auditory and tactile stimuli. Remarkably, recent evidence suggests that occipital areas participate in language processing. We asked whether in blindness, occipital cortices: (1) develop domain-specific responses to language and (2) respond to a highly specialized aspect of language, syntactic movement. Nineteen congenitally blind and 18 sighted participants took part in two fMRI experiments. We report that in congenitally blind individuals, but not in sighted controls, "visual" cortex is more active during sentence comprehension than during a sequence memory task with nonwords, or a symbolic math task. This suggests that areas of occipital cortex become selective for language, relative to other similar higher-cognitive tasks. Crucially, we find that these occipital areas respond more to sentences with syntactic movement but do not respond to the difficulty of math equations. We conclude that regions within the visual cortex of blind adults are involved in syntactic processing. Our findings suggest that the cognitive function of human cortical areas is largely determined by input during development. Human cortex is made up of specialized regions that perform different functions, such as visual motion perception and language processing. How do genes and experience contribute to this specialization? Studies of plasticity show that cortical areas can change function from one sensory modality to another. Here we demonstrate that input during development can alter cortical function even more dramatically. In blindness a subset of "visual" areas becomes specialized for language processing. Crucially, we find that the same "visual" areas respond to a highly specialized and uniquely human aspect of language, syntactic movement. These data suggest that human cortex has broad functional capacity during development, and input plays a major role in determining functional specialization. Copyright © 2015 the authors 0270-6474/15/3512859-10$15.00/0.

  7. Enhanced Visual Cortical Activation for Emotional Stimuli is Preserved in Patients with Unilateral Amygdala Resection

    PubMed Central

    Edmiston, E. Kale; McHugo, Maureen; Dukic, Mildred S.; Smith, Stephen D.; Abou-Khalil, Bassel; Eggers, Erica

    2013-01-01

    Emotionally arousing pictures induce increased activation of visual pathways relative to emotionally neutral images. A predominant model for the preferential processing and attention to emotional stimuli posits that the amygdala modulates sensory pathways through its projections to visual cortices. However, recent behavioral studies have found intact perceptual facilitation of emotional stimuli in individuals with amygdala damage. To determine the importance of the amygdala to modulations in visual processing, we used functional magnetic resonance imaging to examine visual cortical blood oxygenation level-dependent (BOLD) signal in response to emotionally salient and neutral images in a sample of human patients with unilateral medial temporal lobe resection that included the amygdala. Adults with right (n = 13) or left (n = 5) medial temporal lobe resections were compared with demographically matched healthy control participants (n = 16). In the control participants, both aversive and erotic images produced robust BOLD signal increases in bilateral primary and secondary visual cortices relative to neutral images. Similarly, all patients with amygdala resections showed enhanced visual cortical activations to erotic images both ipsilateral and contralateral to the lesion site. All but one of the amygdala resection patients showed similar enhancements to aversive stimuli and there were no significant group differences in visual cortex BOLD responses in patients compared with controls for either aversive or erotic images. Our results indicate that neither the right nor left amygdala is necessary for the heightened visual cortex BOLD responses observed during emotional stimulus presentation. These data challenge an amygdalo-centric model of emotional modulation and suggest that non-amygdalar processes contribute to the emotional modulation of sensory pathways. PMID:23825407

  8. Effects of spatial frequency and location of fearful faces on human amygdala activity.

    PubMed

    Morawetz, Carmen; Baudewig, Juergen; Treue, Stefan; Dechent, Peter

    2011-01-31

    Facial emotion perception plays a fundamental role in interpersonal social interactions. Images of faces contain visual information at various spatial frequencies. The amygdala has previously been reported to be preferentially responsive to low-spatial frequency (LSF) rather than to high-spatial frequency (HSF) filtered images of faces presented at the center of the visual field. Furthermore, it has been proposed that the amygdala might be especially sensitive to affective stimuli in the periphery. In the present study we investigated the impact of spatial frequency and stimulus eccentricity on face processing in the human amygdala and fusiform gyrus using functional magnetic resonance imaging (fMRI). The spatial frequencies of pictures of fearful faces were filtered to produce images that retained only LSF or HSF information. Facial images were presented either in the left or right visual field at two different eccentricities. In contrast to previous findings, we found that the amygdala responds to LSF and HSF stimuli in a similar manner regardless of the location of the affective stimuli in the visual field. Furthermore, the fusiform gyrus did not show differential responses to spatial frequency filtered images of faces. Our findings argue against the view that LSF information plays a crucial role in the processing of facial expressions in the amygdala and of a higher sensitivity to affective stimuli in the periphery. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Prospects for Quantitative fMRI: Investigating the Effects of Caffeine on Baseline Oxygen Metabolism and the Response to a Visual Stimulus in Humans

    PubMed Central

    Griffeth, Valerie E.M.; Perthen, Joanna E.; Buxton, Richard B.

    2011-01-01

    Functional magnetic resonance imaging (fMRI) provides an indirect reflection of neural activity change in the working brain through detection of blood oxygenation level dependent (BOLD) signal changes. Although widely used to map patterns of brain activation, fMRI has not yet met its potential for clinical and pharmacological studies due to difficulties in quantitatively interpreting the BOLD signal. This difficulty is due to the BOLD response being strongly modulated by two physiological factors in addition to the level of neural activity: the amount of deoxyhemoglobin present in the baseline state and the coupling ratio, n, of evoked changes in blood flow and oxygen metabolism. In this study, we used a quantitative fMRI approach with dual measurement of blood flow and BOLD responses to overcome these limitations and show that these two sources of modulation work in opposite directions following caffeine administration in healthy human subjects. A strong 27% reduction in baseline blood flow and a 22% increase in baseline oxygen metabolism after caffeine consumption led to a decrease in baseline blood oxygenation and was expected to increase the subsequent BOLD response to the visual stimulus. Opposing this, caffeine reduced n through a strong 61% increase in the evoked oxygen metabolism response to the visual stimulus. The combined effect was that BOLD responses pre- and post-caffeine were similar despite large underlying physiological changes, indicating that the magnitude of the BOLD response alone should not be interpreted as a direct measure of underlying neurophysiological changes. Instead, a quantitative methodology based on dual-echo measurement of blood flow and BOLD responses is a promising tool for applying fMRI to disease and drug studies in which both baseline conditions and the coupling of blood flow and oxygen metabolism responses to a stimulus may be altered. PMID:21586328
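
    The interplay described above, in which a larger evoked oxygen-metabolism response (a smaller coupling ratio n) shrinks the BOLD signal even when the flow response is unchanged, can be made concrete with the standard Davis calibrated-BOLD model. The snippet below is an illustration only: the model form is the one commonly used in calibrated fMRI, but the scaling parameter M and the exponents are generic literature values, not values estimated in this study.

    ```python
    # Davis calibrated-BOLD model: fractional BOLD change as a function of the
    # stimulus-evoked CBF and CMRO2 ratios. M, alpha, and beta are assumed,
    # typical values; this is not the authors' fitted model.
    def davis_bold(cbf_ratio, cmro2_ratio, M=0.08, alpha=0.38, beta=1.5):
        """Fractional BOLD change for given evoked CBF and CMRO2 ratios."""
        return M * (1 - cmro2_ratio ** beta * cbf_ratio ** (alpha - beta))

    # Same evoked flow response (+50%), but different flow-metabolism coupling n:
    flow_change = 1.50
    for n in (3.0, 2.0):                       # n = %change in CBF / %change in CMRO2
        cmro2_change = 1 + (flow_change - 1) / n
        print(f"n = {n}: BOLD response = {100 * davis_bold(flow_change, cmro2_change):.2f}%")
    ```

    With these illustrative numbers, dropping n from 3 to 2 roughly halves the predicted BOLD response for an identical flow change, which is the sense in which a stronger metabolic response can mask underlying neural or vascular changes.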

  10. Dissociation of neural mechanisms underlying orientation processing in humans

    PubMed Central

    Ling, Sam; Pearson, Joel; Blake, Randolph

    2009-01-01

    Summary Orientation selectivity is a fundamental, emergent property of neurons in early visual cortex, and discovery of that property [1, 2] dramatically shaped how we conceptualize visual processing [3–6]. However, much remains unknown about the neural substrates of these basic building blocks of perception, and what is known primarily stems from animal physiology studies. To probe the neural concomitants of orientation processing in humans, we employed repetitive transcranial magnetic stimulation (rTMS) to attenuate neural responses evoked by stimuli presented within a local region of the visual field. Previous physiological studies have shown that rTMS can significantly suppress the neuronal spiking activity, hemodynamic responses, and local field potentials within a focused cortical region [7, 8]. By suppressing neural activity with rTMS, we were able to dissociate components of the neural circuitry underlying two distinct aspects of orientation processing: selectivity and contextual effects. Orientation selectivity gauged by masking was unchanged by rTMS, whereas an otherwise robust orientation repulsion illusion was weakened following rTMS. This dissociation implies that orientation processing relies on distinct mechanisms, only one of which was impacted by rTMS. These results are consistent with models positing that orientation selectivity is largely governed by the patterns of convergence of thalamic afferents onto cortical neurons, with intracortical activity then shaping population responses contained within those orientation-selective cortical neurons. PMID:19682905

  11. Dim light adaptation attenuates acute melatonin suppression in humans.

    PubMed

    Jasser, Samar A; Hanifin, John P; Rollag, Mark D; Brainard, George C

    2006-10-01

    Studies in rodents with retinal degeneration indicated that neither the rod nor the cone photoreceptors obligatorily participate in circadian responses to light, including melatonin suppression and photoperiodic response. Yet there is a residual phase-shifting response in melanopsin knockout mice, which suggests an alternate or redundant means for light input to the SCN of the hypothalamus. The findings of Aggelopoulos and Meissl suggest a complex, dynamic interrelationship between the classic visual photoreceptors and SCN cell sensitivity to light stimuli, relative to various adaptive lighting conditions. These studies raised the possibility that the phototransductive physiology of the retinohypothalamic tract in humans might be modulated by the visual rod and cone photoreceptors. The aim of the following two-part study was to test the hypothesis that dim light adaptation will dampen the subsequent suppression of melatonin by monochromatic light in healthy human subjects. Each experiment included 5 female and 3 male human subjects between the ages of 18 and 30 years, with normal color vision. Dim white light and darkness adaptation exposures occurred between midnight and 0200 h, and a full-field 460-nm light exposure subsequently occurred between 0200 and 0330 h for each adaptation condition, at 2 different intensities. Plasma samples were drawn following the 2-h adaptation, as well as after the 460-nm monochromatic light exposure, and melatonin was measured by radioimmunoassay. Comparison of melatonin suppression responses to monochromatic light in both studies revealed a loss of significant suppression after dim white light adaptation compared with dark adaptation (p < 0.04 and p < 0.01). These findings indicate that the activity of the novel circadian photoreceptive system in humans is subject to subthreshold modulation of its sensitivity to subsequent monochromatic light exposure, varying with the conditions of light adaptation prior to exposure.

  12. A non-invasive method for studying an index of pupil diameter and visual performance in the rhesus monkey.

    PubMed

    Fairhall, Sarah J; Dickson, Carol A; Scott, Leah; Pearce, Peter C

    2006-04-01

    A non-invasive model has been developed to estimate gaze direction and relative pupil diameter, in minimally restrained rhesus monkeys, to investigate the effects of low doses of ocularly administered cholinergic compounds on visual performance. Animals were trained to co-operate with a novel device, which enabled eye movements to be recorded using modified human eye-tracking equipment, and to perform a task which determined visual threshold contrast. Responses were made by gaze transfer under twilight conditions. 4% w/v pilocarpine nitrate was studied to demonstrate the suitability of the model. Pilocarpine induced marked miosis for >3 h which was accompanied by a decrement in task performance. The method obviates the need for invasive surgery and, as the position of point of gaze can be approximately defined, the approach may have utility in other areas of research involving non-human primates.

  13. Layer-Specific fMRI Reflects Different Neuronal Computations at Different Depths in Human V1

    PubMed Central

    Olman, Cheryl A.; Harel, Noam; Feinberg, David A.; He, Sheng; Zhang, Peng; Ugurbil, Kamil; Yacoub, Essa

    2012-01-01

    Recent work has established that cerebral blood flow is regulated at a spatial scale that can be resolved by high field fMRI to show cortical columns in humans. While cortical columns represent a cluster of neurons with similar response properties (spanning from the pial surface to the white matter), important information regarding neuronal interactions and computational processes is also contained within a single column, distributed across the six cortical lamina. A basic understanding of underlying neuronal circuitry or computations may be revealed through investigations of the distribution of neural responses at different cortical depths. In this study, we used T2-weighted imaging with 0.7 mm (isotropic) resolution to measure fMRI responses at different depths in the gray matter while human subjects observed images with either recognizable or scrambled (physically impossible) objects. Intact and scrambled images were partially occluded, resulting in clusters of activity distributed across primary visual cortex. A subset of the identified clusters of voxels showed a preference for scrambled objects over intact; in these clusters, the fMRI response in middle layers was stronger during the presentation of scrambled objects than during the presentation of intact objects. A second experiment, using stimuli targeted at either the magnocellular or the parvocellular visual pathway, shows that laminar profiles in response to parvocellular-targeted stimuli peak in more superficial layers. These findings provide new evidence for the differential sensitivity of high-field fMRI to modulations of the neural responses at different cortical depths. PMID:22448223

  14. An Efficient and Versatile System for Visualization and Genetic Modification of Dopaminergic Neurons in Transgenic Mice

    PubMed Central

    Kramer, Edgar R.

    2015-01-01

    Background & Aims: The brain dopaminergic (DA) system is involved in fine-tuning many behaviors, and several human diseases, such as Parkinson’s disease (PD) and drug addiction, are associated with pathological alterations of the DA system. Because of its complex network integration, detailed analyses of physiological and pathophysiological conditions are only possible in a whole organism with a sophisticated toolbox for visualization and functional modification. Methods & Results: Here, we have generated transgenic mice expressing the tetracycline-regulated transactivator (tTA) or the reverse tetracycline-regulated transactivator (rtTA) under control of the tyrosine hydroxylase (TH) promoter, TH-tTA (tet-OFF) and TH-rtTA (tet-ON) mice, to visualize and genetically modify DA neurons. We show their tight regulation and efficient use to overexpress proteins under the control of tet-responsive elements or to delete genes of interest with tet-responsive Cre. In combination with mice encoding tet-responsive luciferase, we visualized the DA system in living mice progressively over time. Conclusion: These experiments establish TH-tTA and TH-rtTA mice as a powerful tool to generate and monitor mouse models for DA system diseases. PMID:26291828

  15. Adaptation and perceptual norms

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms (the stimuli observers describe as psychologically neutral) and response norms (the stimulus levels that leave visual sensitivity in a neutral or balanced state). Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  16. Decoding and reconstructing color from responses in human visual cortex.

    PubMed

    Brouwer, Gijs Joost; Heeger, David J

    2009-11-04

    How is color represented by spatially distributed patterns of activity in visual cortex? Functional magnetic resonance imaging responses to several stimulus colors were analyzed with multivariate techniques: conventional pattern classification, a forward model of idealized color tuning, and principal component analysis (PCA). Stimulus color was accurately decoded from activity in V1, V2, V3, V4, and VO1 but not LO1, LO2, V3A/B, or MT+. The conventional classifier and forward model yielded similar accuracies, but the forward model (unlike the classifier) also reliably reconstructed novel stimulus colors not used to train (specify parameters of) the model. The mean responses, averaged across voxels in each visual area, were not reliably distinguishable for the different stimulus colors. Hence, each stimulus color was associated with a unique spatially distributed pattern of activity, presumably reflecting the color selectivity of cortical neurons. Using PCA, a color space was derived from the covariation, across voxels, in the responses to different colors. In V4 and VO1, the first two principal component scores (main source of variation) of the responses revealed a progression through perceptual color space, with perceptually similar colors evoking the most similar responses. This was not the case for any of the other visual cortical areas, including V1, although decoding was most accurate in V1. This dissociation implies a transformation from the color representation in V1 to reflect perceptual color space in V4 and VO1.
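
    A forward model of idealized color tuning, in the spirit described above, treats each voxel as a weighted sum of a small set of smooth hue channels; the weights are estimated from training data and the model is then inverted to reconstruct novel colors from new response patterns. The following sketch uses simulated voxels, six hypothetical channels, and a simple population-vector readout; it illustrates the general approach, not the specific basis functions or estimation procedure of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_voxels, n_channels, n_train, n_test = 120, 6, 180, 30

    def channel_responses(hues_deg):
        """Idealized color channels: half-wave-rectified sinusoids raised to a power,
        the kind of smooth, broadly tuned basis used in forward encoding models."""
        centers = np.arange(n_channels) * 360.0 / n_channels
        d = np.deg2rad(hues_deg[:, None] - centers[None, :])
        return np.maximum(np.cos(d), 0) ** 5              # trials x channels

    # Simulated data: each voxel is a random weighted sum of the channels, plus noise.
    true_weights = rng.gamma(2.0, 1.0, (n_voxels, n_channels))
    train_hues = rng.uniform(0, 360, n_train)
    test_hues = rng.uniform(0, 360, n_test)
    B_train = true_weights @ channel_responses(train_hues).T + rng.normal(0, 0.5, (n_voxels, n_train))
    B_test = true_weights @ channel_responses(test_hues).T + rng.normal(0, 0.5, (n_voxels, n_test))

    # 1) Estimate voxel-by-channel weights from the training data (least squares).
    W = B_train @ np.linalg.pinv(channel_responses(train_hues).T)

    # 2) Invert the model on the test data to recover channel responses, then read out
    #    the hue as the circular mean of the channel centers weighted by those responses.
    C_test = np.linalg.pinv(W) @ B_test                   # channels x test trials
    centers = np.deg2rad(np.arange(n_channels) * 360.0 / n_channels)
    recon = np.rad2deg(np.angle(np.exp(1j * centers) @ C_test)) % 360

    err = (recon - test_hues + 180) % 360 - 180
    print(f"median absolute reconstruction error: {np.median(np.abs(err)):.1f} deg")
    ```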

  17. Simultaneous EEG/fMRI analysis of the resonance phenomena in steady-state visual evoked responses.

    PubMed

    Bayram, Ali; Bayraktaroglu, Zubeyir; Karahan, Esin; Erdogan, Basri; Bilgic, Basar; Ozker, Muge; Kasikci, Itir; Duru, Adil D; Ademoglu, Ahmet; Oztürk, Cengizhan; Arikan, Kemal; Tarhan, Nevzat; Demiralp, Tamer

    2011-04-01

    The stability of the steady-state visual evoked potentials (SSVEPs) across trials and subjects makes them a suitable tool for the investigation of the visual system. The reproducible pattern of the frequency characteristics of SSVEPs shows a global amplitude maximum around 10 Hz and additional local maxima around 20 and 40 Hz, which have been argued to represent resonant behavior of damped neuronal oscillators. Simultaneous electroencephalogram/functional magnetic resonance imaging (EEG/fMRI) measurement allows testing of the resonance hypothesis about the frequency-selective increases in SSVEP amplitudes in human subjects, because the total synaptic activity that is represented in the fMRI-Blood Oxygen Level Dependent (fMRI-BOLD) response would not increase but get synchronized at the resonance frequency. For this purpose, 40 healthy volunteers were visually stimulated with flickering light at systematically varying frequencies between 6 and 46 Hz, and the correlations between SSVEP amplitudes and the BOLD responses were computed. The SSVEP frequency characteristics of all subjects showed 3 frequency ranges with an amplitude maximum in each of them, which roughly correspond to alpha, beta and gamma bands of the EEG. The correlation maps between BOLD responses and SSVEP amplitude changes across the different stimulation frequencies within each frequency band showed no significant correlation in the alpha range, while significant correlations were obtained in the primary visual area for the beta and gamma bands. This non-linear relationship between the surface recorded SSVEP amplitudes and the BOLD responses of the visual cortex at stimulation frequencies around the alpha band supports the view that a resonance at the tuning frequency of the thalamo-cortical alpha oscillator in the visual system is responsible for the global amplitude maximum of the SSVEP around 10 Hz. Information gained from the SSVEP/fMRI analyses in the present study might be extrapolated to the EEG/fMRI analysis of the transient event-related potentials (ERPs) in terms of expecting more reliable and consistent correlations between EEG and fMRI responses, when the analyses are carried out on evoked or induced oscillations (spectral perturbations) in separate frequency bands instead of the time-domain ERP peaks.

  18. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and elicit instrumental tobacco-seeking behaviour.

  19. The DTIC Review. Volume 5, Number 3. Cybernetics: Enhancing Human Performance

    DTIC Science & Technology

    2001-03-01

    Report documentation page only (no abstract recoverable). AD Number: AD-A382305; Corporate Author: Arizona University - Tucson, Department of Electrical and Computer Engineering, Tucson, AZ; Aug 2000; subject terms include Human Factors Engineering and Visualization Aids.

  20. Extending human perception of electromagnetic radiation to the UV region through biologically inspired photochromic fuzzy logic (BIPFUL) systems.

    PubMed

    Gentili, Pier Luigi; Rightler, Amanda L; Heron, B Mark; Gabbutt, Christopher D

    2016-01-25

    Photochromic fuzzy logic systems have been designed that extend human visual perception into the UV region. The systems are founded on a detailed knowledge of the activation wavelengths and quantum yields of a series of thermally reversible photochromic compounds. By appropriate matching of the photochromic behaviour, unique colour signatures are generated in response to differing UV activation frequencies.

  1. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
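
    One generic way to quantify envelope tracking of this kind (not necessarily either of the two methods used in the study; the sampling rate, lag range, and lagged-correlation approach are assumptions) is to correlate the speech envelope with the neural signal over a range of lags:

        import numpy as np
        from scipy.signal import hilbert

        fs = 200.0                                  # assumed common sampling rate (Hz)
        speech = np.random.randn(int(60 * fs))      # placeholder 60-s speech waveform
        meg = np.random.randn(int(60 * fs))         # placeholder MEG sensor signal

        envelope = np.abs(hilbert(speech))          # broadband temporal envelope

        # Lagged correlation between envelope and MEG response (0-300 ms lags).
        lags_ms = np.arange(0, 301, 5)
        corrs = []
        for lag in (lags_ms * fs / 1000).astype(int):
            e = envelope[: envelope.size - lag]
            m = meg[lag:]
            corrs.append(np.corrcoef(e, m)[0, 1])
        print(f"peak tracking at ~{lags_ms[int(np.argmax(corrs))]} ms lag")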

  2. Behold the voice of wrath: cross-modal modulation of visual attention by anger prosody.

    PubMed

    Brosch, Tobias; Grandjean, Didier; Sander, David; Scherer, Klaus R

    2008-03-01

    Emotionally relevant stimuli are prioritized in human information processing. It has repeatedly been shown that selective spatial attention is modulated by the emotional content of a stimulus. Until now, studies investigating this phenomenon have only examined within-modality effects, most frequently using pictures of emotional stimuli to modulate visual attention. In this study, we used simultaneously presented utterances with emotional and neutral prosody as cues for a visually presented target in a cross-modal dot probe task. Response times towards targets were faster when they appeared at the location of the source of the emotional prosody. Our results show for the first time a cross-modal attentional modulation of visual attention by auditory affective prosody.

  3. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes

    PubMed Central

    Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.

    2016-01-01

    ABSTRACT Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452

  4. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes.

    PubMed

    Galeazzi, Juan M; Navajas, Joaquín; Mender, Bedeho M W; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.

  5. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  6. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  7. Artificial retina model for the retinally blind based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding

    2007-01-01

    An artificial retina aims to stimulate the remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this purpose as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical method for modelling human retinal information processing. In this paper, a flexible and adjustable model of human visual information extraction is presented, based on the wavelet transform. Because the wavelet transform is well suited to image processing and consistent with how the human visual system extracts information, wavelet transform theory is applied to an artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulated experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
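
    A minimal sketch of the kind of multi-scale decomposition such a model builds on, using the PyWavelets package as an assumed illustrative tool (the wavelet choice, decomposition level, and reconstruction step are not taken from the paper):

        import numpy as np
        import pywt  # PyWavelets

        image = np.random.rand(128, 128)            # placeholder retinal image

        # Two-level 2-D discrete wavelet transform: a coarse approximation band
        # plus horizontal/vertical/diagonal detail bands at each level.
        coeffs = pywt.wavedec2(image, wavelet="haar", level=2)
        approx, details_lvl2, details_lvl1 = coeffs
        print(approx.shape, [d.shape for d in details_lvl1])

        # A crude "electrode map" could keep only the coarse approximation by
        # zeroing the detail bands before reconstruction.
        zeroed = [approx] + [tuple(np.zeros_like(d) for d in det) for det in coeffs[1:]]
        coarse_only = pywt.waverec2(zeroed, wavelet="haar")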

  8. Understanding face perception by means of human electrophysiology.

    PubMed

    Rossion, Bruno

    2014-06-01

    Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, an N170 component correlating with conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Selective attention reduces physiological noise in the external ear canals of humans. II: Visual attention

    PubMed Central

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070

  10. In Vivo Visualization of Alzheimer’s Amyloid Plaques by MRI in Transgenic Mice Without a Contrast Agent

    PubMed Central

    Jack, Clifford R.; Garwood, Michael; Wengenack, Thomas M.; Borowski, Bret; Curran, Geoffrey L.; Lin, Joseph; Adriany, Gregor; Grohn, Olli H.J.; Grimm, Roger; Poduslo, Joseph F.

    2009-01-01

    One of the cardinal pathologic features of Alzheimer’s disease (AD) is formation of senile, or amyloid, plaques. Transgenic mice have been developed that express one or more of the genes responsible for familial AD in humans. Doubly transgenic mice develop “human-like” plaques, providing a mechanism to study amyloid plaque biology in a controlled manner. Imaging of labeled plaques has been accomplished with other modalities, but only MRI has sufficient spatial and contrast resolution to visualize individual plaques non-invasively. Methods to optimize visualization of plaques in vivo in transgenic mice at 9.4 T using a spin echo sequence based on adiabatic pulses are described. Preliminary results indicate that a spin echo acquisition more accurately reflects plaque size, while a T2* weighted gradient echo sequence reflects plaque iron content not plaque size. In vivo MRI – ex vivo MRI – in vitro histological correlations are provided. Histologically verified plaques as small as 50 μm in diameter were visualized in the living animal. To our knowledge this work represents the first demonstration of non-invasive in vivo visualization of individual AD plaques without the use of a contrast agent. PMID:15562496

  11. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  12. Predictive and tempo-flexible synchronization to a visual metronome in monkeys.

    PubMed

    Takeya, Ryuji; Kameda, Masashi; Patel, Aniruddh D; Tanaka, Masaki

    2017-07-21

    Predictive and tempo-flexible synchronization to an auditory beat is a fundamental component of human music. To date, only certain vocal learning species show this behaviour spontaneously. Prior research training macaques (vocal non-learners) to tap to an auditory or visual metronome found their movements to be largely reactive, not predictive. Does this reflect the lack of capacity for predictive synchronization in monkeys, or lack of motivation to exhibit this behaviour? To discriminate these possibilities, we trained monkeys to make synchronized eye movements to a visual metronome. We found that monkeys could generate predictive saccades synchronized to periodic visual stimuli when an immediate reward was given for every predictive movement. This behaviour generalized to novel tempi, and the monkeys could maintain the tempo internally. Furthermore, monkeys could flexibly switch from predictive to reactive saccades when a reward was given for each reactive response. In contrast, when humans were asked to make a sequence of reactive saccades to a visual metronome, they often unintentionally generated predictive movements. These results suggest that even vocal non-learners may have the capacity for predictive and tempo-flexible synchronization to a beat, but that only certain vocal learning species are intrinsically motivated to do it.
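
    The abstract does not state the exact criterion used to separate predictive from reactive movements; a common convention, shown here purely as an assumed sketch, is to label saccades with latencies below roughly 100 ms (including negative latencies, i.e. movements launched before stimulus onset) as predictive:

        import numpy as np

        # Saccade latencies (ms) relative to each metronome stimulus onset;
        # negative values mean the eyes moved before the stimulus appeared.
        latencies = np.array([-80, -20, 15, 60, 140, 210, -5, 95])

        # Assumed criterion: latencies under ~100 ms are too short to be visually
        # triggered reactions and are therefore classed as predictive.
        predictive = latencies < 100
        print(f"predictive saccades: {predictive.sum()} of {latencies.size}")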

  13. Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.

    PubMed

    Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M

    2017-01-25

    Interocular decorrelation of input signals in developing visual cortex can cause impaired binocular vision and amblyopia. Although increased intrinsic noise is thought to be responsible for a range of perceptual deficits in amblyopic humans, the neural basis for the elevated perceptual noise in amblyopic primates is not known. Here, we tested the idea that perceptual noise is linked to the neuronal spiking noise (variability) resulting from developmental alterations in cortical circuitry. To assess spiking noise, we analyzed the contrast-dependent dynamics of spike counts and spiking irregularity by calculating the square of the coefficient of variation in interspike intervals (CV²) and the trial-to-trial fluctuations in spiking, or mean matched Fano factor (m-FF) in visual area V2 of monkeys reared with chronic monocular defocus. In amblyopic neurons, the contrast versus response functions and the spike count dynamics exhibited significant deviations from comparable data for normal monkeys. The CV² was pronounced in amblyopic neurons for high-contrast stimuli and the m-FF was abnormally high in amblyopic neurons for low-contrast gratings. The spike count, CV², and m-FF of spontaneous activity were also elevated in amblyopic neurons. These contrast-dependent spiking irregularities were correlated with the level of binocular suppression in these V2 neurons and with the severity of perceptual loss for individual monkeys. Our results suggest that the developmental alterations in normalization mechanisms resulting from early binocular suppression can explain much of these contrast-dependent spiking abnormalities in V2 neurons and the perceptual performance of our amblyopic monkeys. Amblyopia is a common developmental vision disorder in humans. Despite the extensive animal studies on how amblyopia emerges, we know surprisingly little about the neural basis of amblyopia in humans and nonhuman primates. Although the vision of amblyopic humans is often described as being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in visual cortex during development. Copyright © 2017 the authors 0270-6474/17/370922-14$15.00/0.
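
    The two variability measures named above reduce to simple ratios; the sketch below computes a plain CV² and a plain Fano factor on hypothetical spike trains, omitting the mean-matching step that the study applies to the Fano factor:

        import numpy as np

        # Hypothetical spike times (s) for 50 repeats of one stimulus condition.
        rng = np.random.default_rng(1)
        trials = [np.sort(rng.uniform(0.0, 1.0, rng.poisson(20))) for _ in range(50)]

        # CV^2: squared coefficient of variation of interspike intervals.
        isis = np.concatenate([np.diff(t) for t in trials if t.size > 1])
        cv2 = np.var(isis) / np.mean(isis) ** 2

        # Fano factor: trial-to-trial spike-count variance divided by mean count.
        counts = np.array([t.size for t in trials])
        fano = counts.var() / counts.mean()
        print(f"CV^2 = {cv2:.2f}, Fano factor = {fano:.2f}")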

  14. Effects of light on brain and behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brainard, G.C.

    1994-12-31

    It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye - in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.

  15. Effects of light on brain and behavior

    NASA Technical Reports Server (NTRS)

    Brainard, George C.

    1994-01-01

    It is obvious that light entering the eye permits the sensory capacity of vision. The human species is highly dependent on visual perception of the environment and consequently, the scientific study of vision and visual mechanisms is a centuries old endeavor. Relatively new discoveries are now leading to an expanded understanding of the role of light entering the eye: in addition to supporting vision, light has various nonvisual biological effects. Over the past thirty years, animal studies have shown that environmental light is the primary stimulus for regulating circadian rhythms, seasonal cycles, and neuroendocrine responses. As with all photobiological phenomena, the wavelength, intensity, timing and duration of a light stimulus is important in determining its regulatory influence on the circadian and neuroendocrine systems. Initially, the effects of light on rhythms and hormones were observed only in sub-human species. Research over the past decade, however, has confirmed that light entering the eyes of humans is a potent stimulus for controlling physiological rhythms. The aim of this paper is to examine three specific nonvisual responses in humans which are mediated by light entering the eye: light-induced melatonin suppression, light therapy for winter depression, and enhancement of nighttime performance. This will serve as a brief introduction to the growing database which demonstrates how light stimuli can influence physiology, mood and behavior in humans. Such information greatly expands our understanding of the human eye and will ultimately change our use of light in the human environment.

  16. Feature Integration in the Mapping of Multi-Attribute Visual Stimuli to Responses

    PubMed Central

    Ishizaki, Takuya; Morita, Hiromi; Morita, Masahiko

    2015-01-01

    In the human visual system, different attributes of an object, such as shape and color, are separately processed in different modules and then integrated to elicit a specific response. In this process, different attributes are thought to be temporarily “bound” together by focusing attention on the object; however, how such binding contributes to stimulus-response mapping remains unclear. Here we report that learning and performance of stimulus-response tasks were more difficult when three attributes of the stimulus determined the correct response than when two attributes did. We also found that spatially separated presentations of attributes considerably complicated the task, although they did not markedly affect target detection. These results are consistent with a paired-attribute model in which bound feature pairs, rather than object representations, are associated with responses by learning. This suggests that attention does not bind three or more attributes into a unitary object representation, and long-term learning is required for their integration. PMID:25762010

  17. Trade-off between curvature tuning and position invariance in visual area V4

    PubMed Central

    Sharpee, Tatyana O.; Kouh, Minjoon; Reynolds, John H.

    2013-01-01

    Humans can rapidly recognize a multitude of objects despite differences in their appearance. The neural mechanisms that endow high-level sensory neurons with both selectivity to complex stimulus features and “tolerance” or invariance to identity-preserving transformations, such as spatial translation, remain poorly understood. Previous studies have demonstrated that both tolerance and selectivity to conjunctions of features are increased at successive stages of the ventral visual stream that mediates visual recognition. Within a given area, such as visual area V4 or the inferotemporal cortex, tolerance has been found to be inversely related to the sparseness of neural responses, which in turn was positively correlated with conjunction selectivity. However, the direct relationship between tolerance and conjunction selectivity has been difficult to establish, with different studies reporting either an inverse or no significant relationship. To resolve this, we measured V4 responses to natural scenes, and using recently developed statistical techniques, we estimated both the relevant stimulus features and the range of translation invariance for each neuron. Focusing the analysis on tuning to curvature, a tractable example of conjunction selectivity, we found that neurons that were tuned to more curved contours had smaller ranges of position invariance and produced sparser responses to natural stimuli. These trade-offs provide empirical support for recent theories of how the visual system estimates 3D shapes from shading and texture flows, as well as the tiling hypothesis of the visual space for different curvature values. PMID:23798444

  18. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  19. Pupil responses to near visual demand during human visual development

    PubMed Central

    Bharadwaj, Shrikant R.; Wang, Jingyun; Candy, T. Rowan

    2014-01-01

    Pupil responses of adults to near visual demands are well characterized but those of typically developing infants and children are not. This study determined the following pupil characteristics of infants, children and adults using a PowerRefractor (25 Hz): i) binocular and monocular responses to a cartoon movie that ramped between 80 and 33 cm (20 infants, 20 2–4-yr-olds and 20 adults participated) ii) binocular and monocular response threshold for 0.1 Hz sinusoidal stimuli of 0.25 D, 0.5 D or 0.75 D amplitude (33 infants and 8 adults participated) iii) steady-state stability of pupil responses at 80 cm (8 infants and 8 adults participated). The change in pupil diameter with viewing distance (Δpd) was significantly smaller in infants and 2–4-yr-olds than in adults (p < 0.001) and significantly smaller under monocular than binocular conditions (p < 0.001). The 0.75 D sinusoidal stimulus elicited a significant binocular pupillary response in infants and a significant binocular and monocular pupillary response in adults. Steady-state pupillary fluctuations were similar in infants and adults (p = 0.25). The results suggest that the contribution of pupil size to changes in retinal image quality when tracking slow moving objects may be smaller during development than in adulthood. Smaller monocular Δpd reflects the importance of binocular cues in driving near-pupillary responses. PMID:21482712

  20. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.
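
    A toy sketch of the local-window idea (recenter a small attention window on the centroid of frame-to-frame change); the window size, threshold, and centroid rule are assumptions for illustration, not the multi-window hardware's actual algorithm:

        import numpy as np

        def track_in_window(prev_frame, curr_frame, center, half=16, threshold=0.1):
            """Recenter a square attention window on the centroid of motion energy."""
            y, x = center
            p = prev_frame[y - half:y + half, x - half:x + half]
            c = curr_frame[y - half:y + half, x - half:x + half]
            moved = np.abs(c - p) > threshold          # binary change map in window
            if moved.any():
                ys, xs = np.nonzero(moved)
                center = (y - half + int(ys.mean()), x - half + int(xs.mean()))
            return center

        # Usage: update the window once per frame pair,
        # e.g. center = track_in_window(frame_t, frame_t_plus_1, center).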

  1. Presentation of a dummy representing suit for simulation of huMAN heatloss (DRESSMAN).

    PubMed

    Mayer, E; Schwab, R

    2004-09-01

    DRESSMAN designates a novel dummy for climate measurements that allows prediction of the human thermal comfort experienced inside rooms (buildings, vehicles, aircraft, railway compartments, etc.) on the basis of indoor climate measurements. Measurements can be listed in tabular form and can also be represented by way of color gradations in a virtual 3D human model. Optionally, visualization may be rendered during or after measurement. Due to its very quick response, DRESSMAN is particularly suited for nonstationary processes.

  2. Amygdala Response to Emotional Stimuli without Awareness: Facts and Interpretations

    PubMed Central

    Diano, Matteo; Celeghin, Alessia; Bagnis, Arianna; Tamietto, Marco

    2017-01-01

    Over the past two decades, evidence has accumulated that the human amygdala exerts some of its functions also when the observer is not aware of the content, or even presence, of the triggering emotional stimulus. Nevertheless, there is as of yet no consensus on the limits and conditions that affect the extent of amygdala’s response without focused attention or awareness. Here we review past and recent studies on this subject, examining neuroimaging literature on healthy participants as well as brain-damaged patients, and we comment on their strengths and limits. We propose a theoretical distinction between processes involved in attentional unawareness, wherein the stimulus is potentially accessible to enter visual awareness but fails to do so because attention is diverted, and in sensory unawareness, wherein the stimulus fails to enter awareness because its normal processing in the visual cortex is suppressed. We argue this distinction, along with data sampling amygdala responses with high temporal resolution, helps to appreciate the multiplicity of functional and anatomical mechanisms centered on the amygdala and supporting its role in non-conscious emotion processing. Separate, but interacting, networks relay visual information to the amygdala exploiting different computational properties of subcortical and cortical routes, thereby supporting amygdala functions at different stages of emotion processing. This view reconciles some apparent contradictions in the literature, as well as seemingly contrasting proposals, such as the dual stage and the dual route model. We conclude that evidence in favor of the amygdala response without awareness is solid, albeit this response originates from different functional mechanisms and is driven by more complex neural networks than commonly assumed. Acknowledging the complexity of such mechanisms can foster new insights on the varieties of amygdala functions without awareness and their impact on human behavior. PMID:28119645

  3. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention

    PubMed Central

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residuals of attention on to-be-ignored oddballs due to the concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. PMID:26924959
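
    A minimal sketch of how a baseline-corrected PDR of this kind is often quantified (the sampling rate, baseline window, and 4-s response window are assumptions, not the study's parameters):

        import numpy as np

        fs = 60.0                                          # assumed sampling rate (Hz)
        pupil = 3.0 + 0.02 * np.random.randn(int(6 * fs))  # placeholder diameter trace (mm)
        onset = int(1 * fs)                                # oddball onset at 1 s

        baseline = pupil[onset - int(0.5 * fs):onset].mean()     # 0.5-s pre-onset mean
        dilation = pupil[onset:onset + int(4 * fs)] - baseline   # ~4-s response window
        print(f"mean PDR: {dilation.mean() * 1000:.1f} micrometres")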

  4. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  5. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. The aim was to test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (n = 88) and brain and visual cortex (n = 99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes and (iii) different visual cortical areas, independently of overall brain volume. In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices.

  6. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed Central

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    Background In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766

  7. Ocular-following responses to white noise stimuli in humans reveal a novel nonlinearity that results from temporal sampling

    PubMed Central

    Sheliga, Boris M.; Quaia, Christian; FitzGibbon, Edmond J.; Cumming, Bruce G.

    2016-01-01

    White noise stimuli are frequently used to study the visual processing of broadband images in the laboratory. A common goal is to describe how responses are derived from Fourier components in the image. We investigated this issue by recording the ocular-following responses (OFRs) to white noise stimuli in human subjects. For a given speed we compared OFRs to unfiltered white noise with those to noise filtered with band-pass filters and notch filters. Removing components with low spatial frequency (SF) reduced OFR magnitudes, and the SF associated with the greatest reduction matched the SF that produced the maximal response when presented alone. This reduction declined rapidly with SF, compatible with a winner-take-all operation. Removing higher SF components increased OFR magnitudes. For higher speeds this effect became larger and propagated toward lower SFs. All of these effects were quantitatively well described by a model that combined two factors: (a) an excitatory drive that reflected the OFRs to individual Fourier components and (b) a suppression by higher SF channels where the temporal sampling of the display led to flicker. This nonlinear interaction has an important practical implication: Even with high refresh rates (150 Hz), the temporal sampling introduced by visual displays has a significant impact on visual processing. For instance, we show that this distorts speed tuning curves, shifting the peak to lower speeds. Careful attention to spectral content, in the light of this nonlinearity, is necessary to minimize the resulting artifact when using white noise patterns undergoing apparent motion. PMID:26762277
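
    The flicker-related suppression arises from an elementary sampling limit: a pattern drifting at speed v on a display refreshed at rate R shifts by v / R degrees per frame, so spatial frequencies above R / (2v) cycles per degree are undersampled frame to frame. The sketch below computes that bound; it is only the sampling argument, not the paper's fitted two-factor model:

        def max_alias_free_sf(speed_deg_per_s: float, refresh_hz: float) -> float:
            """Highest spatial frequency (cycles/deg) shown without temporal aliasing."""
            step_deg = speed_deg_per_s / refresh_hz    # displacement per frame
            return 1.0 / (2.0 * step_deg)

        # Example: at 20 deg/s on a 150-Hz display, frequencies above ~3.75 cycles/deg
        # are undersampled and can appear as flicker rather than smooth motion.
        print(max_alias_free_sf(20.0, 150.0))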

  8. Signed language and human action processing: evidence for functional constraints on the human mirror-neuron system.

    PubMed

    Corina, David P; Knapp, Heather Patterson

    2008-12-01

    In the quest to further understand the neural underpinning of human communication, researchers have turned to studies of naturally occurring signed languages used in Deaf communities. The comparison of the commonalities and differences between spoken and signed languages provides an opportunity to determine core neural systems responsible for linguistic communication independent of the modality in which a language is expressed. The present article examines such studies, and in addition asks what we can learn about human languages by contrasting formal visual-gestural linguistic systems (signed languages) with more general human action perception. To understand visual language perception, it is important to distinguish the demands of general human motion processing from the highly task-dependent demands associated with extracting linguistic meaning from arbitrary, conventionalized gestures. This endeavor is particularly important because theorists have suggested close homologies between perception and production of actions and functions of human language and social communication. We review recent behavioral, functional imaging, and neuropsychological studies that explore dissociations between the processing of human actions and signed languages. These data suggest incomplete overlap between the mirror-neuron systems proposed to mediate human action and language.

  9. Retinotopy and attention to the face and house images in the human visual cortex.

    PubMed

    Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong

    2016-06-01

    Attentional modulation of the neural activities in human visual areas has been well demonstrated. However, the retinotopic activities that are driven by face and house images and attention to face and house images remain unknown. In the present study, we used images of faces and houses to estimate retinotopic activity under three conditions: driven by the images together with attention to them, driven by attention to the images alone, and driven by the images alone. Generally, our results show that face and house images produced similar retinotopic activity in visual areas, which was observed only in the attention + stimulus and attention conditions, not in the stimulus condition. The fusiform face area (FFA) responded to faces presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), FFA, and PPA, the differences were not significant. We proposed that these areas likely have large fields of attentional modulation for face and house images and exhibit responses to both the target wedge and the background stimuli. In addition, we proposed that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.

  10. Visual adaptation and face perception

    PubMed Central

    Webster, Michael A.; MacLeod, Donald I. A.

    2011-01-01

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces. PMID:21536555

  11. Visual adaptation and face perception.

    PubMed

    Webster, Michael A; MacLeod, Donald I A

    2011-06-12

    The appearance of faces can be strongly affected by the characteristics of faces viewed previously. These perceptual after-effects reflect processes of sensory adaptation that are found throughout the visual system, but which have been considered only relatively recently in the context of higher level perceptual judgements. In this review, we explore the consequences of adaptation for human face perception, and the implications of adaptation for understanding the neural-coding schemes underlying the visual representation of faces. The properties of face after-effects suggest that they, in part, reflect response changes at high and possibly face-specific levels of visual processing. Yet, the form of the after-effects and the norm-based codes that they point to show many parallels with the adaptations and functional organization that are thought to underlie the encoding of perceptual attributes like colour. The nature and basis for human colour vision have been studied extensively, and we draw on ideas and principles that have been developed to account for norms and normalization in colour vision to consider potential similarities and differences in the representation and adaptation of faces.

  12. Modulation of early cortical processing during divided attention to non-contiguous locations

    PubMed Central

    Frey, Hans-Peter; Schmid, Anita M.; Murphy, Jeremy W.; Molholm, Sophie; Lalor, Edmund C.; Foxe, John J.

    2015-01-01

    We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. While for several years the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed using high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classical pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing timeframes in hierarchically early visual regions and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. PMID:24606564
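
    The multifocal technique drives each stimulus location with a shifted copy of a maximal-length binary sequence so the evoked responses can later be separated by cross-correlation. A minimal sketch of generating such an m-sequence with a linear feedback shift register is given below; the register length and tap positions are illustrative, not those used in the study:

        def m_sequence(n_bits=5, taps=(5, 3)):
            """Maximal-length binary sequence (length 2**n_bits - 1) from a Fibonacci LFSR."""
            state = [1] * n_bits                       # any non-zero seed works
            seq = []
            for _ in range(2 ** n_bits - 1):
                seq.append(state[-1])                  # output bit
                feedback = 0
                for t in taps:                         # taps of the primitive polynomial x^5 + x^3 + 1
                    feedback ^= state[t - 1]
                state = [feedback] + state[:-1]        # shift the register
            return seq

        seq = m_sequence()
        contrast_states = [2 * b - 1 for b in seq]     # map {0, 1} to {-1, +1} contrast
        print(len(seq), sum(seq))                      # 31 bits, 16 of them ones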

  13. Manipulating the content of dynamic natural scenes to characterize response in human MT/MST.

    PubMed

    Durant, Szonya; Wall, Matthew B; Zanker, Johannes M

    2011-09-09

    Optic flow is one of the most important sources of information for enabling human navigation through the world. A striking finding from single-cell studies in monkeys is the rapid saturation of response of MT/MST areas with the density of optic flow type motion information. These results are reflected psychophysically in human perception in the saturation of motion aftereffects. We began by comparing responses to natural optic flow scenes in human visual brain areas to responses to the same scenes with inverted contrast (photo negative). This changes scene familiarity while preserving local motion signals. This manipulation had no effect; however, the response was only correlated with the density of local motion (calculated by a motion correlation model) in V1, not in MT/MST. To further investigate this, we manipulated the visible proportion of natural dynamic scenes and found that areas MT and MST did not increase in response over a 16-fold increase in the amount of information presented, i.e., response had saturated. This makes sense in light of the sparseness of motion information in natural scenes, suggesting that the human brain is well adapted to exploit a small amount of dynamic signal and extract information important for survival.

  14. Primary Visual Cortex as a Saliency Map: A Parameter-Free Prediction and Its Test by Behavioral Data

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2015-01-01

    It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of them saliently different from all the other bars which are identical to each other, saliency at the singleton’s location can be measured by the shortness of the reaction time in a visual search for singletons. The hypothesis predicts quantitatively the whole distribution of the reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention so that they can be devoted to other functions like visual decoding and endogenous attention. PMID:26441341
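
    One common reading of this parameter-free prediction (offered here as an assumed illustration, not a reproduction of the paper's derivation) is a race model: if V1's saliency signal for the triple-feature singleton is the maximum of its single-feature responses, then the reaction time on each trial should be the minimum of the component reaction times, which fixes the whole predicted RT distribution with no free parameters:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical RT samples (ms) for singletons unique in colour (C),
        # orientation (O), or motion direction (M) alone.
        rt_c = rng.gamma(shape=8, scale=60, size=5000)
        rt_o = rng.gamma(shape=8, scale=70, size=5000)
        rt_m = rng.gamma(shape=8, scale=80, size=5000)

        # MAX-rule saliency: the fastest feature-driven response wins the race.
        rt_cmo_pred = np.minimum.reduce([rt_c, rt_o, rt_m])
        print(f"median predicted CMO RT: {np.median(rt_cmo_pred):.0f} ms "
              f"(vs {np.median(rt_c):.0f} ms for colour alone)")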

  15. Absence of visual experience modifies the neural basis of numerical thinking.

    PubMed

    Kanjlia, Shipra; Lane, Connor; Feigenson, Lisa; Bedny, Marina

    2016-10-04

    In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 - 12 = x vs. 7 - 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these "visual" regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness.

  16. Early Binocular Input Is Critical for Development of Audiovisual but Not Visuotactile Simultaneity Perception.

    PubMed

    Chen, Yi-Chuan; Lewis, Terri L; Shore, David I; Maurer, Daphne

    2017-02-20

    Temporal simultaneity provides an essential cue for integrating multisensory signals into a unified perception. Early visual deprivation, in both animals and humans, leads to abnormal neural responses to audiovisual signals in subcortical and cortical areas [1-5]. Behavioral deficits in integrating complex audiovisual stimuli in humans are also observed [6, 7]. It remains unclear whether early visual deprivation affects visuotactile perception similarly to audiovisual perception and whether the consequences for either pairing differ after monocular versus binocular deprivation [8-11]. Here, we evaluated the impact of early visual deprivation on the perception of simultaneity for audiovisual and visuotactile stimuli in humans. We tested patients born with dense cataracts in one or both eyes that blocked all patterned visual input until the cataractous lenses were removed and the affected eyes fitted with compensatory contact lenses (mean duration of deprivation = 4.4 months; range = 0.3-28.8 months). Both monocularly and binocularly deprived patients demonstrated lower precision in judging audiovisual simultaneity. However, qualitatively different outcomes were observed for the two patient groups: the performance of monocularly deprived patients matched that of young children at immature stages, whereas that of binocularly deprived patients did not match any stage in typical development. Surprisingly, patients performed normally in judging visuotactile simultaneity after either monocular or binocular deprivation. Therefore, early binocular input is necessary to develop normal neural substrates for simultaneity perception of visual and auditory events but not visual and tactile events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto regions of interest in a complex natural environment and focus attention on them, this paper proposes a bionic-sensing visual inspection model that simulates human visual imaging and attention characteristics to detect defects in metallic materials used in the mechanical field. First, biologically salient low-level visual features are extracted, and expert defect annotations are used as intermediate features of the simulated visual perception. An SVM is then trained on the resulting high-level features of metal-material defects. Finally, the individual feature channels are combined according to their weights to obtain a defect-detection model for metallic materials that simulates human visual characteristics.
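
    A minimal sketch of the pipeline described above, with several stated assumptions: the attention-inspired low-level features are reduced here to local luminance, contrast, and gradient energy; the image patches and expert defect labels are random placeholders; and the classifier is a plain scikit-learn RBF SVM rather than the authors' trained model.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        def patch_features(patch):
            """Simple attention-inspired low-level features for one grayscale patch."""
            gy, gx = np.gradient(patch.astype(float))
            return np.array([
                patch.mean(),                 # local luminance
                patch.std(),                  # local contrast
                np.hypot(gx, gy).mean(),      # gradient energy (edge salience)
            ])

        # Placeholder data: patches cut from inspection images, labels from expert marks.
        patches = np.random.rand(200, 32, 32)          # stand-in for real image patches
        labels = np.random.randint(0, 2, size=200)     # 1 = defect, 0 = defect-free

        X = np.array([patch_features(p) for p in patches])
        X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

        clf = SVC(kernel="rbf").fit(X_train, y_train)
        print(classification_report(y_test, clf.predict(X_test)))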

  18. The human visual cortex responds to gene therapy–mediated recovery of retinal function

    PubMed Central

    Ashtari, Manzar; Cyckowski, Laura L.; Monroe, Justin F.; Marshall, Kathleen A.; Chung, Daniel C.; Auricchio, Alberto; Simonelli, Francesca; Leroy, Bart P.; Maguire, Albert M.; Shindler, Kenneth S.; Bennett, Jean

    2011-01-01

    Leber congenital amaurosis (LCA) is a rare degenerative eye disease, linked to mutations in at least 14 genes. A recent gene therapy trial in patients with LCA2, who have mutations in RPE65, demonstrated that subretinal injection of an adeno-associated virus (AAV) carrying the normal cDNA of that gene (AAV2-hRPE65v2) could markedly improve vision. However, it remains unclear how the visual cortex responds to recovery of retinal function after prolonged sensory deprivation. Here, 3 of the gene therapy trial subjects, treated at ages 8, 9, and 35 years, underwent functional MRI within 2 years of unilateral injection of AAV2-hRPE65v2. All subjects showed increased cortical activation in response to high- and medium-contrast stimuli after exposure to the treated compared with the untreated eye. Furthermore, we observed a correlation between the visual field maps and the distribution of cortical activations for the treated eyes. These data suggest that despite severe and long-term visual impairment, treated LCA2 patients have intact and responsive visual pathways. In addition, these data suggest that gene therapy resulted in not only sustained and improved visual ability, but also enhanced contrast sensitivity. PMID:21606598

  19. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    PubMed Central

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
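
    The comparison of neural pattern similarities with behaviour-based models is an instance of representational similarity analysis; a minimal sketch is given below. The multivoxel patterns and the visual and conceptual model dissimilarity matrices are random stand-ins, and the simple rank correlation shown here omits the partial-correlation and searchlight steps a full analysis would use.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(4)
        n_stimuli, n_voxels = 40, 120

        # Placeholder data: one multivoxel pattern per object, plus two behaviour-based
        # model dissimilarity matrices (visual and conceptual), stored as flat vectors.
        patterns = rng.standard_normal((n_stimuli, n_voxels))
        visual_model = pdist(rng.standard_normal((n_stimuli, 5)))       # stand-in model RDM
        conceptual_model = pdist(rng.standard_normal((n_stimuli, 5)))   # stand-in model RDM

        # Neural representational dissimilarity matrix (correlation distance).
        neural_rdm = pdist(patterns, metric="correlation")

        # Integrative coding would show independent contributions of both models;
        # here we simply report each model's rank correlation with the neural RDM.
        for name, model in [("visual", visual_model), ("conceptual", conceptual_model)]:
            rho, p = spearmanr(neural_rdm, model)
            print(f"{name:10s} rho = {rho:+.2f} (p = {p:.3f})")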

  20. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
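
    The analytic expressions of the extended Guided Search model are not reproduced in the abstract. As a point of reference, the standard Signal Detection Theory benchmark it is compared against can be computed in a few lines: with a maximum-response decision rule over M locations and unit-variance Gaussian internal responses, proportion correct is Pc = ∫ φ(x − d′) · Φ(x)^(M−1) dx. The code below implements that benchmark, not the authors' own equations.

        import numpy as np
        from scipy.stats import norm
        from scipy.integrate import quad

        def p_correct_localization(d_prime, n_locations):
            """SDT max-rule accuracy: the target location wins if its Gaussian internal
            response exceeds the responses at all n_locations - 1 distractor locations."""
            integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
            p, _ = quad(integrand, -np.inf, np.inf)
            return p

        for m in (2, 4, 8, 16):
            print(m, round(p_correct_localization(1.5, m), 3))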

  1. Segregation of Form, Color, Movement, and Depth: Anatomy, Physiology, and Perception

    NASA Astrophysics Data System (ADS)

    Livingstone, Margaret; Hubel, David

    1988-05-01

    Anatomical and physiological observations in monkeys indicate that the primate visual system consists of several separate and independent subdivisions that analyze different aspects of the same retinal image: cells in cortical visual areas 1 and 2 and higher visual areas are segregated into three interdigitating subdivisions that differ in their selectivity for color, stereopsis, movement, and orientation. The pathways selective for form and color seem to be derived mainly from the parvocellular geniculate subdivisions, the depth- and movement-selective components from the magnocellular. At lower levels, in the retina and in the geniculate, cells in these two subdivisions differ in their color selectivity, contrast sensitivity, temporal properties, and spatial resolution. These major differences in the properties of cells at lower levels in each of the subdivisions led to the prediction that different visual functions, such as color, depth, movement, and form perception, should exhibit corresponding differences. Human perceptual experiments are remarkably consistent with these predictions. Moreover, perceptual experiments can be designed to ask which subdivisions of the system are responsible for particular visual abilities, such as figure/ground discrimination or perception of depth from perspective or relative movement--functions that might be difficult to deduce from single-cell response properties.

  2. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  3. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. How task demands shape brain responses to visual food cues.

    PubMed

    Pohl, Tanja Maria; Tempelmann, Claus; Noesselt, Toemme

    2017-06-01

    Several previous imaging studies have aimed at identifying the neural basis of visual food cue processing in humans. However, there is little consistency of the functional magnetic resonance imaging (fMRI) results across studies. Here, we tested the hypothesis that this variability across studies might - at least in part - be caused by the different tasks employed. In particular, we assessed directly the influence of task set on brain responses to food stimuli with fMRI using two tasks (colour vs. edibility judgement, between-subjects design). When participants judged colour, the left insula, the left inferior parietal lobule, occipital areas, the left orbitofrontal cortex and other frontal areas expressed enhanced fMRI responses to food relative to non-food pictures. However, when judging edibility, enhanced fMRI responses to food pictures were observed in the superior and middle frontal gyrus and in medial frontal areas including the pregenual anterior cingulate cortex and ventromedial prefrontal cortex. This pattern of results indicates that task sets can significantly alter the neural underpinnings of food cue processing. We propose that judging low-level visual stimulus characteristics - such as colour - triggers stimulus-related representations in the visual and even in gustatory cortex (insula), whereas discriminating abstract stimulus categories activates higher order representations in both the anterior cingulate and prefrontal cortex. Hum Brain Mapp 38:2897-2912, 2017. © 2017 Wiley Periodicals, Inc.

  5. Reward speeds up and increases consistency of visual selective attention: a lifespan comparison.

    PubMed

    Störmer, Viola; Eppinger, Ben; Li, Shu-Chen

    2014-06-01

    Children and older adults often show less favorable reward-based learning and decision making, relative to younger adults. It is unknown, however, whether reward-based processes that influence relatively early perceptual and attentional processes show similar lifespan differences. In this study, we investigated whether stimulus-reward associations affect selective visual attention differently across the human lifespan. Children, adolescents, younger adults, and older adults performed a visual search task in which the target colors were associated with either high or low monetary rewards. We discovered that high reward value speeded up response times across all four age groups, indicating that reward modulates attentional selection across the lifespan. This speed-up in response time was largest in younger adults, relative to the other three age groups. Furthermore, only younger adults benefited from high reward value in increasing response consistency (i.e., reduction of trial-by-trial reaction time variability). Our findings suggest that reward-based modulations of relatively early and implicit perceptual and attentional processes are operative across the lifespan, and the effects appear to be greater in adulthood. The age-specific effect of reward on reducing intraindividual response variability in younger adults likely reflects mechanisms underlying the development and aging of reward processing, such as lifespan age differences in the efficacy of dopaminergic modulation. Overall, the present results indicate that reward shapes visual perception across different age groups by biasing attention to motivationally salient events.

  6. Virally delivered Channelrhodopsin-2 Safely and Effectively Restores Visual Function in Multiple Mouse Models of Blindness

    PubMed Central

    Doroudchi, M Mehdi; Greenberg, Kenneth P; Liu, Jianwen; Silka, Kimberly A; Boyden, Edward S; Lockridge, Jennifer A; Arman, A Cyrus; Janani, Ramesh; Boye, Shannon E; Boye, Sanford L; Gordon, Gabriel M; Matteo, Benjamin C; Sampath, Alapakkam P; Hauswirth, William W; Horsager, Alan

    2011-01-01

    Previous work established that retinal expression of channelrhodopsin-2 (ChR2), an algal cation channel gated by light, restored physiological and behavioral visual responses in otherwise blind rd1 mice. However, a viable ChR2-based human therapy must meet several key criteria: (i) ChR2 expression must be targeted, robust, and long-term, (ii) ChR2 must provide long-term and continuous therapeutic efficacy, and (iii) both viral vector delivery and ChR2 expression must be safe. Here, we demonstrate the development of a clinically relevant therapy for late stage retinal degeneration using ChR2. We achieved specific and stable expression of ChR2 in ON bipolar cells using a recombinant adeno-associated viral vector (rAAV) packaged in a tyrosine-mutated capsid. Targeted expression led to ChR2-driven electrophysiological ON responses in postsynaptic retinal ganglion cells and significant improvement in visually guided behavior for multiple models of blindness up to 10 months postinjection. Light levels to elicit visually guided behavioral responses were within the physiological range of cone photoreceptors. Finally, chronic ChR2 expression was nontoxic, with transgene biodistribution limited to the eye. No measurable immune or inflammatory response was observed following intraocular vector administration. Together, these data indicate that virally delivered ChR2 can provide a viable and efficacious clinical therapy for photoreceptor disease-related blindness. PMID:21505421

  7. Effects of reduced oxygen availability on the vascular response and oxygen consumption of the activated human visual cortex.

    PubMed

    Rodrigues Barreto, Felipe; Mangia, Silvia; Garrido Salmon, Carlos Ernesto

    2017-07-01

    The aim of this study was to identify the impact of reduced oxygen availability on the vascular response evoked by visual stimulation in the healthy human brain, using magnetic resonance imaging (MRI). Functional MRI techniques based on arterial spin labeling (ASL), blood oxygenation level-dependent (BOLD), and vascular space occupancy (VASO)-dependent contrasts were utilized to quantify the BOLD signal, cerebral blood flow (CBF), and volume (CBV) from nine subjects at 3T (7M/2F, 27.3 ± 3.6 years old) during normoxia and mild hypoxia. Changes in visual stimulus-induced oxygen consumption rates were also estimated with mathematical modeling. Significant reductions in the extension of activated areas during mild hypoxia were observed in all three imaging contrasts: by 42.7 ± 25.2% for BOLD (n = 9, P = 0.002), 33.1 ± 24.0% for ASL (n = 9, P = 0.01), and 31.9 ± 15.6% for VASO images (n = 7, P = 0.02). Activated areas during mild hypoxia showed responses with similar amplitude for CBF (58.4 ± 18.7% hypoxia vs. 61.7 ± 16.1% normoxia, P = 0.61) and CBV (33.5 ± 17.5% vs. 25.2 ± 13.0%, P = 0.27), but not for BOLD (2.5 ± 0.8% vs. 4.1 ± 0.6%, P = 0.009). The estimated stimulus-induced increases of oxygen consumption were smaller during mild hypoxia as compared to normoxia (3.1 ± 5.0% vs. 15.5 ± 15.1%, P = 0.04). Our results demonstrate an altered vascular and metabolic response during mild hypoxia upon visual stimulation. Level of Evidence: 2. Technical Efficacy: Stage 2. J. MAGN. RESON. IMAGING 2017;46:142-149. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Human alteration of the rural landscape: Variations in visual perception

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cloquell-Ballester, Vicente-Agustin, E-mail: cloquell@dpi.upv.es; Carmen Torres-Sibille, Ana del; Cloquell-Ballester, Victor-Andres

    2012-01-15

    The objective of this investigation is to evaluate how visual perception varies as the rural landscape is altered by human interventions of varying character. An experiment is carried out using Semantic Differential Analysis to analyse the effect of the character and the type of the intervention on perception. Interventions are divided into 'elements of permanent industrial character', 'elements of permanent rural character', and 'elements of temporary character', and these categories are sub-divided into smaller groups according to the type of development. To increase the reliability of the results, the Intraclass Correlation Coefficient tool is applied to validate the semantic space of the perceptual responses and to determine the number of subjects required for a reliable evaluation of the scenes.

  9. Preserved Haptic Shape Processing after Bilateral LOC Lesions.

    PubMed

    Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C

    2015-10-07

    The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded to be critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects, in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch. Copyright © 2015 the authors 0270-6474/15/3513745-16$15.00/0.

  10. Spatial attention improves the quality of population codes in human visual cortex.

    PubMed

    Saproo, Sameer; Serences, John T

    2010-08-01

    Selective attention enables sensory input from behaviorally relevant stimuli to be processed in greater detail, so that these stimuli can more accurately influence thoughts, actions, and future goals. Attention has been shown to modulate the spiking activity of single feature-selective neurons that encode basic stimulus properties (color, orientation, etc.). However, the combined output from many such neurons is required to form stable representations of relevant objects and little empirical work has formally investigated the relationship between attentional modulations on population responses and improvements in encoding precision. Here, we used functional MRI and voxel-based feature tuning functions to show that spatial attention induces a multiplicative scaling in orientation-selective population response profiles in early visual cortex. In turn, this multiplicative scaling correlates with an improvement in encoding precision, as evidenced by a concurrent increase in the mutual information between population responses and the orientation of attended stimuli. These data therefore demonstrate how multiplicative scaling of neural responses provides at least one mechanism by which spatial attention may improve the encoding precision of population codes. Increased encoding precision in early visual areas may then enhance the speed and accuracy of perceptual decisions computed by higher-order neural mechanisms.
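
    A toy simulation of the two quantities discussed above: a multiplicative attentional gain applied to an orientation tuning function, and the resulting change in mutual information between a noisy, discretized response and stimulus orientation. The tuning shape, gain value, noise level, and binning are illustrative assumptions, not estimates from the study.

        import numpy as np

        rng = np.random.default_rng(1)
        orientations = np.linspace(0, 180, 6, endpoint=False)    # stimulus orientations (deg)

        def tuning(stim_deg, pref_deg=90.0, gain=1.0, kappa=4.0):
            """Orientation tuning (von Mises-like, period 180 deg), scaled by a
            multiplicative attentional gain."""
            return gain * np.exp(kappa * (np.cos(np.deg2rad(2 * (stim_deg - pref_deg))) - 1))

        def mutual_information(gain, n_trials=20000, noise_sd=0.3, n_bins=8):
            """Discrete mutual information between stimulus orientation and the binned,
            noisy response of one tuned channel."""
            stims = rng.choice(orientations, size=n_trials)
            resp = tuning(stims, gain=gain) + rng.normal(0.0, noise_sd, n_trials)
            joint, _, _ = np.histogram2d(stims, resp, bins=[len(orientations), n_bins])
            p = joint / joint.sum()
            ps, pr = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
            nonzero = p > 0
            return float((p[nonzero] * np.log2(p[nonzero] / (ps @ pr)[nonzero])).sum())

        print("MI, unattended (gain 1.0):", round(mutual_information(1.0), 3), "bits")
        print("MI, attended   (gain 1.5):", round(mutual_information(1.5), 3), "bits")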

  11. That is Cool: the Nature Of Aesthetics in Fluid Physics

    NASA Astrophysics Data System (ADS)

    Hertzberg, Jean

    2013-11-01

    Aesthetics has historically been defined as the study of beauty and thus as a metric of art. More recently, psychologists are using the term to describe a spectrum of responses from "I hate it" to "I love it." In the context of fluid physics, what is beautiful? What elicits a "Wow! Awesome! Cool!" response versus a snore? Can we use aesthetics to deepen or change students' or the public's perceptions of physics and/or the world around them? For example, students seem to appreciate the aesthetics of destruction: environmental fluid dynamics such as storms, tornadoes, floods and wildfires are often responsible for massive destruction, yet humans draw pleasure from watching such physics and the attendant destruction from a safe distance. Can this voyeurism be turned to our advantage in communicating science? Observations of student and Facebook Flow Visualization group choices for fluid physics that draw a positive aesthetic response are sorted into empirical categories: the aesthetics of beauty, power, destruction, and oddness. Each aesthetic will be illustrated with examples drawn from flow visualizations from both the Flow Visualization course (MCEN 4151) taught at the University of Colorado, Boulder, and sources on the web. This work is supported by NSF: EEC 1240294.

  12. Automatic facial mimicry in response to dynamic emotional stimuli in five-month-old infants.

    PubMed

    Isomura, Tomoko; Nakano, Tamami

    2016-12-14

    Human adults automatically mimic others' emotional expressions, which is believed to contribute to sharing emotions with others. Although this behaviour appears fundamental to social reciprocity, little is known about its developmental process. Therefore, we examined whether infants show automatic facial mimicry in response to others' emotional expressions. Facial electromyographic activity over the corrugator supercilii (brow) and zygomaticus major (cheek) of four- to five-month-old infants was measured while they viewed dynamic clips presenting audiovisual, visual and auditory emotions. The audiovisual bimodal emotion stimuli were a display of a laughing/crying facial expression with an emotionally congruent vocalization, whereas the visual/auditory unimodal emotion stimuli displayed those emotional faces/vocalizations paired with a neutral vocalization/face, respectively. Increased activation of the corrugator supercilii muscle in response to audiovisual cries and the zygomaticus major in response to audiovisual laughter were observed between 500 and 1000 ms after stimulus onset, which clearly suggests rapid facial mimicry. By contrast, both visual and auditory unimodal emotion stimuli did not activate the infants' corresponding muscles. These results revealed that automatic facial mimicry is present as early as five months of age, when multimodal emotional information is present. © 2016 The Author(s).

  13. Shades of grey; Assessing the contribution of the magno- and parvocellular systems to neural processing of the retinal input in the human visual system from the influence of neural population size and its discharge activity on the VEP.

    PubMed

    Marcar, Valentine L; Baselgia, Silvana; Lüthi-Eisenegger, Barbara; Jäncke, Lutz

    2018-03-01

    Retinal input processing in the human visual system involves phasic and tonic neural responses. We investigated the role of the magno- and parvocellular systems by comparing the influence of the active neural population size and its discharge activity on the amplitude and latency of four VEP components. We recorded the scalp electric potential of 20 human volunteers viewing a series of dartboard images presented as pattern-reversing and pattern on-/offset stimuli. These patterns were designed to systematically vary both the size of the neural population coding the temporal and spatial luminance contrast properties of the image and the discharge activity of the population involved. When the VEP amplitude reflected the size of the neural population coding the temporal luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the parvocellular system. When the VEP amplitude reflected the size of the neural population responding to the spatial luminance contrast property of the image, the influence of luminance contrast followed the contrast response function of the magnocellular system. The latencies of the VEP components examined exhibited the same behavior across our stimulus series. This investigation demonstrates the complex interplay of the magno- and parvocellular systems in the neural response as captured by the VEP. It also demonstrates a linear relationship between stimulus property, neural response, and the VEP, and reveals the importance of feedback projections in modulating the ongoing neural response. In doing so, it corroborates the conclusions of our previous study.
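
    The contrast response functions referred to above are commonly parameterized with the Naka-Rushton (hyperbolic ratio) form, with the magnocellular system showing high contrast gain and early saturation and the parvocellular system a shallower, more nearly linear response over the same range. The sketch below uses that common parameterization; the parameter values are illustrative, not fitted to the study's data.

        import numpy as np

        def naka_rushton(c, r_max, c50, n=2.0):
            """Hyperbolic-ratio contrast response: R(c) = r_max * c^n / (c^n + c50^n)."""
            return r_max * c**n / (c**n + c50**n)

        contrast = np.linspace(0, 1, 11)                        # 0-100% luminance contrast
        magno = naka_rushton(contrast, r_max=1.0, c50=0.10)     # high gain, saturates early
        parvo = naka_rushton(contrast, r_max=1.0, c50=0.50)     # lower gain, near-linear range

        for c, m, p in zip(contrast, magno, parvo):
            print(f"{c:.1f}  magno={m:.2f}  parvo={p:.2f}")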

  14. Dissociation between Neural Signatures of Stimulus and Choice in Population Activity of Human V1 during Perceptual Decision-Making

    PubMed Central

    Choe, Kyoung Whan; Blake, Randolph

    2014-01-01

    Primary visual cortex (V1) forms the initial cortical representation of objects and events in our visual environment, and it distributes information about that representation to higher cortical areas within the visual hierarchy. Decades of work have established tight linkages between neural activity occurring in V1 and features comprising the retinal image, but it remains debatable how that activity relates to perceptual decisions. An actively debated question is the extent to which V1 responses determine, on a trial-by-trial basis, perceptual choices made by observers. By inspecting the population activity of V1 from human observers engaged in a difficult visual discrimination task, we tested one essential prediction of the deterministic view: choice-related activity, if it exists in V1, and stimulus-related activity should occur in the same neural ensemble of neurons at the same time. Our findings do not support this prediction: while cortical activity signifying the variability in choice behavior was indeed found in V1, that activity was dissociated from activity representing stimulus differences relevant to the task, being advanced in time and carried by a different neural ensemble. The spatiotemporal dynamics of population responses suggest that short-term priors, perhaps formed in higher cortical areas involved in perceptual inference, act to modulate V1 activity prior to stimulus onset without modifying subsequent activity that actually represents stimulus features within V1. PMID:24523561

  15. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2.

    PubMed

    Vernon, Richard J W; Gouws, André D; Lawrence, Samuel J D; Wade, Alex R; Morland, Antony B

    2016-05-25

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to "functional" organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1-V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more "abstracted" representation, typically considered the preserve of "category-selective" extrastriate cortex, can nevertheless emerge in retinotopic regions. Visual areas are typically identified either through retinotopy (e.g., V1-V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. Copyright © 2016 Vernon et al.

  16. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2

    PubMed Central

    Vernon, Richard J. W.; Gouws, André D.; Lawrence, Samuel J. D.; Wade, Alex R.

    2016-01-01

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to “functional” organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1–V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more “abstracted” representation, typically considered the preserve of “category-selective” extrastriate cortex, can nevertheless emerge in retinotopic regions. SIGNIFICANCE STATEMENT Visual areas are typically identified either through retinotopy (e.g., V1–V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. PMID:27225766

  17. A Role for MST Neurons in Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland Scott; Perrone, J. A.; Wade, Charles E. (Technical Monitor)

    1994-01-01

    A template model of human visual self-motion perception (Perrone, JOSA, 1992; Perrone & Stone, Vis. Res., in press), which uses neurophysiologically realistic "heading detectors", is consistent with numerous human psychophysical results (Warren & Hannon, Nature, 1988; Stone & Perrone, Neuro. Abstr., 1991) including the failure of humans to estimate their heading (direction of forward translation) accurately under certain visual conditions (Royden et al., Nature, 1992). We tested the model detectors with stimuli used by others in single-unit studies. The detectors showed emergent properties similar to those of MST neurons: 1) Sensitivity to non-preferred flow. Each detector is tuned to a specific combination of flow components and its response is systematically reduced by the addition of nonpreferred flow (Orban et al., PNAS, 1992), and 2) Position invariance. The detectors maintain their apparent preference for particular flow components over large regions of their receptive fields (e.g. Duffy & Wurtz, J. Neurophys., 1991; Graziano et al., J. Neurosci., 1994). It has been argued that this latter property is incompatible with MST playing a role in heading perception. The model, however, demonstrates how neurons with the above response properties could still support accurate heading estimation within extrastriate cortical maps.

  18. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    PubMed

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared with other objects, yet the underlying mechanisms of face processing are not fully understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since a central goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches.

  19. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans

    PubMed Central

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-01-01

    Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared with other objects, yet the underlying mechanisms of face processing are not fully understood. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since a central goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches. PMID:27113635

  20. Adding words to the brain's visual dictionary: novel word learning selectively sharpens orthographic representations in the VWFA.

    PubMed

    Glezer, Laurie S; Kim, Judy; Rule, Josh; Jiang, Xiong; Riesenhuber, Maximilian

    2015-03-25

    The nature of orthographic representations in the human brain is still subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary. Copyright © 2015 the authors 0270-6474/15/354965-08$15.00/0.

  1. Behavioural and physiological limits to vision in mammals

    PubMed Central

    Field, Greg D.

    2017-01-01

    Human vision is exquisitely sensitive—a dark-adapted observer is capable of reliably detecting the absorption of a few quanta of light. Such sensitivity requires that the sensory receptors of the retina, rod photoreceptors, generate a reliable signal when single photons are absorbed. In addition, the retina must be able to extract this information and relay it to higher visual centres under conditions where very few rods signal single-photon responses while the majority generate only noise. Critical to signal transmission are mechanistic optimizations within rods and their dedicated retinal circuits that enhance the discriminability of single-photon responses by mitigating photoreceptor and synaptic noise. We describe behavioural experiments over the past century that have led to the appreciation of high sensitivity near absolute visual threshold. We further consider mechanisms within rod photoreceptors and dedicated rod circuits that act to extract single-photon responses from cellular noise. We highlight how these studies have shaped our understanding of brain function and point out several unresolved questions in the processing of light near the visual threshold. This article is part of the themed issue ‘Vision in dim light’. PMID:28193817

  2. The malleability of emotional perception: Short-term plasticity in retinotopic neurons accompanies the formation of perceptual biases to threat.

    PubMed

    Thigpen, Nina N; Bartsch, Felix; Keil, Andreas

    2017-04-01

    Emotional experience changes visual perception, leading to the prioritization of sensory information associated with threats and opportunities. These emotional biases have been extensively studied by basic and clinical scientists, but their underlying mechanism is not known. The present study combined measures of brain-electric activity and autonomic physiology to establish how threat biases emerge in human observers. Participants viewed stimuli designed to differentially challenge known properties of different neuronal populations along the visual pathway: location, eye, and orientation specificity. Biases were induced using aversive conditioning with only 1 combination of eye, orientation, and location predicting a noxious loud noise and replicated in a separate group of participants. Selective heart rate-orienting responses for the conditioned threat stimulus indicated bias formation. Retinotopic visual brain responses were persistently and selectively enhanced after massive aversive learning for only the threat stimulus and dissipated after extinction training. These changes were location-, eye-, and orientation-specific, supporting the hypothesis that short-term plasticity in primary visual neurons mediates the formation of perceptual biases to threat. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. The neural response in short-term visual recognition memory for perceptual conjunctions.

    PubMed

    Elliott, R; Dolan, R J

    1998-01-01

    Short-term visual memory has been widely studied in humans and animals using delayed matching paradigms. The present study used positron emission tomography (PET) to determine the neural substrates of delayed matching to sample for complex abstract patterns over a 5-s delay. More specifically, the study assessed any differential neural response associated with remembering individual perceptual properties (color only and shape only) compared to conjunction between these properties. Significant activations associated with short-term visual memory (all memory conditions compared to perceptuomotor control) were observed in extrastriate cortex, medial and lateral parietal cortex, anterior cingulate, inferior frontal gyrus, and the thalamus. Significant deactivations were observed throughout the temporal cortex. Although the requirement to remember color compared to shape was associated with subtly different patterns of blood flow, the requirement to remember perceptual conjunctions between these features was not associated with additional specific activations. These data suggest that visual memory over a delay of the order of 5 s is mainly dependent on posterior perceptual regions of the cortex, with the exact regions depending on the perceptual aspect of the stimuli to be remembered.

  4. Dimensionality of visual complexity in computer graphics scenes

    NASA Astrophysics Data System (ADS)

    Ramanarayanan, Ganesh; Bala, Kavita; Ferwerda, James A.; Walter, Bruce

    2008-02-01

    How do human observers perceive visual complexity in images? This problem is especially relevant for computer graphics, where a better understanding of visual complexity can aid in the development of more advanced rendering algorithms. In this paper, we describe a study of the dimensionality of visual complexity in computer graphics scenes. We conducted an experiment where subjects judged the relative complexity of 21 high-resolution scenes, rendered with photorealistic methods. Scenes were gathered from web archives and varied in theme, number and layout of objects, material properties, and lighting. We analyzed the data using multidimensional scaling of the pooled subject responses. This analysis embedded the stimulus images in a two-dimensional space, with axes that roughly corresponded to "numerosity" and "material / lighting complexity". In a follow-up analysis, we derived a one-dimensional complexity ordering of the stimulus images. We compared this ordering with several computable complexity metrics, such as scene polygon count and JPEG compression size, and did not find them to be very correlated. Understanding the differences between these measures can lead to the design of more efficient rendering algorithms in computer graphics.
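
    A minimal sketch of the analysis pipeline described above: multidimensional scaling of a pooled dissimilarity matrix into two dimensions, a one-dimensional complexity ordering, and a rank correlation of that ordering with a computable metric such as JPEG file size. The dissimilarity matrix and file sizes below are random placeholders standing in for the experimental data.

        import numpy as np
        from sklearn.manifold import MDS
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        n_scenes = 21

        # Placeholder pooled dissimilarity matrix from pairwise complexity judgements
        # (symmetric, zero diagonal); in the study this comes from subject responses.
        d = rng.random((n_scenes, n_scenes))
        dissim = (d + d.T) / 2
        np.fill_diagonal(dissim, 0.0)

        # Two-dimensional embedding, as in the reported analysis.
        embed2d = MDS(n_components=2, dissimilarity="precomputed",
                      random_state=0).fit_transform(dissim)

        # One-dimensional complexity ordering, compared against a computable metric.
        embed1d = MDS(n_components=1, dissimilarity="precomputed",
                      random_state=0).fit_transform(dissim).ravel()
        jpeg_bytes = rng.integers(50_000, 400_000, size=n_scenes)   # stand-in for file sizes

        rho, p = spearmanr(embed1d, jpeg_bytes)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")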

  5. Operational Based Vision Assessment Cone Contrast Test: Description and Operation

    DTIC Science & Technology

    2016-06-01

    designed to detect abnormalities and characterize the contrast sensitivity of the color mechanisms of the human visual system. The OBVA CCT will...than 1, the individual is determined to have an abnormal L-M mechanism. The L-M sensitivity of mildly abnormal individuals (anomalous trichromats...response pads. This hardware is integrated with custom software that generates the stimuli, collects responses, and analyzes the results as outlined in

  6. A transcranial magnetic stimulation study of the effect of visual orientation on the putative human mirror neuron system.

    PubMed

    Burgess, Jed D; Arnold, Sara L; Fitzgibbon, Bernadette M; Fitzgerald, Paul B; Enticott, Peter G

    2013-01-01

    Mirror neurons are a class of motor neuron that are active during both the performance and observation of behavior, and have been implicated in interpersonal understanding. There is evidence to suggest that the mirror response is modulated by the perspective from which an action is presented (e.g., egocentric or allocentric). Most human research, however, has only examined this when presenting intransitive actions. Twenty-three healthy adult participants completed a transcranial magnetic stimulation experiment that assessed corticospinal excitability whilst viewing transitive hand gestures from both egocentric (i.e., self) and allocentric (i.e., other) viewpoints. Although action observation was associated with increases in corticospinal excitability (reflecting putative human mirror neuron activity), there was no effect of visual perspective. These findings are discussed in the context of contemporary theories of mirror neuron ontogeny, including models concerning associative learning and evolutionary adaptation.

  7. The FDA's role in medical device clinical studies of human subjects

    NASA Astrophysics Data System (ADS)

    Saviola, James

    2005-03-01

    This paper provides an overview of the United States Food and Drug Administration's (FDA) role as a regulatory agency in medical device clinical studies involving human subjects. The FDA's regulations and responsibilities are explained and the device application process discussed. The specific medical device regulatory authorities are described as they apply to the development and clinical study of retinal visual prosthetic devices. The FDA medical device regulations regarding clinical studies of human subjects are intended to safeguard the rights and safety of subjects. The data gathered in pre-approval clinical studies provide a basis of valid scientific evidence in order to demonstrate the safety and effectiveness of a medical device. The importance of a working understanding of applicable medical device regulations from the beginning of the device development project is emphasized particularly for novel, complex products such as implantable visual prosthetic devices.

  8. Plasticity in the Human Visual Cortex: An Ophthalmology-Based Perspective

    PubMed Central

    Rosa, Andreia Martins; Silva, Maria Fátima; Murta, Joaquim

    2013-01-01

    Neuroplasticity refers to the ability of the brain to reorganize the function and structure of its connections in response to changes in the environment. Adult human visual cortex shows several manifestations of plasticity, such as perceptual learning and adaptation, working under the top-down influence of attention. Plasticity results from the interplay of several mechanisms, including the GABAergic system, epigenetic factors, mitochondrial activity, and structural remodeling of synaptic connectivity. There is also a downside of plasticity, that is, maladaptive plasticity, in which there are behavioral losses resulting from plasticity changes in the human brain. Understanding plasticity mechanisms could have major implications in the diagnosis and treatment of ocular diseases, such as retinal disorders, cataract and refractive surgery, amblyopia, and in the evaluation of surgical materials and techniques. Furthermore, eliciting plasticity could open new perspectives in the development of strategies that trigger plasticity for better medical and surgical outcomes. PMID:24205505

  9. Sexual motivation is reflected by stimulus-dependent motor cortex excitability.

    PubMed

    Schecklmann, Martin; Engelhardt, Kristina; Konzok, Julian; Rupprecht, Rainer; Greenlee, Mark W; Mokros, Andreas; Langguth, Berthold; Poeppl, Timm B

    2015-08-01

    Sexual behavior involves motivational processes. Findings from both animal models and neuroimaging in humans suggest that the recruitment of neural motor networks is an integral part of the sexual response. However, no study so far has directly linked sexual motivation to physiologically measurable changes in cerebral motor systems in humans. Using transcranial magnetic stimulation in hetero- and homosexual men, we here show that sexual motivation modulates cortical excitability. More specifically, our results demonstrate that visual sexual stimuli corresponding with one's sexual orientation, compared with non-corresponding visual sexual stimuli, increase the excitability of the motor cortex. The reflection of sexual motivation in motor cortex excitability provides evidence for motor preparation processes in sexual behavior in humans. Moreover, such interrelationship links theoretical models and previous neuroimaging findings of sexual behavior. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model

    NASA Astrophysics Data System (ADS)

    Arnow, Thomas L.; Geisler, Wilson S.

    1996-04-01

    A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency, the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sinewave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.
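
    A schematic stand-in for the decision stage described above: discriminability d' is computed from the difference between simulated responses to the target-plus-background and to the uniform background, assuming independent Poisson-like variability, and the detection threshold is read off as the lowest contrast reaching a d' criterion. The toy response function and criterion below are assumptions, not the IRC model's actual front end.

        import numpy as np

        def d_prime(target_rates, background_rates):
            """Ideal-observer discriminability for independent Poisson-like responses:
            per-unit signal differences pooled relative to the response variance."""
            diff = target_rates - background_rates
            var = (target_rates + background_rates) / 2        # Poisson: variance = mean
            return np.sqrt(np.sum(diff**2 / np.maximum(var, 1e-9)))

        def threshold_contrast(rate_fn, criterion_dprime=1.0,
                               contrasts=np.logspace(-3, 0, 200)):
            """Lowest contrast at which the simulated population reaches the d' criterion."""
            background = rate_fn(0.0)
            for c in contrasts:
                if d_prime(rate_fn(c), background) >= criterion_dprime:
                    return c
            return np.nan

        # Toy "retino-cortical" response: 100 units whose rates grow linearly with
        # contrast, with sensitivity falling off with (simulated) eccentricity.
        eccentricity = np.linspace(0, 10, 100)
        rate_fn = lambda c: 10.0 + 50.0 * c / (1.0 + eccentricity)

        print("Predicted threshold contrast:", threshold_contrast(rate_fn))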

  11. L-/M-cone opponency in visual evoked potentials of human cortex.

    PubMed

    Barboni, Mirella Telles Salgueiro; Nagy, Balázs Vince; Martins, Cristiane Maria Gomes; Bonci, Daniela Maria Oliveria; Hauzman, Einat; Aher, Avinash; Tsai, Tina I; Kremers, Jan; Ventura, Dora Fix

    2017-08-01

    L and M cones send their signals to the cortex via two chromatic pathways (parvocellular and blue-yellow koniocellular) and one luminance pathway (magnocellular). These pathways contain ON and OFF subpathways that respond to excitation increments and decrements, respectively. Here, we report on visual evoked potential (VEP) recordings that reflect L- and M-cone driven increment (LI and MI) and decrement (LD and MD) activity. VEP recordings were performed on 12 trichromats and four dichromats (two protanopes and two deuteranopes). We found that the responses to LI strongly resembled those to MD, and that LD and MI responses were very similar. Moreover, the lack of a photoreceptor type (L or M) in the dichromats led to a dominance of the ON pathway of the remaining photoreceptor type. These results provide electrophysiological evidence that antagonistic L/M signal processing, already present in the retina and the lateral geniculate nucleus (LGN), is also observed in the visual cortex. These data are in agreement with results from human psychophysics, in which MI stimuli lead to a perceived brightness decrease whereas LI stimuli lead to a perceived brightness increase. VEP recording is a noninvasive tool that can be easily and painlessly applied. We propose that the technique may provide useful information in the diagnosis of color vision deficiencies.

  12. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.

  13. Complex for monitoring visual acuity and its application for evaluation of human psycho-physiological state

    NASA Astrophysics Data System (ADS)

    Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.

    2017-01-01

    Existing psychological theories associate the movements of the human eye with reactions to external changes: what we see, hear, and feel. By analyzing gaze, we can compare the overt human response (which reflects a person's behavior) with the natural reaction (what they actually feel). This article describes a complex for the detection of visual activity and its application to the evaluation of a person's psycho-physiological state. Glasses with a camera capture all movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing, implemented with the help of software developed by the authors. The result is given in an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogic personnel recruitment and for testing students during the educational process.

  14. Learning invariance from natural images inspired by observations in the primary visual cortex.

    PubMed

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to recognize objects largely independently of their position, rotation, and scale. A good interpretation of neurobiological findings benefits from a computational model that simulates the signal processing of the visual cortex. Invariance is likely achieved, in part, step by step from early to late areas of visual processing. While several algorithms have been proposed for learning feature detectors, only a few studies address the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex that learns so-called complex cells from a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
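
    The following minimal Python sketch illustrates the general flavour of a Hebbian rule paired with homeostatic regulation; it is not the calcium-based rule proposed in the study, and all rates, set points, and input statistics are assumptions.

    ```python
    # Minimal sketch: Hebbian weight updates plus a homeostatic term that pulls
    # each cell's average activity toward a target rate (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_cells = 256, 16
    W = rng.normal(scale=0.01, size=(n_cells, n_inputs))   # feedforward weights
    target_rate = 0.1            # hypothetical homeostatic set point
    lr, hr = 1e-3, 1e-3          # Hebbian and homeostatic learning rates
    avg_rate = np.zeros(n_cells)

    for _ in range(1000):
        x = rng.normal(size=n_inputs)             # stand-in for a natural image patch
        y = np.maximum(W @ x, 0.0)                # rectified postsynaptic responses
        W += lr * np.outer(y, x)                  # Hebbian term: correlate pre and post
        avg_rate = 0.99 * avg_rate + 0.01 * y     # running estimate of each cell's activity
        W *= (1.0 + hr * (target_rate - avg_rate))[:, None]    # homeostatic scaling
        W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12  # keep weight norms bounded
    ```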

  15. Dissociating Medial Temporal and Striatal Memory Systems With a Same/Different Matching Task: Evidence for Two Neural Systems in Human Recognition.

    PubMed

    Sinha, Neha; Glass, Arnold Lewis

    2017-01-01

    The medial temporal lobe and striatum have both been implicated as brain substrates of memory and learning. Here, we show a dissociation between these two memory systems using a same/different matching task, in which subjects judged whether four-letter strings were the same or different. RT for different responses was determined by the left-to-right location of the first letter that differed between the study and test strings, consistent with a left-to-right comparison of the two strings that terminates when a difference is found. Such a comparison process should make same responses slower than different responses. Nevertheless, same responses were faster than different responses. Same responses were associated with hippocampus activation. Different responses were associated with both caudate and hippocampus activation. These findings are consistent with the dual-system hypothesis of mammalian memory and extend the model to human visual recognition.

  16. Cell replacement and visual restoration by retinal sheet transplants

    PubMed Central

    Seiler, Magdalene J.; Aramant, Robert B.

    2012-01-01

    Retinal diseases such as age-related macular degeneration (ARMD) and retinitis pigmentosa (RP) affect millions of people. Replacing lost cells with new cells that connect with the still functional part of the host retina might repair a degenerating retina and restore eyesight to an unknown extent. A unique model, subretinal transplantation of freshly dissected sheets of fetal-derived retinal progenitor cells, combined with its retinal pigment epithelium (RPE), has demonstrated successful results in both animals and humans. Most other approaches are restricted to rescuing endogenous retinal cells of the recipient in earlier disease stages through a ‘nursing’ role of the implanted cells and are not aimed at neural retinal cell replacement. Sheet transplants restore lost visual responses in several retinal degeneration models in the superior colliculus (SC) corresponding to the location of the transplant in the retina. They do not simply preserve visual performance – they increase visual responsiveness to light. Restoration of visual responses in the SC can be directly traced to neural cells in the transplant, demonstrating that synaptic connections between transplant and host contribute to the visual improvement. Transplant processes invade the inner plexiform layer of the host retina and form synapses with putative host cells. In a Phase II trial of RP and ARMD patients, transplants of retina together with its RPE improved visual acuity. In summary, retinal progenitor sheet transplantation provides an excellent model to answer questions about how to repair and restore function of a degenerating retina. The supply of fetal donor tissue will always be limited, but the model can set a standard and provide an informative base for optimal cell replacement therapies such as embryonic stem cell (ESC)-derived therapy. PMID:22771454

  17. Visual adaptation provides objective electrophysiological evidence of facial identity discrimination.

    PubMed

    Retter, Talia L; Rossion, Bruno

    2016-07-01

    Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation. Copyright © 2016 Elsevier Ltd. All rights reserved.
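
    A common way to quantify such a frequency-tagged response is to compare the amplitude of the EEG spectrum at the identity-alternation frequency (3 Hz) with neighbouring noise bins. The sketch below shows this generic computation on placeholder data; the sampling rate, channel, and noise-bin count are assumptions, not values from the study.

    ```python
    # Generic frequency-tagging quantification on placeholder data: amplitude
    # at a target frequency relative to surrounding noise bins (SNR).
    import numpy as np

    fs = 512                        # sampling rate in Hz (assumed)
    t = np.arange(0, 20, 1 / fs)    # 20 s testing sequence
    eeg = np.random.randn(t.size)   # stand-in for one occipito-temporal channel

    spectrum = np.abs(np.fft.rfft(eeg)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    def snr_at(f_target, n_neighbours=10):
        """Amplitude at f_target divided by the mean amplitude of nearby bins,
        excluding the bins immediately adjacent to the target."""
        idx = np.argmin(np.abs(freqs - f_target))
        neighbours = np.r_[idx - n_neighbours: idx - 1, idx + 2: idx + n_neighbours + 1]
        return spectrum[idx] / spectrum[neighbours].mean()

    print("SNR at 3 Hz (identity alternation):", snr_at(3.0))
    print("SNR at 6 Hz (base stimulation):    ", snr_at(6.0))
    ```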

  18. Mirror reversal and visual rotation are learned and consolidated via separate mechanisms: recalibrating or learning de novo?

    PubMed

    Telgen, Sebastian; Parvin, Darius; Diedrichsen, Jörn

    2014-10-08

    Motor learning tasks are often classified into adaptation tasks, which involve the recalibration of an existing control policy (the mapping that determines both feedforward and feedback commands), and skill-learning tasks, requiring the acquisition of new control policies. We show here that this distinction also applies to two different visuomotor transformations during reaching in humans: Mirror-reversal (left-right reversal over a mid-sagittal axis) of visual feedback versus rotation of visual feedback around the movement origin. During mirror-reversal learning, correct movement initiation (feedforward commands) and online corrections (feedback responses) were only generated at longer latencies. The earliest responses were directed into a nonmirrored direction, even after two training sessions. In contrast, for visual rotation learning, no dependency of directional error on reaction time emerged, and fast feedback responses to visual displacements of the cursor were immediately adapted. These results suggest that the motor system acquires a new control policy for mirror reversal, which initially requires extra processing time, while it recalibrates an existing control policy for visual rotations, exploiting established fast computational processes. Importantly, memory for visual rotation decayed between sessions, whereas memory for mirror reversals showed offline gains, leading to better performance at the beginning of the second session than at the end of the first. With shifts in the time-accuracy tradeoff and offline gains, mirror-reversal learning shares common features with other skill-learning tasks. We suggest that different neuronal mechanisms underlie the recalibration of an existing versus the acquisition of a new control policy and that offline gains between sessions are a characteristic of the latter. Copyright © 2014 the authors 0270-6474/14/3413768-12$15.00/0.

  19. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
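
    The logic of conditional Granger causality can be sketched as a comparison of two vector-autoregressive models, one that includes the past of the LGN signal and one that does not, both conditioning on V1. The toy example below (placeholder signals, arbitrary lag order) illustrates only this logic, not the authors' analysis pipeline.

    ```python
    # Simplified sketch of conditional Granger causality: does adding the past
    # of LGN improve prediction of hMT+ beyond hMT+'s own past and V1's past?
    import numpy as np
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(2)
    T = 2000
    data = rng.standard_normal((T, 3))            # columns: [hMT+, V1, LGN] (placeholders)

    full = VAR(data).fit(maxlags=5)               # model including LGN
    reduced = VAR(data[:, :2]).fit(maxlags=5)     # model without LGN (hMT+ and V1 only)

    var_full = np.var(full.resid[:, 0])           # hMT+ residual variance, full model
    var_reduced = np.var(reduced.resid[:, 0])     # hMT+ residual variance, reduced model
    cgc_lgn_to_hmt = np.log(var_reduced / var_full)   # > 0 suggests a direct influence
    print("conditional GC (LGN -> hMT+ | V1):", cgc_lgn_to_hmt)
    ```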

  20. Cerebral Asymmetry of fMRI-BOLD Responses to Visual Stimulation

    PubMed Central

    Hougaard, Anders; Jensen, Bettina Hagström; Amin, Faisal Mohammad; Rostrup, Egill; Hoffmann, Michael B.; Ashina, Messoud

    2015-01-01

    Hemispheric asymmetry of a wide range of functions is a hallmark of the human brain. The visual system has traditionally been thought of as symmetrically distributed in the brain, but a growing body of evidence has challenged this view. Some highly specific visual tasks have been shown to depend on hemispheric specialization. However, the possible lateralization of cerebral responses to a simple checkerboard visual stimulation has not been a focus of previous studies. To investigate this, we performed two sessions of blood-oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) in 54 healthy subjects during stimulation with a black and white checkerboard visual stimulus. While carefully excluding possible non-physiological causes of left-to-right bias, we compared the activation of the left and the right cerebral hemispheres and related this to grey matter volume, handedness, age, gender, ocular dominance, interocular difference in visual acuity, as well as line-bisection performance. We found a general lateralization of cerebral activation towards the right hemisphere of early visual cortical areas and areas of higher-level visual processing, involved in visuospatial attention, especially in top-down (i.e., goal-oriented) attentional processing. This right hemisphere lateralization was partly, but not completely, explained by an increased grey matter volume in the right hemisphere of the early visual areas. Difference in activation of the superior parietal lobule was correlated with subject age, suggesting a shift towards the left hemisphere with increasing age. Our findings suggest a right-hemispheric dominance of these areas, which could lend support to the generally observed leftward visual attentional bias and to the left hemifield advantage for some visual perception tasks. PMID:25985078

  1. Plasticity of the human otolith-ocular reflex

    NASA Technical Reports Server (NTRS)

    Wall, C. 3rd; Smith, T. R.; Furman, J. M.

    1992-01-01

    The eye movement response to earth vertical axis rotation in the dark, a semicircular canal stimulus, can be altered by prior exposure to combined visual-vestibular stimuli. Such plasticity of the vestibulo-ocular reflex has not been described for earth horizontal axis rotation, a dynamic otolith stimulus. Twenty normal human subjects underwent one of two types of adaptation paradigms designed either to attenuate or enhance the gain of the semicircular canal-ocular reflex prior to undergoing otolith-ocular reflex testing with horizontal axis rotation. The adaptation paradigm paired a 0.2 Hz sinusoidal rotation about a vertical axis with a 0.2 Hz optokinetic stripe pattern that was deliberately mismatched in peak velocity. Pre- and post-adaptation horizontal axis rotations were at 60 degrees/s in the dark and produced a modulation in the slow component velocity of nystagmus having a frequency of 0.17 Hz due to putative stimulation of the otolith organs. Results showed that the magnitude of this modulation component response was altered in a manner similar to the alteration in semicircular canal-ocular responses. These results suggest that physiologic alteration of the vestibulo-ocular reflex using deliberately mismatched visual and semicircular canal stimuli induces changes in both canal-ocular and otolith-ocular responses. We postulate, therefore, that central nervous system pathways responsible for controlling the gains of canal-ocular and otolith-ocular reflexes are shared.

  2. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes, located at a certain distance from the patient. Typically, the characters must subtend a visual angle of 5 minutes of arc, which implies an object 8.8 mm high located at 6 meters (normal or 20/20 visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. This process of character identification is carried out by the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system, we can compute in advance the retinal image of the Snellen chart. From this information, we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, such as the so-called "cerebral myopia".
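
    The core idea, treating the eye as an isolated optical system whose impulse response (point-spread function) determines the retinal image of a chart, can be illustrated with a simple convolution. In the sketch below the Gaussian PSF, the optotype pattern, and the contrast measure are all assumptions for illustration, not the authors' procedure.

    ```python
    # Illustrative sketch: estimate a "retinal image" by convolving a high-contrast
    # optotype with an assumed point-spread function, independent of the brain.
    import numpy as np
    from scipy.signal import fftconvolve

    def gaussian_psf(size=31, sigma=2.0):
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        return psf / psf.sum()

    # Stand-in for a Snellen optotype: a white "E"-like bar pattern on black.
    chart = np.zeros((128, 128))
    chart[30:98, 30:40] = 1.0
    chart[30:40, 30:98] = 1.0
    chart[60:70, 30:98] = 1.0
    chart[88:98, 30:98] = 1.0

    retinal_image = fftconvolve(chart, gaussian_psf(sigma=2.5), mode="same")
    # Contrast of the blurred optotype gives a crude objective index of optical quality.
    print("Michelson contrast:",
          (retinal_image.max() - retinal_image.min()) /
          (retinal_image.max() + retinal_image.min()))
    ```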

  3. Optimal Eye-Gaze Fixation Position for Face-Related Neural Responses

    PubMed Central

    Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina

    2013-01-01

    It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170 indexing the earliest face-sensitive response in the human brain was the largest when the fixation position is located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template. PMID:23762224

  4. Optimal eye-gaze fixation position for face-related neural responses.

    PubMed

    Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina

    2013-01-01

    It is generally agreed that some features of a face, namely the eyes, are more salient than others as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked-neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze-fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170 indexing the earliest face-sensitive response in the human brain was the largest when the fixation position is located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable, but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper-visual field advantage) coupled with the alignment of a face stimulus to a stored face template.

  5. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are algorithms designed to reproduce the visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, their results have not matched those of psychophysical experiments based on the human visual system. A color-rendering model that combines tone-mapping and cone-response functions in an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color when HDR images are mapped onto relatively low dynamic range devices. The tone-mapped image is obtained using chromatic and achromatic colors to avoid the well-known color distortions of conventional methods. The resulting image is then processed with a cone-response function in which emphasis is placed on human visual perception (HVP). The proposed method compensates for the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields improved color-rendering performance compared to conventional methods.
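
    A minimal sketch of a global tone-mapping step in XYZ space is shown below: luminance is compressed with a Reinhard-style curve while chromaticity (x, y) is held fixed to limit color distortion. This is a generic stand-in, not the operator or cone-response function proposed in the paper.

    ```python
    # Generic global tone mapping in XYZ: compress luminance, keep chromaticity.
    import numpy as np

    def tone_map_xyz(xyz, key=0.18):
        """xyz: H x W x 3 array of HDR tristimulus values (all positive)."""
        X, Y, Z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
        denom = X + Y + Z + 1e-12
        x, y = X / denom, Y / denom               # chromaticity, kept fixed
        Y_log_mean = np.exp(np.mean(np.log(Y + 1e-6)))
        Y_scaled = key * Y / Y_log_mean           # scale scene to a mid-grey "key"
        Y_mapped = Y_scaled / (1.0 + Y_scaled)    # Reinhard-style compression to [0, 1)
        X_new = Y_mapped * x / y                  # rebuild X and Z from (x, y) and new Y
        Z_new = Y_mapped * (1 - x - y) / y
        return np.stack([X_new, Y_mapped, Z_new], axis=-1)

    hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(64, 64, 3))  # toy HDR image
    ldr = tone_map_xyz(hdr)
    ```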

  6. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System.

    PubMed

    Roth, Zvi N

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.

  7. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System

    PubMed Central

    Roth, Zvi N.

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream. PMID:27242455
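
    The kind of receptive-field-array model invoked above can be sketched by computing population responses to stimuli at different locations, with and without an inhibitory surround, and then building a representational dissimilarity matrix (RDM) from the activity patterns. All parameters below are assumptions for illustration.

    ```python
    # Illustrative 1-D receptive-field-array model and RDM computation.
    import numpy as np

    def population_response(stim_x, centers, sigma=1.0, surround=0.0):
        """Response of an array of 1-D receptive fields to a point stimulus."""
        d2 = (centers - stim_x) ** 2
        excit = np.exp(-d2 / (2 * sigma ** 2))
        inhib = surround * np.exp(-d2 / (2 * (3 * sigma) ** 2))  # broader surround
        return excit - inhib

    centers = np.linspace(-10, 10, 200)           # receptive-field centers (deg)
    locations = np.linspace(-5, 5, 9)             # stimulus positions (deg)

    for surround in (0.0, 0.5):                   # without / with inhibitory surround
        patterns = np.array([population_response(x, centers, surround=surround)
                             for x in locations])
        # RDM: 1 - correlation between activity patterns for each pair of locations
        rdm = 1.0 - np.corrcoef(patterns)
        print(f"surround={surround}: dissimilarity of extreme locations = {rdm[0, -1]:.3f}")
    ```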

  8. Signal Analysis of Visual Evoked Responses.

    DTIC Science & Technology

    1983-12-01

    THE PROBLEM: The interest of the Air Force was in the study of: initially, animal VERs; and, later, human subject VERs. For obvious reasons, the ... data recorded from human subjects were restricted to scalp electrode recordings. By contrast, in the animal preparations, epidural bipolar electrode ... [Figure 6.4, "Specification of channels," listing left- and right-hemisphere channels, is not reproduced here.]

  9. Body size and shape misperception and visual adaptation: An overview of an emerging research paradigm.

    PubMed

    Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R

    2017-12-01

    Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli causes visual aftereffects of biased perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their implications for clinical practice are discussed.

  10. Stereoscopic distance perception

    NASA Technical Reports Server (NTRS)

    Foley, John M.

    1989-01-01

    Limited-cue, open-loop tasks in which a human observer indicates distances or relations among distances are discussed. Open-loop tasks are tasks in which the observer gets no feedback as to the accuracy of the responses. What happens when cues are added and when the loop is closed is also considered. The implications of this research for the effectiveness of visual displays are discussed. Errors in visual distance tasks do not necessarily mean that the percept is in error; the error could arise in transformations that intervene between the percept and the response. It is argued that the percept is in error, and also that there exist post-perceptual transformations that may contribute to the error or be modified by feedback to correct for it.

  11. M.I.T./Canadian vestibular experiments on the Spacelab-1 mission. I - Sensory adaptation to weightlessness and readaptation to one-g: An overview

    NASA Technical Reports Server (NTRS)

    Young, L. R.; Oman, C. M.; Lichtenberg, B. K.; Watt, D. G. D.; Money, K. E.

    1986-01-01

    Human sensory/motor adaptation to weightlessness and readaptation to earth's gravity are assessed. Preflight and postflight vestibular and visual responses for the crew on the Spacelab-1 mission are studied; the effect of the abnormal pattern of otolith afferent signals caused by weightlessness on the pitch and roll perception and postural adjustments of the subjects is examined. It is observed that body position and postural reactions change due to weightlessness in order to utilize the varied sensory inputs in a manner suited to microgravity conditions. The aspects of reinterpretation include: (1) tilt acceleration reinterpretation, (2) reduced postural response to z-axis linear acceleration, and (3) increased attention to visual cues.

  12. How Visual Is the Visual Cortex? Comparing Connectional and Functional Fingerprints between Congenitally Blind and Sighted Individuals.

    PubMed

    Wang, Xiaoying; Peelen, Marius V; Han, Zaizhu; He, Chenxi; Caramazza, Alfonso; Bi, Yanchao

    2015-09-09

    Classical animal visual deprivation studies and human neuroimaging studies have shown that visual experience plays a critical role in shaping the functionality and connectivity of the visual cortex. Interestingly, recent studies have additionally reported circumscribed regions in the visual cortex in which functional selectivity was remarkably similar in individuals with and without visual experience. Here, by directly comparing resting-state and task-based fMRI data in congenitally blind and sighted human subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. We found a close agreement between connectional and functional maps, pointing to a strong interdependence of connectivity and function. Visual experience (or the absence thereof) had a pronounced effect on the resting-state connectivity and functional response profile of occipital cortex and the posterior lateral fusiform gyrus. By contrast, connectional and functional fingerprints in the anterior medial and posterior lateral parts of the ventral visual cortex were statistically indistinguishable between blind and sighted individuals. These results provide a large-scale mapping of the influence of visual experience on the development of both functional and connectivity properties of visual cortex, which serves as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions. Significance statement: How is the functionality and connectivity of the visual cortex shaped by visual experience? By directly comparing resting-state and task-based fMRI data in congenitally blind and sighted subjects, we obtained large-scale continuous maps of the degree to which connectional and functional "fingerprints" of ventral visual cortex depend on visual experience. In addition to revealing regions that are strongly dependent on visual experience (early visual cortex and posterior fusiform gyrus), our results showed regions in which connectional and functional patterns are highly similar in blind and sighted individuals (anterior medial and posterior lateral ventral occipital temporal cortex). These results serve as a basis for the formulation of new hypotheses regarding the functionality and plasticity of specific subregions of the visual cortex. Copyright © 2015 the authors 0270-6474/15/3512545-15$15.00/0.

  13. Anthropological film: a scientific and humanistic resource.

    PubMed

    Soren, E R

    1974-12-20

    More than a scientific endeavor but not strictly one of the humanities either, anthropology stands between these basic kinds of intellectual pursuit, bridging and contributing to both. Not limited to natural history, anthropology touches art, historical process, and human values, drawing from the materials and approaches of both science and humanities. This professional interest in a broad understanding of the human condition has led anthropologists to adapt and use modern cameras and films to inquire further into the variety of ways of life of mankind and to develop method and theory to prepare anthropological film as a permanent scientific and humanistic resource. Until quite recently the evolution of human culture and organization has diverged in the hitherto isolated regions of the world. Now this divergence has virtually ceased; we are witnessing an unprecedented period in human history-one where cultural divergence has turned to cultural convergence and where the varieties of independently evolved expressions of basic human potential are giving way to a single system of modern communications, transport, commerce, and manufacturing technology. Before the varieties of ways of life of the world disappear, they can be preserved in facsimile in anthropological films. As primary, undifferentiated visual information, these films facilitate that early step in the creation of new knowledge which is sometimes called humanistic and without which scientific application lies dormant, lacking an idea to test. In keeping with the two scholarly faces of anthropology, humanistic and scientific, anthropological films may provide material permitting both humanistic insight and the more controlled formulations of science. The lightweight filming equipment recently developed has been adapted by anthropologists as a tool of scholarly visual inquiry; methods of retrieving visual data from changing and vanishing ways of life have been developed; and new ways to reveal human beings to one another by using such visual resources have been explored. As a result, not only can anthropological film records permit continued reexamination of the past human conditions from which the present was shaped, but they also facilitate an ongoing public and scientific review of the dynamics of the human behavioral and social repertoire in relation to the contemporary conditions which pattern human responses and adaptation. How man fits into and copes with the changing world is of vital interest and concern. Visual data provide otherwise unobtainable information on human potential, behavior, and social organization. Such information, fed into the public media, facilitates informed consideration of alternative possibilities. By contributing to a better informed society, such films will help make our future more human and more humane.

  14. Inhibition of voluntary saccadic eye movement commands by abrupt visual onsets.

    PubMed

    Edelman, Jay A; Xu, Kitty Z

    2009-03-01

    Saccadic eye movements are made both to explore the visual world and to react to sudden sensory events. We studied the ability for humans to execute a voluntary (i.e., nonstimulus-driven) saccade command in the face of a suddenly appearing visual stimulus. Subjects were required to make a saccade to a memorized location when a central fixation point disappeared. At varying times relative to fixation point disappearance a visual distractor appeared at a random location. When the distractor appeared at locations distant from the target virtually no saccades were initiated in a 30- to 40-ms interval beginning 70-80 ms after appearance of the distractor. If the distractor was presented slightly earlier relative to saccade initiation then saccades tended to have smaller amplitudes, with velocity profiles suggesting that the distractor terminated them prematurely. In contrast, distractors appearing close to the saccade target elicited express saccade-like movements 70-100 ms after their appearance, although the saccade endpoint was generally scarcely affected by the distractor. An additional experiment showed that these effects were weaker when the saccade was made to a visible target in a delayed task and still weaker when the saccade itself was made in response to the abrupt appearance of a visual stimulus. A final experiment revealed that the effect is smaller, but quite evident, for very small stimuli. These results suggest that the transient component of a visual response can briefly but almost completely suppress a voluntary saccade command, but only when the stimulus evoking that response is distant from the saccade goal.

  15. Transscleral implantation and neurophysiological testing of subretinal polyimide film electrodes in the domestic pig in visual prosthesis development

    NASA Astrophysics Data System (ADS)

    Sachs, Helmut G.; Schanze, Thomas; Brunner, Ursula; Sailer, Heiko; Wiesenack, Christoph

    2005-03-01

    Loss of photoreceptor function is responsible for a variety of blinding diseases, including retinitis pigmentosa. Advances in microtechnology have led to the development of electronic visual prostheses which are currently under investigation for the treatment of human blindness. The design of a subretinal prosthesis requires that the stimulation device should be implantable in the subretinal space of the eye. Current limitations in eye surgery have to be overcome to demonstrate the feasibility of this approach and to determine basic stimulation parameters. Therefore, polyimide film-bound electrodes were implanted in the subretinal space in anaesthetized domestic pigs as a prelude to electrical stimulation in acute experiments. Eight eyes underwent surgery to demonstrate the transscleral implantability of the device. Four of the eight eyes were stimulated electrically. In these four animals the cranium was prepared for epidural recording of evoked visual cortex responses, and stimulation was performed with sequences of current impulses. All eight subretinal implantation procedures were carried out successfully with polyimide film electrodes and each electrode was implanted beneath the outer retina of the posterior pole of the operated eyes. Four eyes were used for neurophysiological testing, involving recordings of epidural cortical responses to light and electrical stimulation. A light stimulus response, which occurred 40 ms after stimulation, proved the integrity of the operated eye. Cortical responses to electrical stimuli occurred about 20 ms after the onset of stimulation. The stimulation threshold was approximately 100 µA. Both the threshold and the cortical responses depended on the correspondence between retinal stimulation and cortical recording sites and on the number of stimulation electrodes used simultaneously. The subretinal implantation of complex stimulation devices using the transscleral procedure with consecutive subretinal stimulation is feasible in acute experiments in an animal model approximating the situation in humans. The domestic pig is an appropriate animal model for basic testing of subretinal implants. Animal experiments with chronically implanted devices and long-term stimulation are advisable to prepare the field for successful human experiments. The first two authors (H G Sachs and Th Schanze) contributed equally to this paper.

  16. Within-Hemifield Competition in Early Visual Areas Limits the Ability to Track Multiple Objects with Attention

    PubMed Central

    Alvarez, George A.; Cavanagh, Patrick

    2014-01-01

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. PMID:25164651

  17. More than blindsight: Case report of a child with extraordinary visual capacity following perinatal bilateral occipital lobe injury.

    PubMed

    Mundinano, Inaki-Carril; Chen, Juan; de Souza, Mitchell; Sarossy, Marc G; Joanisse, Marc F; Goodale, Melvyn A; Bourne, James A

    2017-11-13

    Injury to the primary visual cortex (V1, striate cortex) and the geniculostriate pathway in adults results in cortical blindness, abolishing conscious visual perception. Early studies by Larry Weiskrantz and colleagues demonstrated that some patients with an occipital-lobe injury exhibited a degree of unconscious vision and visually-guided behaviour within the blind field. A more recent focus has been the observed phenomenon whereby early-life injury to V1 often results in the preservation of visual perception in both monkeys and humans. These findings initiated a concerted effort on multiple fronts, including nonhuman primate studies, to uncover the neural substrate/s of the spared conscious vision. In both adult and early-life cases of V1 injury, evidence suggests the involvement of the Middle Temporal area (MT) of the extrastriate visual cortex, which is an integral component area of the dorsal stream and is also associated with visually-guided behaviors. Because of the limited number of early-life V1 injury cases for humans, the outstanding question in the field is what secondary visual pathways are responsible for this extraordinary capacity? Here we report for the first time a case of a child (B.I.) who suffered a bilateral occipital-lobe injury in the first two weeks postnatally due to medium-chain acyl-Co-A dehydrogenase deficiency. At 6 years of age, B.I. underwent a battery of neurophysiological tests, as well as structural and diffusion MRI and ophthalmic examination at 7 years. Despite the extensive bilateral occipital cortical damage, B.I. has extensive conscious visual abilities, is not blind, and can use vision to navigate his environment. Furthermore, unlike blindsight patients, he can readily and consciously identify happy and neutral faces and colors, tasks associated with ventral stream processing. These findings suggest significant re-routing of visual information. To identify the putative visual pathway/s responsible for this ability, MRI tractography of secondary visual pathways connecting MT with the lateral geniculate nucleus (LGN) and the inferior pulvinar (PI) were analysed. Results revealed an increased PI-MT pathway in the left hemisphere, suggesting that this pulvinar relay could be the neural pathway affording the preserved visual capacity following an early-life lesion of V1. These findings corroborate anatomical evidence from monkeys showing an enhanced PI-MT pathway following an early-life lesion of V1, compared to adults. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Emotion-Induced Trade-Offs in Spatiotemporal Vision

    ERIC Educational Resources Information Center

    Bocanegra, Bruno R.; Zeelenberg, Rene

    2011-01-01

    It is generally assumed that emotion facilitates human vision in order to promote adaptive responses to a potential threat in the environment. Surprisingly, we recently found that emotion in some cases impairs the perception of elementary visual features (Bocanegra & Zeelenberg, 2009b). Here, we demonstrate that emotion improves fast temporal…

  19. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  20. Exploring the Decision Landscape: Integration of Human and Natural Systems Using the Driver-Pressure-State-Impact-Response Framework and Dynamic Web Application

    EPA Science Inventory

    Making decisions to increase community or regional sustainability requires a comprehensive understanding of the linkages between environmental, social, and economic systems. We present a visualization tool that can improve decision processes and improve interdisciplinary research...

  1. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  2. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
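
    The integration step named in point (iii), representational similarity analysis across imaging methods, can be sketched as correlating a time-resolved MEG/EEG RDM with a region's fMRI RDM at each time point. The data below are random placeholders; the shapes and the rank-correlation choice are assumptions.

    ```python
    # Minimal MEG-fMRI fusion sketch via representational similarity analysis:
    # correlate the MEG RDM at each time point with one region's fMRI RDM.
    import numpy as np
    from scipy.stats import spearmanr

    n_conditions, n_times = 20, 100
    rng = np.random.default_rng(1)

    # Placeholder RDMs: (time, condition, condition) for MEG, (condition, condition) for fMRI
    meg_rdms = rng.random((n_times, n_conditions, n_conditions))
    fmri_rdm = rng.random((n_conditions, n_conditions))

    iu = np.triu_indices(n_conditions, k=1)        # compare upper triangles only
    fusion = np.array([spearmanr(meg_rdms[t][iu], fmri_rdm[iu]).correlation
                       for t in range(n_times)])
    # fusion[t] estimates when (in MEG time) the region's fMRI representation emerges.
    print(fusion.shape)
    ```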

  3. Interpretation of human pointing by African elephants: generalisation and rationality.

    PubMed

    Smet, Anna F; Byrne, Richard W

    2014-11-01

    Factors influencing the abilities of different animals to use cooperative social cues from humans are still unclear, in spite of long-standing interest in the topic. One of the few species that have been found successful at using human pointing is the African elephant (Loxodonta africana); despite few opportunities for learning about pointing, elephants follow a pointing gesture in an object-choice task, even when the pointing signal and experimenter's body position are in conflict, and when the gesture itself is visually subtle. Here, we show that the success of captive African elephants at using human pointing is not restricted to situations where the pointing signal is sustained until the time of choice: elephants followed human pointing even when the pointing gesture was withdrawn before they had responded to it. Furthermore, elephants rapidly generalised their response to a type of social cue they were unlikely to have seen before: pointing with the foot. However, unlike young children, they showed no sign of evaluating the 'rationality' of this novel pointing gesture according to its visual context: that is, whether the experimenter's hands were occupied or not.

  4. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
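
    A generic encoding-model pipeline of the kind described, extracting features from a pretrained residual network and mapping them to voxel responses with a regularized linear model, might look like the sketch below. It is not the authors' code; the network depth, layer choice, and ridge penalty are assumptions, and the data are placeholders.

    ```python
    # Generic sketch of a deep-network encoding model: ResNet features -> ridge
    # regression onto voxel responses (placeholder data throughout).
    import numpy as np
    import torch
    from torchvision import models
    from sklearn.linear_model import Ridge

    resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])  # drop classifier
    feature_extractor.eval()

    def extract_features(frames):
        """frames: (N, 3, 224, 224) tensor of preprocessed movie frames."""
        with torch.no_grad():
            return feature_extractor(frames).flatten(1).numpy()

    # Placeholder data: 200 frames and simulated responses of 500 voxels.
    frames = torch.randn(200, 3, 224, 224)
    voxel_responses = np.random.randn(200, 500)

    X = extract_features(frames)
    encoder = Ridge(alpha=1.0).fit(X[:150], voxel_responses[:150])     # train split
    print("held-out R^2:", encoder.score(X[150:], voxel_responses[150:]))
    ```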

  5. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task

    PubMed Central

    Lanz, Florian; Moret, Véronique; Rouiller, Eric Michel; Loquet, Gérard

    2013-01-01

    Every day, our central nervous system receives inputs via several sensory modalities, processes them, and integrates the information in order to produce suitable behavior. Remarkably, such multisensory integration brings all of this information into a unified percept. One way to start investigating this property is to show that perception is better and faster when multimodal stimuli are used compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task in which visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, the percentages of successes and errors, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage in terms of a redundant-signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to a faster motor response from PM, a polysensory association cortical area, remains unclear. PMID:24319421

  6. The Library of Integrated Network-Based Cellular Signatures NIH Program: System-Level Cataloging of Human Cells Response to Perturbations.

    PubMed

    Keenan, Alexandra B; Jenkins, Sherry L; Jagodnik, Kathleen M; Koplev, Simon; He, Edward; Torre, Denis; Wang, Zichen; Dohlman, Anders B; Silverstein, Moshe C; Lachmann, Alexander; Kuleshov, Maxim V; Ma'ayan, Avi; Stathias, Vasileios; Terryn, Raymond; Cooper, Daniel; Forlin, Michele; Koleti, Amar; Vidovic, Dusica; Chung, Caty; Schürer, Stephan C; Vasiliauskas, Jouzas; Pilarczyk, Marcin; Shamsaei, Behrouz; Fazel, Mehdi; Ren, Yan; Niu, Wen; Clark, Nicholas A; White, Shana; Mahi, Naim; Zhang, Lixia; Kouril, Michal; Reichard, John F; Sivaganesan, Siva; Medvedovic, Mario; Meller, Jaroslaw; Koch, Rick J; Birtwistle, Marc R; Iyengar, Ravi; Sobie, Eric A; Azeloglu, Evren U; Kaye, Julia; Osterloh, Jeannette; Haston, Kelly; Kalra, Jaslin; Finkbiener, Steve; Li, Jonathan; Milani, Pamela; Adam, Miriam; Escalante-Chong, Renan; Sachs, Karen; Lenail, Alex; Ramamoorthy, Divya; Fraenkel, Ernest; Daigle, Gavin; Hussain, Uzma; Coye, Alyssa; Rothstein, Jeffrey; Sareen, Dhruv; Ornelas, Loren; Banuelos, Maria; Mandefro, Berhan; Ho, Ritchie; Svendsen, Clive N; Lim, Ryan G; Stocksdale, Jennifer; Casale, Malcolm S; Thompson, Terri G; Wu, Jie; Thompson, Leslie M; Dardov, Victoria; Venkatraman, Vidya; Matlock, Andrea; Van Eyk, Jennifer E; Jaffe, Jacob D; Papanastasiou, Malvina; Subramanian, Aravind; Golub, Todd R; Erickson, Sean D; Fallahi-Sichani, Mohammad; Hafner, Marc; Gray, Nathanael S; Lin, Jia-Ren; Mills, Caitlin E; Muhlich, Jeremy L; Niepel, Mario; Shamu, Caroline E; Williams, Elizabeth H; Wrobel, David; Sorger, Peter K; Heiser, Laura M; Gray, Joe W; Korkola, James E; Mills, Gordon B; LaBarge, Mark; Feiler, Heidi S; Dane, Mark A; Bucher, Elmar; Nederlof, Michel; Sudar, Damir; Gross, Sean; Kilburn, David F; Smith, Rebecca; Devlin, Kaylyn; Margolis, Ron; Derr, Leslie; Lee, Albert; Pillai, Ajay

    2018-01-24

    The Library of Integrated Network-Based Cellular Signatures (LINCS) is an NIH Common Fund program that catalogs how human cells globally respond to chemical, genetic, and disease perturbations. Resources generated by LINCS include experimental and computational methods, visualization tools, molecular and imaging data, and signatures. By assembling an integrated picture of the range of responses of human cells exposed to many perturbations, the LINCS program aims to better understand human disease and to advance the development of new therapies. Perturbations under study include drugs, genetic perturbations, tissue micro-environments, antibodies, and disease-causing mutations. Responses to perturbations are measured by transcript profiling, mass spectrometry, cell imaging, and biochemical methods, among other assays. The LINCS program focuses on cellular physiology shared among tissues and cell types relevant to an array of diseases, including cancer, heart disease, and neurodegenerative disorders. This Perspective describes LINCS technologies, datasets, tools, and approaches to data accessibility and reusability. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Evaluating Nextgen Closely Spaced Parallel Operations Concepts with Validated Human Performance Models: Flight Deck Guidelines

    NASA Technical Reports Server (NTRS)

    Hooey, Becky Lee; Gore, Brian Francis; Mahlstedt, Eric; Foyle, David C.

    2013-01-01

    The objectives of the current research were to develop valid human performance models (HPMs) of approach and land operations; use these models to evaluate the impact of NextGen Closely Spaced Parallel Operations (CSPO) on pilot performance; and draw conclusions regarding flight deck display design and pilot-ATC roles and responsibilities for NextGen CSPO concepts. This document presents guidelines and implications for flight deck display designs and candidate roles and responsibilities. A companion document (Gore, Hooey, Mahlstedt, & Foyle, 2013) provides complete scenario descriptions and results including predictions of pilot workload, visual attention and time to detect off-nominal events.

  8. Modulation of early cortical processing during divided attention to non-contiguous locations.

    PubMed

    Frey, Hans-Peter; Schmid, Anita M; Murphy, Jeremy W; Molholm, Sophie; Lalor, Edmund C; Foxe, John J

    2014-05-01

    We often face the challenge of simultaneously attending to multiple non-contiguous regions of space. There is ongoing debate as to how spatial attention is divided under these situations. Whereas, for several years, the predominant view was that humans could divide the attentional spotlight, several recent studies argue in favor of a unitary spotlight that rhythmically samples relevant locations. Here, this issue was addressed by the use of high-density electrophysiology in concert with the multifocal m-sequence technique to examine visual evoked responses to multiple simultaneous streams of stimulation. Concurrently, we assayed the topographic distribution of alpha-band oscillatory mechanisms, a measure of attentional suppression. Participants performed a difficult detection task that required simultaneous attention to two stimuli in contiguous (undivided) or non-contiguous parts of space. In the undivided condition, the classic pattern of attentional modulation was observed, with increased amplitude of the early visual evoked response and increased alpha amplitude ipsilateral to the attended hemifield. For the divided condition, early visual responses to attended stimuli were also enhanced, and the observed multifocal topographic distribution of alpha suppression was in line with the divided attention hypothesis. These results support the existence of divided attentional spotlights, providing evidence that the corresponding modulation occurs during initial sensory processing time-frames in hierarchically early visual regions, and that suppressive mechanisms of visual attention selectively target distracter locations during divided spatial attention. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Fear-potentiated startle processing in humans: Parallel fMRI and orbicularis EMG assessment during cue conditioning and extinction.

    PubMed

    Lindner, Katja; Neubert, Jörg; Pfannmöller, Jörg; Lotze, Martin; Hamm, Alfons O; Wendt, Julia

    2015-12-01

    Studying neural networks and behavioral indices such as potentiated startle responses during fear conditioning has a long tradition in both animal and human research. However, most of the studies in humans do not link startle potentiation and neural activity during fear acquisition and extinction. Therefore, we examined startle blink responses measured with electromyography (EMG) and brain activity measured with functional MRI simultaneously during differential conditioning. Furthermore, we combined these behavioral fear indices with brain network activity by analyzing the brain activity evoked by the startle probe stimulus presented during conditioned visual threat and safety cues as well as in the absence of visual stimulation. In line with previous research, we found a fear-induced potentiation of the startle blink responses when elicited during a conditioned threat stimulus and a rapid decline of amygdala activity after an initial differentiation of threat and safety cues in early acquisition trials. Increased activation during processing of threat cues was also found in the anterior insula, the anterior cingulate cortex (ACC), and the periaqueductal gray (PAG). More importantly, our results show an increase in brain activity to probes presented during threat cues compared with safety cues, indicating an involvement of the anterior insula, the ACC, the thalamus, and the PAG in fear-potentiated startle processing during early extinction trials. Our study underlines that parallel assessment of fear-potentiated startle in fMRI paradigms can provide a helpful method to investigate common and distinct processing pathways in humans and animals and, thus, contributes to translational research. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Visual motor response of crewmen during a simulated 90 day space mission as measured by the critical task battery

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Jex, H. R.

    1972-01-01

    In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long-duration exposure to confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days. A tracking test battery was administered during this experiment. The battery included a clinical test (critical instability task) related to the subject's dynamic time delay, and a conventional steady tracking task, during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The comprehensive database on human operator tracking behavior obtained in this study demonstrates that sophisticated visual-motor response properties can be efficiently and reliably measured over extended periods of time.
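
    The tracking measures mentioned here (describing functions, dynamic time delay, gain-crossover frequency) are conventionally summarized by the McRuer crossover model, in which the combined operator-plus-controlled-element open loop behaves near crossover like an integrator with a time delay. The short sketch below evaluates that model numerically; the parameter values are illustrative and are not the study's estimates.

      # McRuer crossover model of the human operator plus controlled element:
      # near the crossover frequency the open loop is approximately
      #   Y_OL(jw) ~ (wc / jw) * exp(-j * w * tau_e),
      # where wc is the crossover frequency and tau_e the effective time delay.
      # Parameter values are illustrative.
      import numpy as np

      wc    = 4.0     # crossover frequency, rad/s
      tau_e = 0.25    # effective time delay, s

      w = np.logspace(-1, 1.3, 400)                     # frequency grid, rad/s
      Y_ol = (wc / (1j * w)) * np.exp(-1j * w * tau_e)  # open-loop describing function

      # Numerically locate the unity-gain (crossover) point and its phase margin.
      i_c = np.argmin(np.abs(np.abs(Y_ol) - 1.0))
      phase_margin = 180.0 + np.degrees(np.angle(Y_ol[i_c]))

      print(f"crossover frequency ~ {w[i_c]:.2f} rad/s")
      print(f"phase margin        ~ {phase_margin:.1f} deg")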

  11. Vesicular trafficking of immune mediators in human eosinophils revealed by immunoelectron microscopy.

    PubMed

    Melo, Rossana C N; Weller, Peter F

    2016-10-01

    Electron microscopy (EM)-based techniques are mostly responsible for our current view of cell morphology at the subcellular level and continue to play an essential role in biological research. In cells from the immune system, such as eosinophils, EM has helped to understand how cells package and release mediators involved in immune responses. Ultrastructural investigations of human eosinophils enabled visualization of secretory processes in detail and identification of a robust, vesicular trafficking essential for the secretion of immune mediators via a non-classical secretory pathway associated with secretory (specific) granules. This vesicular system is mainly organized as large tubular-vesicular carriers (Eosinophil Sombrero Vesicles - EoSVs) actively formed in response to cell activation and provides a sophisticated structural mechanism for delivery of granule-stored mediators. In this review, we highlight the application of EM techniques to recognize pools of immune mediators at vesicular compartments and to understand the complex secretory pathway within human eosinophils involved in inflammatory and allergic responses. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Circadian light

    PubMed Central

    2010-01-01

    The present paper reflects a work in progress toward a definition of circadian light, one that should be informed by the thoughtful, century-old evolution of our present definition of light as a stimulus for the human visual system. This work in progress is based upon the functional relationship between optical radiation and its effects on nocturnal melatonin suppression, in large part because the basic data are available in the literature. Discussed here are the fundamental differences between responses by the visual and circadian systems to optical radiation. Brief reviews of photometry, colorimetry, and brightness perception are presented as a foundation for the discussion of circadian light. Finally, circadian light (CLA) and circadian stimulus (CS) calculation procedures based on a published mathematical model of human circadian phototransduction are presented with an example. PMID:20377841
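
    The CLA/CS framework mentioned here maps spectrally weighted irradiance (circadian light, CLA) onto predicted nocturnal melatonin suppression (circadian stimulus, CS) through a saturating function. A minimal sketch of that final step is shown below; the constants follow a commonly cited parameterization of the published phototransduction model and are included for illustration only, and the CLA value itself is assumed to have been computed upstream from the full spectral model.

      # Minimal sketch: convert circadian light (CLA) to circadian stimulus (CS)
      # with a saturating logistic function. Constants are a commonly cited
      # parameterization of the published model and should be checked against
      # the original definition before any real use.
      import numpy as np

      def circadian_stimulus(cla):
          """Predicted nocturnal melatonin suppression (0 to ~0.7) for a given CLA."""
          return 0.7 * (1.0 - 1.0 / (1.0 + (np.asarray(cla) / 355.7) ** 1.1026))

      for cla in (100, 355.7, 1000, 5000):
          print(f"CLA = {cla:7.1f}  ->  CS = {circadian_stimulus(cla):.2f}")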

  13. A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.

    PubMed

    Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter

    2016-01-01

    Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
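
    The Bayesian integration model referred to here rests on standard reliability weighting: each cue contributes in proportion to its inverse variance. The sketch below shows that core computation for a visual (panoramic) and a vestibular estimate of gravity direction; the numbers, and the reduction of the full model to two Gaussian cues, are simplifying assumptions (the paper's model also includes panoramic weights, a visual gain factor, and ocular counterroll).

      # Reliability-weighted (Bayesian) combination of two Gaussian cues to
      # gravity direction. Values are illustrative only.
      import numpy as np

      def combine(mu_vis, sigma_vis, mu_vest, sigma_vest):
          w_vis  = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_vest**2)
          w_vest = 1.0 - w_vis
          mu_hat = w_vis * mu_vis + w_vest * mu_vest
          sigma_hat = np.sqrt(1.0 / (1 / sigma_vis**2 + 1 / sigma_vest**2))
          return mu_hat, sigma_hat

      # A tilted frame biases the visual estimate; head tilt inflates vestibular noise.
      mu_hat, sigma_hat = combine(mu_vis=10.0, sigma_vis=4.0,   # deg
                                  mu_vest=0.0, sigma_vest=8.0)  # deg
      print(f"combined estimate: {mu_hat:.1f} deg (sd {sigma_hat:.1f} deg)")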

  14. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  15. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  16. Visualization and classification of physiological failure modes in ensemble hemorrhage simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Song; Pruett, William Andrew; Hester, Robert

    2015-01-01

    In an emergency situation such as hemorrhage, doctors need to predict which patients need immediate treatment and care. This task is difficult because of the diverse response to hemorrhage across the human population. Ensemble physiological simulations provide a means to sample a diverse range of subjects and may have a better chance of containing the correct solution. However, revealing patterns and trends in an ensemble simulation is a challenging task. We have developed a visualization framework for ensemble physiological simulations. The visualization helps users identify trends among ensemble members, classify ensemble members into subpopulations for analysis, and provide predictions of future events by matching a new patient's data to existing ensembles. We demonstrated the effectiveness of the visualization on simulated physiological data. The lessons learned here can be applied to clinically collected physiological data in the future.

  17. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    PubMed

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving this neural valence response. Copyright © 2017 the authors 0270-6474/17/379510-09$15.00/0.

  18. Neuronal nonlinearity explains greater visual spatial resolution for darks than lights.

    PubMed

    Kremkow, Jens; Jin, Jianzhong; Komban, Stanley J; Wang, Yushi; Lashgari, Reza; Li, Xiaobing; Jansen, Michael; Zaidi, Qasim; Alonso, Jose-Manuel

    2014-02-25

    Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes.
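
    The linearity difference described here can be captured with a toy response function: OFF responses grow roughly linearly with the size of a luminance decrement, while ON responses saturate quickly for increments. The functional forms and constants below are illustrative assumptions, not the fitted model from the study.

      # Toy ON/OFF luminance-response functions illustrating the reported
      # asymmetry: OFF responses roughly linear in decrement size, ON responses
      # saturating (Naka-Rushton-like) for increments. Illustrative only.
      import numpy as np

      def off_response(decrement, gain=1.0):
          """Approximately linear growth with luminance decrement."""
          return gain * np.clip(decrement, 0, None)

      def on_response(increment, r_max=1.0, c50=0.1):
          """Saturating response: small increments already approach r_max."""
          inc = np.clip(increment, 0, None)
          return r_max * inc / (inc + c50)

      for c in np.linspace(0, 1, 6):
          print(f"contrast {c:.1f}:  OFF {off_response(c):.2f}   ON {on_response(c):.2f}")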

  19. Spatiotemporal dynamics of similarity-based neural representations of facial identity.

    PubMed

    Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2017-01-10

    Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.

  20. Human photosensitivity: from pathophysiology to treatment.

    PubMed

    Verrotti, A; Tocco, A M; Salladini, C; Latini, G; Chiarelli, F

    2005-11-01

    Photosensitivity is a condition detected on electroencephalography (EEG) as a paroxysmal reaction to Intermittent Photic Stimulation (IPS). This EEG response, elicited by IPS or by other visual stimuli of daily life, is called the Photo Paroxysmal Response (PPR). PPRs are well documented in epileptic and non-epileptic subjects. Photosensitivity in normal individuals rarely evolves into epilepsy. Photosensitive epilepsy is a rare reflex epilepsy characterized by seizures in photosensitive individuals. The development of modern technology has increased exposure to potential seizure precipitants in people of all ages, but especially in children and adolescents. Currently, video games, computers, and televisions are the most common triggers in the daily life of susceptible persons. The mechanisms generating the PPR are poorly understood, but genetic factors play an important role. The control of visually induced seizures generally has a good prognosis. In patients known to be visually sensitive, avoidance of obvious sources and modification of the stimulus are important and useful for seizure prevention, but the large majority of patients with epilepsy and photosensitivity need antiepileptic drugs.

  1. Applications of Phase-Based Motion Processing

    NASA Technical Reports Server (NTRS)

    Branch, Nicholas A.; Stewart, Eric C.

    2018-01-01

    Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but it requires the computationally intensive Fourier transform. While Riesz pyramids offer an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still produces large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization in both Python and MATLAB.
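
    The phase-visualization idea can be sketched in its simplest form: filter each frame with a complex-valued (quadrature) spatial filter, take the local phase, and inspect frame-to-frame phase differences, which are proportional to small local displacements along the filter's orientation. The Python sketch below uses a single complex Gabor filter on synthetic frames; the full method described above uses a complex steerable pyramid and Fourier-domain processing, so this is a deliberately simplified stand-in.

      # Minimal sketch of phase-based motion analysis: convolve each frame with
      # a complex Gabor filter and look at the frame-to-frame phase difference,
      # which tracks small local displacements along the filter orientation.
      import numpy as np
      from scipy.signal import fftconvolve

      def complex_gabor(size=21, wavelength=8.0, sigma=4.0):
          """Horizontal complex Gabor: even (cosine) + j * odd (sine) component."""
          r = np.arange(size) - size // 2
          x, y = np.meshgrid(r, r)
          envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
          return envelope * np.exp(1j * 2 * np.pi * x / wavelength)

      def local_phase(frame, kernel):
          return np.angle(fftconvolve(frame, kernel, mode="same"))

      # Two synthetic frames: a sinusoidal grating shifted by half a pixel.
      r = np.arange(128)
      x, _ = np.meshgrid(r, r)
      frame0 = np.sin(2 * np.pi * x / 8.0)
      frame1 = np.sin(2 * np.pi * (x - 0.5) / 8.0)

      kernel = complex_gabor()
      dphase = np.angle(np.exp(1j * (local_phase(frame1, kernel)
                                     - local_phase(frame0, kernel))))

      # Large |dphase| marks regions of motion; here the whole grating moved.
      print(f"median |phase change| = {np.median(np.abs(dphase)):.2f} rad")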

  2. Neuronal integration in visual cortex elevates face category tuning to conscious face perception

    PubMed Central

    Fahrenfort, Johannes J.; Snijders, Tineke M.; Heinen, Klaartje; van Gaal, Simon; Scholte, H. Steven; Lamme, Victor A. F.

    2012-01-01

    The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning. PMID:23236162

  3. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  5. The Responsiveness of Biological Motion Processing Areas to Selective Attention Towards Goals

    PubMed Central

    Herrington, John; Nymberg, Charlotte; Faja, Susan; Price, Elinora; Schultz, Robert

    2012-01-01

    A growing literature indicates that visual cortex areas viewed as primarily responsive to exogenous stimuli are susceptible to top-down modulation by selective attention. The present study examines whether brain areas involved in biological motion perception are among these areas – particularly with respect to selective attention towards human movement goals. Fifteen participants completed a point-light biological motion study following a two-by-two factorial design, with one factor representing an exogenous manipulation of human movement goals (goal-directed versus random movement), and the other an endogenous manipulation (a goal identification task versus an ancillary color-change task). Both manipulations yielded increased activation in the human homologue of motion-sensitive area MT+ (hMT+) as well as the extrastriate body area (EBA). The endogenous manipulation was associated with increased right posterior superior temporal sulcus (STS) activation, whereas the exogenous manipulation was associated with increased activation in left posterior STS. Selective attention towards goals activated a portion of left hMT+/EBA only during the perception of purposeful movement, consistent with emerging theories associating this area with the matching of visual motion input to known goal-directed actions. The overall pattern of results indicates that attention towards the goals of human movement activates biological motion areas. Ultimately, selective attention may explain why some studies examining biological motion show activation in hMT+ and EBA, even when using control stimuli with comparable motion properties. PMID:22796987

  6. Spatially invariant coding of numerical information in functionally defined subregions of human parietal cortex.

    PubMed

    Eger, E; Pinel, P; Dehaene, S; Kleinschmidt, A

    2015-05-01

    Macaque electrophysiology has revealed neurons responsive to number in lateral (LIP) and ventral (VIP) intraparietal areas. Recently, fMRI pattern recognition revealed information discriminative of individual numbers in human parietal cortex but without precisely localizing the relevant sites or testing for subregions with different response profiles. Here, we defined the human functional equivalents of LIP (feLIP) and VIP (feVIP) using neurophysiologically motivated localizers. We applied multivariate pattern recognition to investigate whether both regions represent numerical information and whether number codes are position specific or invariant. In a delayed number comparison paradigm with laterally presented numerosities, parietal cortex discriminated between numerosities better than early visual cortex, and discrimination generalized across hemifields in parietal, but not early visual cortex. Activation patterns in the 2 parietal regions of interest did not differ in the coding of position-specific or position-independent number information, but in the expression of a numerical distance effect which was more pronounced in feLIP. Thus, the representation of number in parietal cortex is at least partially position invariant. Both feLIP and feVIP contain information about individual numerosities in humans, but feLIP hosts a coarser representation of numerosity than feVIP, compatible with either broader tuning or a summation code. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
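
    The position-generalization analysis described here (train a decoder on numerosities shown in one hemifield, test it on the other) has a standard multivariate-pattern form. The sketch below runs that logic on simulated voxel patterns with a linear classifier; the array shapes, the classifier choice, and the synthetic data are assumptions for illustration, not the authors' pipeline.

      # Sketch of cross-position (across-hemifield) pattern generalization:
      # train a linear classifier on left-hemifield trials, test on
      # right-hemifield trials. Above-chance transfer would indicate a
      # position-invariant numerosity code. Voxel patterns are simulated.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(2)
      n_trials, n_voxels = 200, 100
      signal = rng.standard_normal(n_voxels)        # shared, position-invariant code

      def simulate(labels, noise_sd=3.0):
          """Voxel patterns = label-dependent signal + independent noise."""
          return (np.outer(2 * labels - 1, signal)
                  + rng.standard_normal((labels.size, n_voxels)) * noise_sd)

      y_left  = rng.integers(0, 2, n_trials)        # two numerosities, left hemifield
      y_right = rng.integers(0, 2, n_trials)        # two numerosities, right hemifield
      X_left, X_right = simulate(y_left), simulate(y_right)

      clf = LogisticRegression(max_iter=1000).fit(X_left, y_left)
      print(f"cross-hemifield decoding accuracy: {clf.score(X_right, y_right):.2f}")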

  7. Visual performance-based image enhancement methodology: an investigation of contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.

    2006-05-01

    While vast numbers of image-enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image-enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images versus two baseline conditions, original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
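
    Of the contrast-enhancement families evaluated here, global histogram equalization is the simplest to state: remap intensities through the image's cumulative distribution so the output gray levels are used more uniformly. The sketch below implements that generic baseline on an 8-bit image; it is an illustration of the technique, not one of the six specific algorithms tested in the study.

      # Baseline global histogram equalization for an 8-bit grayscale image:
      # remap each intensity through the normalized cumulative histogram.
      # Generic illustration only.
      import numpy as np

      def equalize_histogram(image):
          hist = np.bincount(image.ravel(), minlength=256)
          cdf = hist.cumsum().astype(float)
          cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())      # normalize to [0, 1]
          lut = np.round(cdf * 255).astype(np.uint8)             # lookup table
          return lut[image]

      # Low-contrast synthetic image confined to a narrow intensity band.
      rng = np.random.default_rng(3)
      img = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
      out = equalize_histogram(img)
      print(f"input range {img.min()}-{img.max()}, output range {out.min()}-{out.max()}")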

  8. Perceived Synchrony of Frog Multimodal Signal Components Is Influenced by Content and Order.

    PubMed

    Taylor, Ryan C; Page, Rachel A; Klein, Barrett A; Ryan, Michael J; Hunter, Kimberly L

    2017-10-01

    Multimodal signaling is common in communication systems. Depending on the species, individual signal components may be produced synchronously as a result of physiological constraint (fixed) or each component may be produced independently (fluid) in time. For animals that rely on fixed signals, a basic prediction is that asynchrony between the components should degrade the perception of signal salience, reducing receiver response. Male túngara frogs, Physalaemus pustulosus, produce a fixed multisensory courtship signal by vocalizing with two call components (whines and chucks) and inflating a vocal sac (visual component). Using a robotic frog, we tested female responses to variation in the temporal arrangement between acoustic and visual components. When the visual component lagged a complex call (whine + chuck), females largely rejected this asynchronous multisensory signal in favor of the complex call absent the visual cue. When the chuck component was removed from one call, but the robofrog inflation lagged the complex call, females responded strongly to the asynchronous multimodal signal. When the chuck component was removed from both calls, females reversed preference and responded positively to the asynchronous multisensory signal. When the visual component preceded the call, females responded as often to the multimodal signal as to the call alone. These data show that asynchrony of a normally fixed signal does reduce receiver responsiveness. The magnitude and overall response, however, depend on specific temporal interactions between the acoustic and visual components. The sensitivity of túngara frogs to lagging visual cues, but not leading ones, and the influence of acoustic signal content on the perception of visual asynchrony are similar to effects reported in the human psychophysics literature. Virtually all acoustically communicating animals must conduct auditory scene analyses and identify the source of signals. Our data suggest that some basic audiovisual neural integration processes may be at work in the vertebrate brain. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology 2017. This work is written by US Government employees and is in the public domain in the US.

  9. Selective Activation of the Deep Layers of the Human Primary Visual Cortex by Top-Down Feedback.

    PubMed

    Kok, Peter; Bains, Lauren J; van Mourik, Tim; Norris, David G; de Lange, Floris P

    2016-02-08

    In addition to bottom-up input, the visual cortex receives large amounts of feedback from other cortical areas [1-3]. One compelling example of feedback activation of early visual neurons in the absence of bottom-up input occurs during the famous Kanizsa illusion, where a triangular shape is perceived, even in regions of the image where there is no bottom-up visual evidence for it. This illusion increases the firing activity of neurons in the primary visual cortex with a receptive field on the illusory contour [4]. Feedback signals are largely segregated from feedforward signals within each cortical area, with feedforward signals arriving in the middle layer, while top-down feedback avoids the middle layers and predominantly targets deep and superficial layers [1, 2, 5, 6]. Therefore, the feedback-mediated activity increase in V1 during the perception of illusory shapes should lead to a specific laminar activity profile that is distinct from the activity elicited by bottom-up stimulation. Here, we used fMRI at high field (7 T) to empirically test this hypothesis, by probing the cortical response to illusory figures in human V1 at different cortical depths [7-14]. We found that, whereas bottom-up stimulation activated all cortical layers, feedback activity induced by illusory figures led to a selective activation of the deep layers of V1. These results demonstrate the potential for non-invasive recordings of neural activity with laminar specificity in humans and elucidate the role of top-down signals during perceptual processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
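
    The psychometric fits described here, accuracy as a saturating function of presentation time, can be reproduced with a generic form such as p(t) = 0.5 + 0.5(1 - exp(-(t - t0)/tau)). The exact parameterization used by the authors is not given in this record, so the functional form, the lapse-free assumption, and the synthetic data below are illustrative.

      # Fit a saturating-exponential psychometric function of presentation time,
      #   p(t) = 0.5 + 0.5 * (1 - exp(-(t - t0) / tau))   for t >= t0,
      # to two-alternative forced-choice accuracy. Form and data are
      # illustrative; tau plays the role of the reported exponential time constant.
      import numpy as np
      from scipy.optimize import curve_fit

      def psychometric(t, t0, tau):
          return 0.5 + 0.5 * (1.0 - np.exp(-np.clip(t - t0, 0, None) / tau))

      # Synthetic accuracies at the presentation times used in the experiment.
      t = np.array([20, 40, 60, 80, 100, 150, 200], dtype=float)       # ms
      p_obs = psychometric(t, t0=15.0, tau=60.0) \
              + np.random.default_rng(4).normal(0, 0.02, t.size)

      (t0_hat, tau_hat), _ = curve_fit(psychometric, t, p_obs, p0=[10.0, 50.0])
      print(f"estimated onset t0 = {t0_hat:.0f} ms, time constant tau = {tau_hat:.0f} ms")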

  11. Population Response Profiles in Early Visual Cortex Are Biased in Favor of More Valuable Stimuli

    PubMed Central

    Saproo, Sameer

    2010-01-01

    Voluntary and stimulus-driven shifts of attention can modulate the representation of behaviorally relevant stimuli in early areas of visual cortex. In turn, attended items are processed faster and more accurately, facilitating the selection of appropriate behavioral responses. Information processing is also strongly influenced by past experience and recent studies indicate that the learned value of a stimulus can influence relatively late stages of decision making such as the process of selecting a motor response. However, the learned value of a stimulus can also influence the magnitude of cortical responses in early sensory areas such as V1 and S1. These early effects of stimulus value are presumed to improve the quality of sensory representations; however, the nature of these modulations is not clear. They could reflect nonspecific changes in response amplitude associated with changes in general arousal or they could reflect a bias in population responses so that high-value features are represented more robustly. To examine this issue, subjects performed a two-alternative forced choice paradigm with a variable-interval payoff schedule to dynamically manipulate the relative value of two stimuli defined by their orientation (one was rotated clockwise from vertical, the other counterclockwise). Activation levels in visual cortex were monitored using functional MRI and feature-selective voxel tuning functions while subjects performed the behavioral task. The results suggest that value not only modulates the relative amplitude of responses in early areas of human visual cortex, but also sharpens the response profile across the populations of feature-selective neurons that encode the critical stimulus feature (orientation). Moreover, changes in space- or feature-based attention cannot easily explain the results because representations of both the selected and the unselected stimuli underwent a similar feature-selective modulation. This sharpening in the population response profile could theoretically improve the probability of correctly discriminating high-value stimuli from low-value alternatives. PMID:20410360

  12. Binocular coordination in response to stereoscopic stimuli

    NASA Astrophysics Data System (ADS)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

    Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.

  13. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    PubMed Central

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  14. Asymmetrical Cortical Processing of Radial Expansion/Contraction in Infants and Adults

    ERIC Educational Resources Information Center

    Shirai, Nobu; Birtles, Deirdre; Wattam-Bell, John; Yamaguchi, Masami K.; Kanazawa, So; Atkinson, Janette; Braddick, Oliver

    2009-01-01

    We report asymmetrical cortical responses (steady-state visual evoked potentials) to radial expansion and contraction in human infants and adults. Forty-four infants (22 3-month-olds and 22 4-month-olds) and nine adults viewed dynamic dot patterns which cyclically (2.1 Hz) alternate between radial expansion (or contraction) and random directional…

  15. Invisibility of Blackness: Visual Responses of Kerry James Marshall

    ERIC Educational Resources Information Center

    Whitehead, Jessie L.

    2009-01-01

    "Invisible" is defined as (a) unable to be seen, and (b) treated as if unable to be seen; ignored (http://www.askoxford.com/concise_oed/invisible). "Black" is described as (a) of the very darkest color, and (b) relating to a human group having dark-coloured skin, especially of African or Australian Aboriginal ancestry…

  16. Speed in Information Processing with a Computer Driven Visual Display in a Real-time Digital Simulation. M.S. Thesis - Virginia Polytechnic Inst.

    NASA Technical Reports Server (NTRS)

    Kyle, R. G.

    1972-01-01

    Information transfer between the operator and computer-generated display systems is an area where the human factors engineer finds little useful design data relating human performance to system effectiveness. This study used a computer-driven cathode-ray-tube graphic display to quantify human response speed in a sequential information-processing task. The performance criterion was response time to the sixteen cell elements of a square matrix display. A stimulus signal instruction specified selected cell locations by both row and column identification. An equally probable number code, from one to four, was assigned at random to the sixteen cells of the matrix and correspondingly required one of four matched keyed-response alternatives. The display format corresponded to a sequence of diagnostic system maintenance events that enabled the operator to verify prime system status, engage backup redundancy for failed subsystem components, and exercise alternate decision-making judgments. The experimental task bypassed the skilled decision-making element and computer processing time in order to determine a lower bound on basic response speed for a given stimulus/response hardware arrangement.

  17. Visual and acoustic communication in non-human animals: a comparison.

    PubMed

    Rosenthal, G G; Ryan, M J

    2000-09-01

    The visual and auditory systems are two major sensory modalities employed in communication. Although communication in these two sensory modalities can serve analogous functions and evolve in response to similar selection forces, the two systems also operate under different constraints imposed by the environment and the degree to which these sensory modalities are recruited for non-communication functions. Also, the research traditions in each tend to differ, with studies of mechanisms of acoustic communication tending to take a more reductionist tack, often concentrating on single signal parameters, and studies of visual communication tending to be more concerned with multivariate signal arrays in natural environments and higher-level processing of such signals. Each research tradition would benefit from being more expansive in its approach.

  18. Distinct spatial frequency sensitivities for processing faces and emotional expressions.

    PubMed

    Vuilleumier, Patrik; Armony, Jorge L; Driver, Jon; Dolan, Raymond J

    2003-06-01

    High and low spatial frequency information in visual images is processed by distinct neural channels. Using event-related functional magnetic resonance imaging (fMRI) in humans, we show dissociable roles of such visual channels for processing faces and emotional fearful expressions. Neural responses in fusiform cortex, and effects of repeating the same face identity upon fusiform activity, were greater with intact or high-spatial-frequency face stimuli than with low-frequency faces, regardless of emotional expression. In contrast, amygdala responses to fearful expressions were greater for intact or low-frequency faces than for high-frequency faces. An activation of pulvinar and superior colliculus by fearful expressions occurred specifically with low-frequency faces, suggesting that these subcortical pathways may provide coarse fear-related inputs to the amygdala.

  19. Neuroimaging Evidence of a Bilateral Representation for Visually Presented Numbers.

    PubMed

    Grotheer, Mareike; Herrmann, Karl-Heinz; Kovács, Gyula

    2016-01-06

    The clustered architecture of the brain for different visual stimulus categories is one of the most fascinating topics in the cognitive neurosciences. Interestingly, recent research suggests the existence of additional regions for newly acquired stimuli such as letters (letter form area; LFA; Thesen et al., 2012) and numbers (visual number form area; NFA; Shum et al., 2013). However, neuroimaging methods thus far have failed to visualize the NFA in healthy participants, likely due to fMRI signal dropout caused by the air/bone interface of the petrous bone (Shum et al., 2013). In the current study, we combined a 64-channel head coil with high spatial resolution, localized shimming, and liberal smoothing, thereby decreasing the signal dropout and increasing the temporal signal-to-noise ratio in the neighborhood of the NFA. We presented subjects with numbers, letters, false numbers, false letters, objects and their Fourier randomized versions. A group analysis showed significant activations in the inferior temporal gyrus at the previously proposed location of the NFA. Crucially, we found the NFA to be present in both hemispheres. Further, we could identify the NFA on the single-subject level in most of our participants. A detailed analysis of the response profile of the NFA in two separate experiments confirmed the whole-brain results since responses to numbers were significantly higher than to any other presented stimulus in both hemispheres. Our results show for the first time the existence and stimulus selectivity of the NFA in the healthy human brain. This fMRI study shows for the first time a cluster of neurons selective for visually presented numbers in healthy human adults. This visual number form area (NFA) was found in both hemispheres. Crucially, numbers have gained importance for humans too recently for neuronal specialization to be established by evolution. Therefore, investigations of this region will greatly advance our understanding of learning and plasticity in the brain. In addition, these results will aid our knowledge regarding related neurological illnesses (e.g., dyscalculia). To overcome the fMRI signal dropout in the neighborhood of the NFA, we combined high spatial resolution with liberal smoothing. We believe that this approach will be useful to the broad neuroimaging community. Copyright © 2016 the authors 0270-6474/16/360088-10$15.00/0.

  20. Rise and fall of the two visual systems theory.

    PubMed

    Rossetti, Yves; Pisella, Laure; McIntosh, Robert D

    2017-06-01

    Among the many dissociations describing the visual system, the dual theory of two visual systems, respectively dedicated to perception and action, has attracted considerable support. There are psychophysical, anatomical and neuropsychological arguments in favor of this theory. Several behavioral studies that used sensory and motor psychophysical parameters observed differences between perceptual and motor responses. The anatomical network of the visual system in the non-human primate is readily organized into two major pathways, dorsal and ventral. Neuropsychological studies, exploring optic ataxia and visual agnosia as characteristic deficits of these two pathways, led to the proposal of a functional double dissociation between visuomotor and visual perceptual functions. After a major wave of popularity that promoted great advances, particularly in knowledge of visuomotor functions, the guiding theory is now being reconsidered. Firstly, the idea of a double dissociation between optic ataxia and visual form agnosia, as cleanly separating visuomotor from visual perceptual functions, is no longer tenable; optic ataxia does not support a dissociation between perception and action and might be more accurately viewed as a negative image of action blindsight. Secondly, dissociations between perceptual and motor responses highlighted in the framework of this theory concern a very elementary level of action, even automatically guided action routines. Thirdly, the very rich interconnected network of the visual brain yields few arguments in favor of a strict perception/action dissociation. Overall, the dissociation between motor function and perceptual function explored by these behavioral and neuropsychological studies can help define an automatic level of action organization that is deficient in optic ataxia and preserved in action blindsight, and underlines the renewed need to consider the perception-action circle as a functional ensemble. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  1. Within-hemifield competition in early visual areas limits the ability to track multiple objects with attention.

    PubMed

    Störmer, Viola S; Alvarez, George A; Cavanagh, Patrick

    2014-08-27

    It is much easier to divide attention across the left and right visual hemifields than within the same visual hemifield. Here we investigate whether this benefit of dividing attention across separate visual fields is evident at early cortical processing stages. We measured the steady-state visual evoked potential, an oscillatory response of the visual cortex elicited by flickering stimuli, of moving targets and distractors while human observers performed a tracking task. The amplitude of responses at the target frequencies was larger than that of the distractor frequencies when participants tracked two targets in separate hemifields, indicating that attention can modulate early visual processing when it is divided across hemifields. However, these attentional modulations disappeared when both targets were tracked within the same hemifield. These effects were not due to differences in task performance, because accuracy was matched across the tracking conditions by adjusting target speed (with control conditions ruling out effects due to speed alone). To investigate later processing stages, we examined the P3 component over central-parietal scalp sites that was elicited by the test probe at the end of the trial. The P3 amplitude was larger for probes on targets than on distractors, regardless of whether attention was divided across or within a hemifield, indicating that these higher-level processes were not constrained by visual hemifield. These results suggest that modulating early processing stages enables more efficient target tracking, and that within-hemifield competition limits the ability to modulate multiple target representations within the hemifield maps of the early visual cortex. Copyright © 2014 the authors 0270-6474/14/3311526-08$15.00/0.
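
    As an illustration of the frequency-tagging logic behind the steady-state analysis, the sketch below reads amplitudes at two assumed flicker frequencies from a simulated EEG channel; the frequencies, sampling rate and signal are all invented for the example.

        # Sketch: read out steady-state visual evoked potential (SSVEP) amplitudes
        # at two tagging frequencies from the amplitude spectrum of one EEG channel.
        # Frequencies, sampling rate, and the simulated signal are illustrative only.
        import numpy as np

        fs = 500.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 10, 1 / fs)                 # 10 s of data
        f_target, f_distractor = 12.0, 15.0          # assumed tagging frequencies

        # Simulated channel: a larger response at the attended (target) frequency.
        eeg = (2.0 * np.sin(2 * np.pi * f_target * t)
               + 1.0 * np.sin(2 * np.pi * f_distractor * t)
               + np.random.default_rng(2).normal(0, 1, t.size))

        spectrum = np.abs(np.fft.rfft(eeg)) / t.size
        freqs = np.fft.rfftfreq(t.size, 1 / fs)

        for label, f in [("target", f_target), ("distractor", f_distractor)]:
            idx = np.argmin(np.abs(freqs - f))
            print(f"{label} ({f} Hz) amplitude: {spectrum[idx]:.3f}")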

  2. A Rapid Subcortical Amygdala Route for Faces Irrespective of Spatial Frequency and Emotion.

    PubMed

    McFadyen, Jessica; Mermillod, Martial; Mattingley, Jason B; Halász, Veronika; Garrido, Marta I

    2017-04-05

    There is significant controversy over the existence and function of a direct subcortical visual pathway to the amygdala. It is thought that this pathway rapidly transmits low spatial frequency information to the amygdala independently of the cortex, and yet the directionality of this function has never been determined. We used magnetoencephalography to measure neural activity while human participants discriminated the gender of neutral and fearful faces filtered for low or high spatial frequencies. We applied dynamic causal modeling to demonstrate that the most likely underlying neural network consisted of a pulvinar-amygdala connection that was uninfluenced by spatial frequency or emotion, and a cortical-amygdala connection that conveyed high spatial frequencies. Crucially, data-driven neural simulations revealed a clear temporal advantage of the subcortical connection over the cortical connection in influencing amygdala activity. Thus, our findings support the existence of a rapid subcortical pathway that is nonselective in terms of the spatial frequency or emotional content of faces. We propose that the "coarseness" of the subcortical route may be better reframed as "generalized." SIGNIFICANCE STATEMENT The human amygdala coordinates how we respond to biologically relevant stimuli, such as threat or reward. It has been postulated that the amygdala first receives visual input via a rapid subcortical route that conveys "coarse" information, namely, low spatial frequencies. For the first time, the present paper provides direction-specific evidence from computational modeling that the subcortical route plays a generalized role in visual processing by rapidly transmitting raw, unfiltered information directly to the amygdala. This calls into question a widely held assumption across human and animal research that fear responses are produced faster by low spatial frequencies. Our proposed mechanism suggests organisms quickly generate fear responses to a wide range of visual properties, with implications for future research on anxiety-prevention strategies. Copyright © 2017 the authors 0270-6474/17/373864-11$15.00/0.

  3. Parietal blood oxygenation level-dependent response evoked by covert visual search reflects set-size effect in monkeys.

    PubMed

    Atabaki, A; Marciniak, K; Dicke, P W; Karnath, H-O; Thier, P

    2014-03-01

    Distinguishing a target from distractors during visual search is crucial for goal-directed behaviour. The more distractors that are presented with the target, the larger is the subject's error rate. This observation defines the set-size effect in visual search. Neurons in areas related to attention and eye movements, like the lateral intraparietal area (LIP) and frontal eye field (FEF), diminish their firing rates when the number of distractors increases, in line with the behavioural set-size effect. Furthermore, human imaging studies that have tried to delineate cortical areas modulating their blood oxygenation level-dependent (BOLD) response with set size have yielded contradictory results. In order to test whether BOLD imaging of the rhesus monkey cortex yields results consistent with the electrophysiological findings and, moreover, to clarify whether additional cortical regions beyond the two hitherto implicated are involved in this process, we studied monkeys performing a covert visual search task. When varying the number of distractors in the search task, we observed a monotonic increase in error rates when search time was kept constant, as expected if monkeys resorted to a serial search strategy. Visual search consistently evoked robust BOLD activity in the monkey FEF and a region in the intraparietal sulcus in its lateral and middle part, probably involving area LIP. Whereas the BOLD response in the FEF did not depend on set size, the LIP signal increased in parallel with set size. These results demonstrate the virtue of BOLD imaging in monkeys when trying to delineate cortical areas underlying a cognitive process like visual search. However, they also demonstrate the caution needed when inferring neural activity from BOLD activity. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. A flexible pressure responsive device based on the interaction between silver nanoparticles and an aluminum reflector

    NASA Astrophysics Data System (ADS)

    Rankin, Alasdair; McGarry, Steven

    2018-01-01

    The unique and tunable optical properties of metal nanoparticles have attracted intense and sustained academic attention in recent years. In tandem with the demand for low-cost responsive materials, one particular topic of interest is the development of mechanically responsive device structures. This work describes the design, fabrication, and testing of a mechanically responsive plasmonic device structure that has been integrated onto a standard commercial plastic substrate. With a low actuation force and a visually perceivable color shift, this device would be attractive for applications requiring responsive features that can be activated by the human hand.

  5. Emotion Evaluation and Response Slowing in a Non-Human Primate: New Directions for Cognitive Bias Measures of Animal Emotion?

    PubMed Central

    Bethell, Emily J.; Holmes, Amanda; MacLarnon, Ann; Semple, Stuart

    2016-01-01

    The cognitive bias model of animal welfare assessment is informed by studies with humans demonstrating that the interaction between emotion and cognition can be detected using laboratory tasks. A limitation of cognitive bias tasks is the amount of training required by animals prior to testing. A potential solution is to use biologically relevant stimuli that trigger innate emotional responses. Here, we develop a new method to assess emotion in rhesus macaques, informed by paradigms used with humans: emotional Stroop, visual cueing and, in particular, response slowing. In humans, performance on a simple cognitive task can become impaired when emotional distractor content is displayed. Importantly, responses become slower in anxious individuals in the presence of mild threat, a pattern not seen in non-anxious individuals, who are able to effectively process and disengage from the distractor. Here, we present a proof-of-concept study, demonstrating that rhesus macaques show slowing of responses in a simple touch-screen task when emotional content is introduced, but only when they had recently experienced a presumably stressful veterinary inspection. Our results indicate the presence of a subtle “cognitive freeze” response, the measurement of which may provide a means of identifying negative shifts in emotion in animals. PMID:26761035

  6. Methods for Multiloop Identification of Visual and Neuromuscular Pilot Responses.

    PubMed

    Olivari, Mario; Nieuwenhuizen, Frank M; Venrooij, Joost; Bülthoff, Heinrich H; Pollini, Lorenzo

    2015-12-01

    In this paper, identification methods are proposed to estimate the neuromuscular and visual responses of a multiloop pilot model. A conventional and widely used technique for simultaneous identification of the neuromuscular and visual systems makes use of cross-spectral density estimates. This paper shows that this technique requires a specific noninterference hypothesis, often implicitly assumed, that may be difficult to meet during actual experimental designs. A mathematical justification of the necessity of the noninterference hypothesis is given. Furthermore, two methods are proposed that do not have the same limitations. The first method is based on autoregressive models with exogenous inputs, whereas the second one combines cross-spectral estimators with interpolation in the frequency domain. The two identification methods are validated by offline simulations and contrasted with the classic method. The results reveal that the classic method fails when the noninterference hypothesis is not fulfilled; on the contrary, the two proposed techniques give reliable estimates. Finally, the three identification methods are applied to experimental data from a closed-loop control task with pilots. The two proposed techniques give comparable estimates, different from those obtained by the classic method. The differences match those found with the simulations. Thus, the two identification methods provide a good alternative to the classic method and make it possible to simultaneously estimate a human's neuromuscular and visual responses in cases where the classic method fails.
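
    The classic cross-spectral approach referred to above can be sketched as a non-parametric frequency-response estimate, H(f) = S_xy(f)/S_xx(f); the "pilot" dynamics and signals below are synthetic stand-ins, not the authors' models.

        # Sketch: classic non-parametric estimate of an operator's frequency response,
        # H(f) = S_xy(f) / S_xx(f), from cross- and auto-spectral densities.
        # The input signal and the "pilot" dynamics are synthetic stand-ins.
        import numpy as np
        from scipy import signal

        fs = 100.0
        rng = np.random.default_rng(3)
        x = rng.normal(size=20_000)                      # e.g. visual error signal

        # Stand-in "pilot" dynamics: a low-pass response plus remnant noise.
        b, a = signal.butter(2, 5.0, fs=fs)              # 5 Hz second-order low-pass
        y = signal.lfilter(b, a, x) + 0.1 * rng.normal(size=x.size)

        f, s_xx = signal.welch(x, fs=fs, nperseg=1024)
        _, s_xy = signal.csd(x, y, fs=fs, nperseg=1024)
        h_est = s_xy / s_xx                              # estimated frequency response

        print("gain at ~1 Hz:", np.abs(h_est[np.argmin(np.abs(f - 1.0))]))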

  7. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.

    PubMed

    Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming

    2017-10-20

    Convolutional neural network (CNN) driven by image recognition has been shown to be able to explain cortical responses to static pictures at ventral-stream areas. Here, we further showed that such CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream, but also the dorsal stream, albeit to a lesser degree; single-voxel response was visualized as the specific pixel pattern that drove the response, revealing the distinct representation of individual cortical location; cortical activation was synthesized from natural images with high-throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate the feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
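
    In the spirit of the encoding models described above (though not the authors' exact pipeline), a voxel-wise linear mapping from CNN features to fMRI responses can be sketched with ridge regression; the features and responses below are random placeholders.

        # Sketch: a voxel-wise encoding model mapping CNN features to fMRI responses
        # with ridge regression, evaluated by prediction accuracy on held-out data.
        # Features and responses are random placeholders, not real data.
        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(4)
        n_train, n_test, n_features, n_voxels = 800, 200, 512, 50

        features = rng.normal(size=(n_train + n_test, n_features))   # CNN layer activations
        weights = rng.normal(size=(n_features, n_voxels))
        responses = features @ weights + rng.normal(scale=5.0, size=(n_train + n_test, n_voxels))

        model = Ridge(alpha=10.0).fit(features[:n_train], responses[:n_train])
        pred = model.predict(features[n_train:])

        # Per-voxel prediction accuracy: correlation between predicted and observed.
        true = responses[n_train:]
        acc = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(n_voxels)]
        print("mean prediction correlation:", float(np.mean(acc)))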

  8. Dual-color plasmonic enzyme-linked immunosorbent assay based on enzyme-mediated etching of Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Guo, Longhua; Xu, Shaohua; Ma, Xiaoming; Qiu, Bin; Lin, Zhenyu; Chen, Guonan

    2016-09-01

    Colorimetric enzyme-linked immunosorbent assays (ELISAs) utilizing 3,3′,5,5′-tetramethylbenzidine (TMB) as the chromogenic substrate are widely used in hospitals for the detection of a wide range of disease biomarkers. Herein, we demonstrate a strategy to change this single-color display into dual-color responses to improve the accuracy of visual inspection. Our investigation first reveals that the fully oxidized form of TMB (TMB2+) can quantitatively etch gold nanoparticles. Therefore, the incorporation of gold nanoparticles into a commercial TMB-based ELISA kit could generate dual-color responses: the solution color varied gradually from wine red (absorption peak located at ~530 nm) to colorless, and then from colorless to yellow (absorption peak located at ~450 nm) with increasing amounts of target. These dual-color responses effectively improved the sensitivity as well as the accuracy of visual inspection. For example, the proposed dual-color plasmonic ELISA is demonstrated for the detection of prostate-specific antigen (PSA) in human serum with a visual limit of detection (LOD) as low as 0.0093 ng/mL.

  9. Serial dependence in the perception of attractiveness.

    PubMed

    Xia, Ye; Leib, Allison Yamanashi; Whitney, David

    2016-12-01

    The perception of attractiveness is essential for choices of food, object, and mate preference. Like perception of other visual features, perception of attractiveness is stable despite constant changes of image properties due to factors like occlusion, visual noise, and eye movements. Recent results demonstrate that perception of low-level stimulus features and even more complex attributes like human identity are biased towards recent percepts. This effect is often called serial dependence. Some recent studies have suggested that serial dependence also exists for perceived facial attractiveness, though there is also concern that the reported effects are due to response bias. Here we used an attractiveness-rating task to test the existence of serial dependence in perceived facial attractiveness. Our results demonstrate that perceived face attractiveness was pulled by the attractiveness level of facial images encountered up to 6 s prior. This effect was not due to response bias and did not rely on the previous motor response. This perceptual pull increased as the difference in attractiveness between previous and current stimuli increased. Our results reconcile previously conflicting findings and extend previous work, demonstrating that sequential dependence in perception operates across different levels of visual analysis, even at the highest levels of perceptual interpretation.
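
    One common way to quantify serial dependence, sketched below on simulated ratings, is to regress the deviation of the current response from the current stimulus value on the difference between the previous and current stimuli; this is an illustrative analysis under assumed data, not necessarily the authors' exact method.

        # Sketch of one simple serial-dependence analysis (illustrative, not the
        # paper's exact method): the deviation of the current rating from the current
        # face's own attractiveness is regressed on the difference between the
        # previous and current faces. A positive slope indicates a pull toward the
        # previously seen face. All data below are simulated.
        import numpy as np

        rng = np.random.default_rng(5)
        n_trials = 2000
        stim = rng.uniform(1, 7, n_trials)                 # "true" attractiveness per trial

        # Simulate responses pulled 15% toward the previous stimulus, plus noise.
        resp = stim + rng.normal(0, 0.5, n_trials)
        resp[1:] += 0.15 * (stim[:-1] - stim[1:])

        error = resp[1:] - stim[1:]                        # deviation from current stimulus
        pull = stim[:-1] - stim[1:]                        # previous minus current stimulus
        slope = np.polyfit(pull, error, 1)[0]
        print(f"serial-dependence slope: {slope:.3f} (positive = pull toward previous)")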

  10. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  11. Absence of visual experience modifies the neural basis of numerical thinking

    PubMed Central

    Kanjlia, Shipra; Lane, Connor; Feigenson, Lisa; Bedny, Marina

    2016-01-01

    In humans, the ability to reason about mathematical quantities depends on a frontoparietal network that includes the intraparietal sulcus (IPS). How do nature and nurture give rise to the neurobiology of numerical cognition? We asked how visual experience shapes the neural basis of numerical thinking by studying numerical cognition in congenitally blind individuals. Blind (n = 17) and blindfolded sighted (n = 19) participants solved math equations that varied in difficulty (e.g., 27 − 12 = x vs. 7 − 2 = x), and performed a control sentence comprehension task while undergoing fMRI. Whole-cortex analyses revealed that in both blind and sighted participants, the IPS and dorsolateral prefrontal cortices were more active during the math task than the language task, and activity in the IPS increased parametrically with equation difficulty. Thus, the classic frontoparietal number network is preserved in the total absence of visual experience. However, surprisingly, blind but not sighted individuals additionally recruited a subset of early visual areas during symbolic math calculation. The functional profile of these “visual” regions was identical to that of the IPS in blind but not sighted individuals. Furthermore, in blindness, number-responsive visual cortices exhibited increased functional connectivity with prefrontal and IPS regions that process numbers. We conclude that the frontoparietal number network develops independently of visual experience. In blindness, this number network colonizes parts of deafferented visual cortex. These results suggest that human cortex is highly functionally flexible early in life, and point to frontoparietal input as a mechanism of cross-modal plasticity in blindness. PMID:27638209

  12. Comparing visual representations across human fMRI and computational vision

    PubMed Central

    Leeds, Daniel D.; Seibert, Darren A.; Pyles, John A.; Tarr, Michael J.

    2013-01-01

    Feedforward visual object perception recruits a cortical network that is assumed to be hierarchical, progressing from basic visual features to complete object representations. However, the nature of the intermediate features related to this transformation remains poorly understood. Here, we explore how well different computer vision recognition models account for neural object encoding across the human cortical visual pathway as measured using fMRI. These neural data, collected during the viewing of 60 images of real-world objects, were analyzed with a searchlight procedure as in Kriegeskorte, Goebel, and Bandettini (2006): Within each searchlight sphere, the obtained patterns of neural activity for all 60 objects were compared to model responses for each computer recognition algorithm using representational dissimilarity analysis (Kriegeskorte et al., 2008). Although each of the computer vision methods significantly accounted for some of the neural data, among the different models, the scale invariant feature transform (Lowe, 2004), encoding local visual properties gathered from “interest points,” was best able to accurately and consistently account for stimulus representations within the ventral pathway. More generally, when present, significance was observed in regions of the ventral-temporal cortex associated with intermediate-level object perception. Differences in model effectiveness and the neural location of significant matches may be attributable to the fact that each model implements a different featural basis for representing objects (e.g., more holistic or more parts-based). Overall, we conclude that well-known computer vision recognition systems may serve as viable proxies for theories of intermediate visual object representation. PMID:24273227
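
    The representational dissimilarity analysis used above can be sketched as follows; the "neural" and "model" patterns are random placeholders, and the two RDMs are compared over their upper triangles.

        # Sketch: representational dissimilarity analysis. Build an RDM
        # (1 - Pearson correlation between condition patterns) for neural data and
        # for a model, then compare the two RDMs with a Spearman correlation over
        # their upper triangles. Patterns below are random placeholders.
        import numpy as np
        from scipy.stats import spearmanr

        def rdm(patterns):
            """patterns: (n_conditions, n_features) -> (n_conditions, n_conditions) RDM."""
            return 1.0 - np.corrcoef(patterns)

        rng = np.random.default_rng(6)
        n_objects, n_voxels, n_model_units = 60, 200, 500

        neural = rng.normal(size=(n_objects, n_voxels))        # e.g. searchlight patterns
        model = rng.normal(size=(n_objects, n_model_units))    # e.g. local-feature descriptors

        iu = np.triu_indices(n_objects, k=1)
        rho, p = spearmanr(rdm(neural)[iu], rdm(model)[iu])
        print(f"RDM correlation (Spearman rho): {rho:.3f}, p = {p:.3f}")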

  13. Different Signal Enhancement Pathways of Attention and Consciousness Underlie Perception in Humans.

    PubMed

    van Boxtel, Jeroen J A

    2017-06-14

    It is not yet known whether attention and consciousness operate through similar or largely different mechanisms. Visual processing mechanisms are routinely characterized by measuring contrast response functions (CRFs). In this report, behavioral CRFs were obtained in humans (both males and females) by measuring afterimage durations over the entire range of inducer stimulus contrasts to reveal visual mechanisms behind attention and consciousness. Deviations relative to the standard CRF, i.e., gain functions, describe the strength of signal enhancement, which were assessed for both changes due to attentional task and conscious perception. It was found that attention displayed a response-gain function, whereas consciousness displayed a contrast-gain function. Through model comparisons, which only included contrast-gain modulations, both contrast-gain and response-gain effects can be explained with a two-level normalization model, in which consciousness affects only the first level and attention affects only the second level. These results demonstrate that attention and consciousness can effectively show different gain functions because they operate through different signal enhancement mechanisms. SIGNIFICANCE STATEMENT The relationship between attention and consciousness is still debated. Mapping contrast response functions (CRFs) has allowed (neuro)scientists to gain important insights into the mechanistic underpinnings of visual processing. Here, the influence of both attention and consciousness on these functions were measured and they displayed a strong dissociation. First, attention lowered CRFs, whereas consciousness raised them. Second, attention manifests itself as a response-gain function, whereas consciousness manifests itself as a contrast-gain function. Extensive model comparisons show that these results are best explained in a two-level normalization model in which consciousness affects only the first level, whereas attention affects only the second level. These findings show dissociations between both the computational mechanisms behind attention and consciousness and the perceptual consequences that they induce. Copyright © 2017 the authors 0270-6474/17/375912-11$15.00/0.
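
    The contrast-gain versus response-gain distinction can be made concrete with a standard Naka-Rushton contrast response function; the parameter values below are arbitrary illustrations, not fitted values from the study.

        # Sketch: a Naka-Rushton contrast response function and the two canonical
        # modulations discussed above. Response gain scales the asymptote (r_max);
        # contrast gain shifts the semisaturation contrast (c50).
        # Parameter values are arbitrary illustrations.
        import numpy as np

        def crf(c, r_max=1.0, c50=0.3, n=2.0, baseline=0.0):
            """Naka-Rushton: R(c) = r_max * c**n / (c**n + c50**n) + baseline."""
            return r_max * c**n / (c**n + c50**n) + baseline

        contrast = np.linspace(0.01, 1.0, 5)
        base = crf(contrast)
        response_gain = crf(contrast, r_max=1.5)    # multiplicative scaling of the asymptote
        contrast_gain = crf(contrast, c50=0.15)     # leftward shift: less contrast needed

        for c, b, rg, cg in zip(contrast, base, response_gain, contrast_gain):
            print(f"c={c:.2f}  baseline={b:.3f}  response-gain={rg:.3f}  contrast-gain={cg:.3f}")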

  14. Developing photoreceptor-based models of visual attraction in riverine tsetse, for use in the engineering of more-attractive polyester fabrics for control devices.

    PubMed

    Santer, Roger D

    2017-03-01

    Riverine tsetse transmit the parasites that cause the most prevalent form of human African trypanosomiasis, Gambian HAT. In response to the imperative for cheap and efficient tsetse control, insecticide-treated 'tiny targets' have been developed through refinement of tsetse attractants based on blue fabric panels. However, modern blue polyesters used for this purpose attract far fewer tsetse than traditional phthalogen blue cottons. Therefore, colour engineering polyesters for improved attractiveness has great potential for tiny target development. Because flies have markedly different photoreceptor spectral sensitivities from humans, and the responses of these photoreceptors provide the inputs to their visually guided behaviours, it is essential that polyester colour engineering be guided by fly photoreceptor-based explanations of tsetse attraction. To this end, tsetse attraction to differently coloured fabrics was recently modelled using the calculated excitations elicited in a generic set of fly photoreceptors as predictors. However, electrophysiological data from tsetse indicate the potential for modified spectral sensitivities versus the generic pattern, and processing of fly photoreceptor responses within segregated achromatic and chromatic channels has long been hypothesised. Thus, I constructed photoreceptor-based models explaining the attraction of G. f. fuscipes to differently coloured tiny targets recorded in a previously published investigation, under differing assumptions about tsetse spectral sensitivities and organisation of visual processing. Models separating photoreceptor responses into achromatic and chromatic channels explained attraction better than earlier models combining weighted photoreceptor responses in a single mechanism, regardless of the spectral sensitivities assumed. However, common principles for fabric colour engineering were evident across the complete set of models examined, and were consistent with earlier work. Tools for the calculation of fly photoreceptor excitations are available with this paper, and the ways in which these and photoreceptor-based models of attraction can provide colorimetric values for the engineering of more-attractively coloured polyester fabrics are discussed.
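
    A toy version of the photoreceptor-excitation calculation underlying such models is sketched below; the spectral sensitivities, illuminant, fabric reflectance, and channel definitions are invented placeholders rather than measured fly or tsetse data.

        # Sketch: photoreceptor quantum catches and simple achromatic/chromatic
        # signals for a coloured fabric. Spectral sensitivities, illuminant, and
        # reflectance are invented placeholders, not measured tsetse/fly data.
        import numpy as np

        wl = np.arange(300, 701, 5).astype(float)        # wavelength (nm)
        dl = 5.0                                         # integration step (nm)

        def gaussian_sensitivity(peak, width=40.0):
            return np.exp(-0.5 * ((wl - peak) / width) ** 2)

        # Hypothetical receptor classes peaking in the UV, blue, and green.
        sensitivities = {"uv": gaussian_sensitivity(350),
                         "blue": gaussian_sensitivity(440),
                         "green": gaussian_sensitivity(520)}

        illuminant = np.ones_like(wl)                    # flat daylight stand-in
        blue_fabric = np.exp(-0.5 * ((wl - 460) / 50.0) ** 2)   # reflects mostly blue

        catches = {name: float(np.sum(s * blue_fabric * illuminant)) * dl
                   for name, s in sensitivities.items()}

        achromatic = catches["green"]                    # a luminance-like channel (assumed)
        chromatic = catches["blue"] / (catches["blue"] + catches["green"])
        print("catches:", {k: round(v, 1) for k, v in catches.items()})
        print("achromatic:", round(achromatic, 1), " chromatic (blue vs green):", round(chromatic, 3))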

  15. Developing photoreceptor-based models of visual attraction in riverine tsetse, for use in the engineering of more-attractive polyester fabrics for control devices

    PubMed Central

    2017-01-01

    Riverine tsetse transmit the parasites that cause the most prevalent form of human African trypanosomiasis, Gambian HAT. In response to the imperative for cheap and efficient tsetse control, insecticide-treated ‘tiny targets’ have been developed through refinement of tsetse attractants based on blue fabric panels. However, modern blue polyesters used for this purpose attract far fewer tsetse than traditional phthalogen blue cottons. Therefore, colour engineering polyesters for improved attractiveness has great potential for tiny target development. Because flies have markedly different photoreceptor spectral sensitivities from humans, and the responses of these photoreceptors provide the inputs to their visually guided behaviours, it is essential that polyester colour engineering be guided by fly photoreceptor-based explanations of tsetse attraction. To this end, tsetse attraction to differently coloured fabrics was recently modelled using the calculated excitations elicited in a generic set of fly photoreceptors as predictors. However, electrophysiological data from tsetse indicate the potential for modified spectral sensitivities versus the generic pattern, and processing of fly photoreceptor responses within segregated achromatic and chromatic channels has long been hypothesised. Thus, I constructed photoreceptor-based models explaining the attraction of G. f. fuscipes to differently coloured tiny targets recorded in a previously published investigation, under differing assumptions about tsetse spectral sensitivities and organisation of visual processing. Models separating photoreceptor responses into achromatic and chromatic channels explained attraction better than earlier models combining weighted photoreceptor responses in a single mechanism, regardless of the spectral sensitivities assumed. However, common principles for fabric colour engineering were evident across the complete set of models examined, and were consistent with earlier work. Tools for the calculation of fly photoreceptor excitations are available with this paper, and the ways in which these and photoreceptor-based models of attraction can provide colorimetric values for the engineering of more-attractively coloured polyester fabrics are discussed. PMID:28306721

  16. Pharmacological and rAAV Gene Therapy Rescue of Visual Functions in a Blind Mouse Model of Leber Congenital Amaurosis

    PubMed Central

    Batten, Matthew L; Imanishi, Yoshikazu; Tu, Daniel C; Doan, Thuy; Zhu, Li; Pang, Jijing; Glushakova, Lyudmila; Moise, Alexander R; Baehr, Wolfgang; Van Gelder, Russell N.; Hauswirth, William W; Rieke, Fred; Palczewski, Krzysztof

    2005-01-01

    Background Leber congenital amaurosis (LCA), a heterogeneous early-onset retinal dystrophy, accounts for ~15% of inherited congenital blindness. One cause of LCA is loss of the enzyme lecithin:retinol acyl transferase (LRAT), which is required for regeneration of the visual photopigment in the retina. Methods and Findings An animal model of LCA, the Lrat −/− mouse, recapitulates clinical features of the human disease. Here, we report that two interventions—intraocular gene therapy and oral pharmacologic treatment with novel retinoid compounds—each restore retinal function to Lrat −/− mice. Gene therapy using intraocular injection of recombinant adeno-associated virus carrying the Lrat gene successfully restored electroretinographic responses to ~50% of wild-type levels (p < 0.05 versus wild-type and knockout controls), and pupillary light responses (PLRs) of Lrat −/− mice increased ~2.5 log units (p < 0.05). Pharmacological intervention with orally administered pro-drugs 9-cis-retinyl acetate and 9-cis-retinyl succinate (which chemically bypass the LRAT-catalyzed step in chromophore regeneration) also caused long-lasting restoration of retinal function in LRAT-deficient mice and increased ERG response from ~5% of wild-type levels in Lrat −/− mice to ~50% of wild-type levels in treated Lrat −/− mice (p < 0.05 versus wild-type and knockout controls). The interventions produced markedly increased levels of visual pigment from undetectable levels to 600 pmoles per eye in retinoid treated mice, and ~1,000-fold improvements in PLR and electroretinogram sensitivity. The techniques were complementary when combined. Conclusion Intraocular gene therapy and pharmacologic bypass provide highly effective and complementary means for restoring retinal function in this animal model of human hereditary blindness. These complementary methods offer hope of developing treatment to restore vision in humans with certain forms of hereditary congenital blindness. PMID:16250670

  17. Mapping the structure of perceptual and visual-motor abilities in healthy young adults.

    PubMed

    Wang, Lingling; Krasich, Kristina; Bel-Bahar, Tarik; Hughes, Lauren; Mitroff, Stephen R; Appelbaum, L Gregory

    2015-05-01

    The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Developing and evaluating a target-background similarity metric for camouflage detection.

    PubMed

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and could potentially serve as a camouflage assessment tool. In this study, we quantify the agreement between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. The results also show that this is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
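
    The Universal Image Quality Index has a simple closed form; the sketch below applies it globally to two synthetic images (the published index averages the same quantity over sliding windows), purely as an illustration of the metric.

        # Sketch: the Universal Image Quality Index (Wang & Bovik) between a target
        # image x and a background/reference image y, computed here over whole images
        # for simplicity (the published index averages the same quantity over sliding
        # windows). Images below are synthetic placeholders.
        import numpy as np

        def uiqi(x, y):
            x, y = x.astype(float).ravel(), y.astype(float).ravel()
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()
            cov = ((x - mx) * (y - my)).mean()
            return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

        rng = np.random.default_rng(7)
        background = rng.random((64, 64))
        similar = background + rng.normal(0, 0.05, background.shape)   # well-camouflaged
        different = rng.random((64, 64))                                # poorly camouflaged

        print("UIQI, similar target   :", round(uiqi(similar, background), 3))
        print("UIQI, dissimilar target:", round(uiqi(different, background), 3))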

  19. Exploring responses to art in adolescence: a behavioral and eye-tracking study.

    PubMed

    Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2014-01-01

    Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception.

  20. Exploring Responses to Art in Adolescence: A Behavioral and Eye-Tracking Study

    PubMed Central

    Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2014-01-01

    Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception. PMID:25048813

  1. Human visual response to nuclear particle exposures

    NASA Technical Reports Server (NTRS)

    Tobias, C. A.; Budinger, T. F.; Lyman, J. T.

    1972-01-01

    Experiments with accelerated helium ions were performed in an effort to localize the site of initial radiation interactions in the eye that lead to light flash observations by astronauts during spaceflight. The character and efficiency of helium ion induction of visual sensations depended on the state of dark adaptation of the retina; also, the same events were seen with different efficiencies and details when particle flux density changed. It was concluded that fast particles cause interactions in the retina, particularly in the receptor layer, and thus give rise to the sensations of light flashes, streaks, and supernovae.

  2. V1 projection zone signals in human macular degeneration depend on task, not stimulus.

    PubMed

    Masuda, Yoichiro; Dumoulin, Serge O; Nakadomari, Satoshi; Wandell, Brian A

    2008-11-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization).

  3. V1 Projection Zone Signals in Human Macular Degeneration Depend on Task, not Stimulus

    PubMed Central

    Dumoulin, Serge O.; Nakadomari, Satoshi; Wandell, Brian A.

    2008-01-01

    We used functional magnetic resonance imaging to assess abnormal cortical signals in humans with juvenile macular degeneration (JMD). These signals have been interpreted as indicating large-scale cortical reorganization. Subjects viewed a stimulus passively or performed a task; the task was either related or unrelated to the stimulus. During passive viewing, or while performing tasks unrelated to the stimulus, there were large unresponsive V1 regions. These regions included the foveal projection zone, and we refer to them as the lesion projection zone (LPZ). In 3 JMD subjects, we observed highly significant responses in the LPZ while they performed stimulus-related judgments. In control subjects, where we presented the stimulus only within the peripheral visual field, there was no V1 response in the foveal projection zone in any condition. The difference between JMD and control responses can be explained by hypotheses that have very different implications for V1 reorganization. In controls retinal afferents carry signals indicating the presence of a uniform (zero-contrast) region of the visual field. Deletion of retinal input may 1) spur the formation of new cortical pathways that carry task-dependent signals (reorganization), or 2) unmask preexisting task-dependent cortical signals that ordinarily are suppressed by the deleted signals (no reorganization). PMID:18250083

  4. Independence between implicit and explicit processing as revealed by the Simon effect.

    PubMed

    Lo, Shih-Yu; Yeh, Su-Ling

    2011-09-01

    Studies showing human behavior influenced by subliminal stimuli mainly focus on implicit processing per se, and little is known about its interaction with explicit processing. We examined this by using the Simon effect, wherein a task-irrelevant spatial distracter interferes with lateralized response. Lo and Yeh (2008) found that the visual Simon effect, although it occurred when participants were aware of the visual distracters, did not occur with subliminal visual distracters. We used the same paradigm and examined whether subliminal and supra-threshold stimuli are processed independently by adding a supra-threshold auditory distracter to ascertain whether it would interact with the subliminal visual distracter. Results showed auditory Simon effect, but there was still no visual Simon effect, indicating that supra-threshold and subliminal stimuli are processed separately in independent streams. In contrast to the traditional view that implicit processing precedes explicit processing, our results suggest that they operate independently in a parallel fashion. Copyright © 2010 Elsevier Inc. All rights reserved.

  5. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
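
    The vRF modulations discussed above can be illustrated with a simple isotropic Gaussian receptive-field model; the stimulus, receptive-field parameters and modulation sizes below are invented for the example.

        # Sketch: an isotropic Gaussian voxel receptive field (vRF) model and the
        # three modulation types discussed above: a gain change, a size change, and
        # a position shift toward an attended location. All parameters are invented.
        import numpy as np

        xx, yy = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))

        def vrf_response(stimulus, x0, y0, size, gain=1.0, baseline=0.0):
            rf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * size ** 2))
            return baseline + gain * np.sum(rf * stimulus)

        # A small stimulus patch at the attended location (3, 0).
        stimulus = ((xx - 3) ** 2 + yy ** 2 < 1.5 ** 2).astype(float)

        base = vrf_response(stimulus, x0=6.0, y0=0.0, size=2.0)
        gain = vrf_response(stimulus, x0=6.0, y0=0.0, size=2.0, gain=1.3)   # gain increase
        size = vrf_response(stimulus, x0=6.0, y0=0.0, size=2.5)             # RF enlargement
        shift = vrf_response(stimulus, x0=4.5, y0=0.0, size=2.0)            # shift toward target

        print(f"baseline {base:.2f} | gain {gain:.2f} | size {size:.2f} | shift {shift:.2f}")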

  6. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  7. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  8. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201
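
    The linear versus nonlinear distinction drawn above can be illustrated with a toy stimulus code: a linear read-out can learn a mapping carried by a single simple feature, but cannot perfectly learn an XOR-like (symmetry-style) combination of the same features. The feature coding below is illustrative, not the study's stimuli.

        # Sketch: why a linear read-out suffices for a "simple feature" mapping but
        # not for an XOR-like (symmetry) mapping of the same two features. The toy
        # feature coding below is illustrative, not the stimuli used in the study.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Two binary shape features for four stimuli.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

        linear_labels = X[:, 0]                                  # respond to feature 1 alone
        nonlinear_labels = (X[:, 0] == X[:, 1]).astype(float)    # "symmetric" (XNOR) rule

        for name, y in [("linear task", linear_labels), ("nonlinear task", nonlinear_labels)]:
            clf = LogisticRegression().fit(X, y)
            acc = clf.score(X, y)
            print(f"{name}: training accuracy with a linear boundary = {acc:.2f}")
        # The XNOR rule is not linearly separable, so no linear boundary can exceed
        # 3 of 4 correct on these four stimuli.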

  9. Transient visual pathway critical for normal development of primate grasping behavior.

    PubMed

    Mundinano, Inaki-Carril; Fox, Dylan M; Kwan, William C; Vidaurre, Diego; Teo, Leon; Homman-Ludiye, Jihane; Goodale, Melvyn A; Leopold, David A; Bourne, James A

    2018-02-06

    An evolutionary hallmark of anthropoid primates, including humans, is the use of vision to guide precise manual movements. These behaviors are reliant on a specialized visual input to the posterior parietal cortex. Here, we show that normal primate reaching-and-grasping behavior depends critically on a visual pathway through the thalamic pulvinar, which is thought to relay information to the middle temporal (MT) area during early life and then swiftly withdraws. Small MRI-guided lesions to a subdivision of the inferior pulvinar subnucleus (PIm) in the infant marmoset monkey led to permanent deficits in reaching-and-grasping behavior in the adult. This functional loss coincided with the abnormal anatomical development of multiple cortical areas responsible for the guidance of actions. Our study reveals that the transient retino-pulvinar-MT pathway underpins the development of visually guided manual behaviors in primates that are crucial for interacting with complex features in the environment.

  10. Visual motion integration for perception and pursuit

    NASA Technical Reports Server (NTRS)

    Stone, L. S.; Beutter, B. R.; Lorenceau, J.

    2000-01-01

    To examine the relationship between visual motion processing for perception and pursuit, we measured the pursuit eye-movement and perceptual responses to the same complex-motion stimuli. We show that humans can both perceive and pursue the motion of line-figure objects, even when partial occlusion makes the resulting image motion vastly different from the underlying object motion. Our results show that both perception and pursuit can perform largely accurate motion integration, i.e. the selective combination of local motion signals across the visual field to derive global object motion. Furthermore, because we manipulated perceived motion while keeping image motion identical, the observed parallel changes in perception and pursuit show that the motion signals driving steady-state pursuit and perception are linked. These findings disprove current pursuit models whose control strategy is to minimize retinal image motion, and suggest a new framework for the interplay between visual cortex and cerebellum in visuomotor control.

  11. Pathological alterations typical of human Tay-Sachs disease, in the retina of a deep-sea fish.

    PubMed

    Fishelson, L; Delarea, Y; Galil, B S

    2000-08-01

    Micrographs of retinas from the deep-sea fish Cataetyx laticeps revealed visual cells containing membranous whorls in the ellipsoids of the inner segments resulting from stretching and modifications of the mitochondria membranes and their cristae. These pathological structures seem to be homologous to the whorls observed in retinas of human carriers of Tay-Sachs disease. This disease, a genetic disorder, is found in humans and some mammals. Our findings in fish suggest that the gene responsible can be found throughout the vertebrate evolutionary tree, possibly dormant in most taxa.

  12. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108

  13. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.

  14. Visual information without thermal energy may induce thermoregulatory-like cardiovascular responses

    PubMed Central

    2013-01-01

    Background Human core body temperature is kept quasi-constant regardless of varying thermal environments. It is well known that physiological thermoregulatory systems are under the control of central and peripheral sensory organs that are sensitive to thermal energy. If these systems wrongly respond to non-thermal stimuli, it may disturb human homeostasis. Methods Fifteen participants viewed video images evoking hot or cold impressions in a thermally constant environment. Cardiovascular indices were recorded during the experiments. Correlations between the ‘hot-cold’ impression scores and cardiovascular indices were calculated. Results The changes of heart rate, cardiac output, and total peripheral resistance were significantly correlated with the ‘hot-cold’ impression scores, and the tendencies were similar to those in actual thermal environments corresponding to the impressions. Conclusions The present results suggest that visual information without any thermal energy can affect physiological thermoregulatory systems at least superficially. To avoid such ‘virtual’ environments disturbing human homeostasis, further study and more attention are needed. PMID:24373765

  15. Audiovisual integration facilitates monkeys' short-term memory.

    PubMed

    Bigelow, James; Poremba, Amy

    2016-07-01

    Many human behaviors are known to benefit from audiovisual integration, including language and communication, recognizing individuals, social decision making, and memory. Exceptionally little is known about the contributions of audiovisual integration to behavior in other primates. The current experiment investigated whether short-term memory in nonhuman primates is facilitated by the audiovisual presentation format. Three macaque monkeys that had previously learned an auditory delayed matching-to-sample (DMS) task were trained to perform a similar visual task, after which they were tested with a concurrent audiovisual DMS task with equal proportions of auditory, visual, and audiovisual trials. Parallel to outcomes in human studies, accuracy was higher and response times were faster on audiovisual trials than on either unisensory trial type. Unexpectedly, two subjects exhibited superior unimodal performance on auditory trials, a finding that contrasts with previous studies but likely reflects their training history. Our results provide the first demonstration of a bimodal memory advantage in nonhuman primates, lending further validation to their use as a model for understanding audiovisual integration and memory processing in humans.
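
    As a rough illustration of the behavioral comparison above, the snippet below summarises accuracy and response time by presentation modality for a toy trial table; the trial records are invented and do not reproduce the monkeys' data.

        # Summarise delayed matching-to-sample performance by modality.
        import pandas as pd

        trials = pd.DataFrame({
            "modality": ["auditory", "visual", "audiovisual"] * 4,
            "correct":  [1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1],
            "rt_ms":    [820, 790, 700, 900, 810, 690, 850, 830, 710, 880, 800, 720],
        })

        summary = trials.groupby("modality").agg(
            accuracy=("correct", "mean"),
            mean_rt_ms=("rt_ms", "mean"),
        )
        print(summary)   # in this toy table, audiovisual trials are most accurate and fastest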

  16. Visual optics: an engineering approach

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2010-11-01

    The human visual system interprets information from visible light to build a representation of the world surrounding the body. It derives color by comparing the responses of the three types of cone photoreceptors in the eye; the long-, medium-, and short-wavelength cones are most sensitive to the red, green, and blue portions of the visible spectrum, respectively. We simulate color vision for normal eyes and examine the effects of dyes, filters, glasses, and windows on color perception when the test image is illuminated with a D65 light source. Beyond color perception, the human eye can suffer from diseases and disorders. The eye can be seen as an optical instrument that has its own 'eye print'. We present current methods and technologies that can capture and correct the wavefront aberrations of the human eye, focusing on the Seidel aberration formulas, Zernike polynomials, the Shack-Hartmann sensor, LASIK, interferogram fringes of aberrations, and the Talbot effect.
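
    The color-vision simulation described above reduces, at its core, to integrating the product of illuminant, surface reflectance, and cone sensitivity over wavelength for the long, medium, and short cones. The sketch below shows only that core step; the Gaussian cone curves, flat D65 stand-in, and reflectance ramp are crude placeholders, not the measured data a real simulation would use.

        # Relative L, M, S cone excitations for one surface under a placeholder illuminant.
        import numpy as np

        wavelengths = np.arange(400, 701, 10)                  # nm, 10 nm steps

        def bump(peak_nm, width_nm):
            """Crude Gaussian stand-in for a cone sensitivity curve."""
            return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

        cones = {"L": bump(560, 50), "M": bump(530, 45), "S": bump(420, 30)}
        illuminant = np.ones_like(wavelengths, dtype=float)    # flat stand-in for D65
        reflectance = np.linspace(0.2, 0.8, wavelengths.size)  # placeholder surface

        stimulus = illuminant * reflectance
        responses = {name: float(np.sum(stimulus * s) * 10.0)  # Riemann sum, 10 nm bins
                     for name, s in cones.items()}
        print(responses)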

  17. Physics and psychophysics of color reproduction

    NASA Astrophysics Data System (ADS)

    Giorgianni, Edward J.

    1991-08-01

    The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.

  18. Primary structure and functional characterization of a Drosophila dopamine receptor with high homology to human D1/5 receptors.

    PubMed

    Gotzes, F; Balfanz, S; Baumann, A

    1994-01-01

    Members of the superfamily of G-protein coupled receptors share significant similarities in sequence and transmembrane architecture. We have isolated a Drosophila homologue of the mammalian dopamine receptor family using a low stringency hybridization approach. The deduced amino acid sequence is approximately 70% homologous to the human D1/D5 receptors. When expressed in HEK 293 cells, the Drosophila receptor stimulates cAMP production in response to dopamine application. This effect was mimicked by SKF 38393, a specific D1 receptor agonist, but inhibited by dopaminergic antagonists such as butaclamol and flupentixol. In situ hybridization revealed that the Drosophila dopamine receptor is highly expressed in the somata of the optic lobes. This suggests that the receptor might be involved in the processing of visual information and/or visual learning in invertebrates.
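
    The receptor pharmacology above (cAMP production rising with dopamine, mimicked by SKF 38393 and blocked by antagonists) is typically quantified by fitting a sigmoidal concentration-response curve; the sketch below fits such a curve to invented cAMP values and is not the authors' analysis.

        # Fit a four-parameter logistic (Hill) curve to hypothetical cAMP data.
        import numpy as np
        from scipy.optimize import curve_fit

        def hill(log_conc, bottom, top, log_ec50, slope):
            return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_conc) * slope))

        conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])     # dopamine, M (invented)
        camp = np.array([1.1, 1.4, 3.0, 7.8, 9.6, 9.9])           # cAMP, arbitrary units

        params, _ = curve_fit(hill, np.log10(conc), camp, p0=[1.0, 10.0, -7.0, 1.0])
        bottom, top, log_ec50, slope = params
        print(f"EC50 ~ {10 ** log_ec50:.2e} M, Hill slope ~ {slope:.2f}")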

  19. Neural Responses to Central and Peripheral Objects in the Lateral Occipital Cortex

    PubMed Central

    Wang, Bin; Guo, Jiayue; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Huang, Qiang; Wu, Jinglong

    2016-01-01

    Human object recognition and classification depend on the retinal location where the object is presented and decrease as eccentricity increases. The lateral occipital complex (LOC) is thought to be preferentially involved in the processing of objects, and its neural responses exhibit category biases to objects presented in the central visual field. However, the nature of LOC neural responses to central and peripheral objects remains largely unclear. In the present study, we used functional magnetic resonance imaging (fMRI) and a wide-view presentation system to investigate neural responses to four categories of objects (faces, houses, animals, and cars) in the primary visual cortex (V1) and the lateral visual cortex, including the LOC and the retinotopic areas LO-1 and LO-2. In these regions, the neural responses to objects decreased as the distance between the location of presentation and center fixation increased, which is consistent with the diminished perceptual ability that was found for peripherally presented images. The LOC and LO-2 exhibited significantly positive neural responses at all eccentricities (0–55°), but LO-1 exhibited significantly positive responses only at central eccentricities (0–22°). By measuring the ratio relative to V1 (RRV1), we further demonstrated that eccentricity, category, and the interaction between them significantly affected neural processing in these regions. LOC, LO-1, and LO-2 exhibited larger RRV1s when stimuli were presented at an eccentricity of 0° than when they were presented at greater eccentricities. In LOC and LO-2, the RRV1s for images of faces, animals, and cars showed an increasing trend when the images were presented at eccentricities of 11 to 33°. However, the RRV1s for houses showed a decreasing trend in LO-1 and no difference in the LOC and LO-2. We hypothesize that when houses and images from the other categories were presented in the peripheral visual field, they were processed via different strategies in the lateral visual cortex. PMID:26924972
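
    The 'ratio relative to V1' (RRV1) measure used above is, in essence, a region's response divided by the V1 response to the same stimulus and eccentricity. The sketch below computes such a ratio for placeholder response values; the exact normalisation in the study may differ.

        # RRV1-style ratio: region response / V1 response at matched eccentricities.
        import numpy as np

        eccentricities = [0, 11, 22, 33, 44, 55]                   # degrees
        v1_response  = np.array([2.0, 1.8, 1.5, 1.1, 0.8, 0.6])    # percent signal change (invented)
        loc_response = np.array([2.4, 1.9, 1.5, 1.0, 0.6, 0.4])

        rrv1 = loc_response / v1_response
        for ecc, ratio in zip(eccentricities, rrv1):
            print(f"{ecc:>2} deg: RRV1 = {ratio:.2f}")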

  20. Attention to Multiple Objects Facilitates Their Integration in Prefrontal and Parietal Cortex.

    PubMed

    Kim, Yee-Joon; Tsai, Jeffrey J; Ojemann, Jeffrey; Verghese, Preeti

    2017-05-10

    Selective attention is known to interact with perceptual organization. In visual scenes, individual objects that are distinct and discriminable may occur on their own, or in groups such as a stack of books. The main objective of this study is to probe the neural interaction that occurs between individual objects when attention is directed toward one or more objects. Here we record steady-state visual evoked potentials via electrocorticography to directly assess the responses to individual stimuli and to their interaction. When human participants attend to two adjacent stimuli, prefrontal and parietal cortex shows a selective enhancement of only the neural interaction between stimuli, but not the responses to individual stimuli. When only one stimulus is attended, the neural response to that stimulus is selectively enhanced in prefrontal and parietal cortex. In contrast, early visual areas generally manifest responses to individual stimuli and to their interaction regardless of attentional task, although a subset of the responses is modulated similarly to prefrontal and parietal cortex. Thus, the neural representation of the visual scene as one progresses up the cortical hierarchy becomes more highly task-specific and represents either individual stimuli or their interaction, depending on the behavioral goal. Attention to multiple objects facilitates an integration of objects akin to perceptual grouping. SIGNIFICANCE STATEMENT Individual objects in a visual scene are seen as distinct entities or as parts of a whole. Here we examine how attention to multiple objects affects their neural representation. Previous studies measured single-cell or fMRI responses and obtained only aggregate measures that combined the activity to individual stimuli as well as their potential interaction. Here, we directly measure electrocorticographic steady-state responses corresponding to individual objects and to their interaction using a frequency-tagging technique. Attention to two stimuli increases the interaction component that is a hallmark for perceptual integration of stimuli. Furthermore, this stimulus-specific interaction is represented in prefrontal and parietal cortex in a task-dependent manner.
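
    In a frequency-tagging design like the one above, each stimulus flickers at its own frequency, responses to the individual stimuli appear at those tag frequencies, and their interaction appears at intermodulation frequencies such as the sum of the tags. The sketch below extracts those spectral amplitudes from a simulated signal; the tag frequencies, mixing weights, and noise are assumptions for illustration.

        # Read out tag and intermodulation amplitudes from a toy frequency-tagged signal.
        import numpy as np

        fs, dur = 1000.0, 10.0                     # sampling rate (Hz), duration (s)
        t = np.arange(0, dur, 1.0 / fs)
        f1, f2 = 7.0, 11.0                         # tag frequencies (Hz)

        signal = (1.0 * np.sin(2 * np.pi * f1 * t)
                  + 0.8 * np.sin(2 * np.pi * f2 * t)
                  + 0.3 * np.sin(2 * np.pi * (f1 + f2) * t)   # interaction term
                  + 0.5 * np.random.randn(t.size))            # noise

        spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / t.size  # approximate sinusoid amplitude
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        def amp_at(f_hz):
            return spectrum[np.argmin(np.abs(freqs - f_hz))]

        for label, f in [("stimulus 1", f1), ("stimulus 2", f2), ("interaction", f1 + f2)]:
            print(f"{label} ({f:.0f} Hz): amplitude {amp_at(f):.2f}")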
