Sample records for temporal visual field

  1. Temporal visual field defects are associated with monocular inattention in chiasmal pathology.

    PubMed

    Fledelius, Hans C

    2009-11-01

    Chiasmal lesions have been shown to give rise occasionally to uni-ocular temporal inattention, which cannot be compensated for by volitional eye movement. This article describes the assessments of 46 such patients with chiasmal pathology. It aims to determine the clinical spectrum of this disorder, including interference with reading. Retrospective consecutive observational clinical case study over a 7-year period comprising 46 patients with chiasmal field loss of varying degrees. Reading behaviour during monocular visual acuity testing was observed in consecutive patients who appeared unable to read optotypes on the temporal side of the chart. Visual fields were evaluated by kinetic (Goldmann) and static (Octopus) techniques. Five patients who clearly manifested this condition are presented in more detail. The results of visual field testing were related to the absence or presence of uni-ocular visual inattentive behaviour during distance visual acuity testing and/or reading of printed text. Despite normal eye movements, the 46 patients making up the clinical series perceived only the optotypes in the nasal part of the chart, in one eye or in both, when tested for each eye in turn. The temporal optotypes were ignored, and this behaviour persisted despite instruction to search for any additional letters temporal to those that had been seen. This uni-ocular visual inattention affected both eyes in 18 patients and one eye only in the remaining 28. Partial or full reversibility after treatment was recorded in 21 of the 39 patients for whom reliable follow-up data were available. Reading of printed text was affected in 24 individuals, and permanently so in six. A neglect-like spatial unawareness and a lack of cognitive compensation for varying degrees of temporal visual field loss were present in all the patients observed. Not only is visual field loss a feature of chiasmal pathology; the higher visual function of attending to the temporal visual field, by consciously invoking appropriate compensatory eye movements, was also absent. This suggests the possibility of 'trans-synaptic dysfunction' caused by loss of visual input to higher visual centres. When inattention to the temporal side is manifest on monocular visual testing, it should raise the suspicion of chiasmal pathology.

  2. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input.
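    The 'constraint of sparse coding' referred to above has a standard static formulation that may help place the temporal extension in context; the following is a generic sketch of the Olshausen-and-Field-style objective commonly used for still images, not the temporal restricted Boltzmann machine actually trained in this paper. An image patch x_i is reconstructed from a dictionary D with coefficients a_i that are penalized for being non-zero:

      \[
        \min_{D,\,\{a_i\}} \; \sum_i \left( \lVert x_i - D a_i \rVert_2^2 + \lambda \lVert a_i \rVert_1 \right)
      \]

    Under this constraint the learned columns of D come to resemble localized, oriented simple-cell receptive fields; the temporal approach described in the abstract additionally aims for coefficient activity that is sparse over time as well as over space.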

  3. A matter of time: improvement of visual temporal processing during training-induced restoration of light detection performance

    PubMed Central

    Poggel, Dorothe A.; Treutwein, Bernhard; Sabel, Bernhard A.; Strasburger, Hans

    2015-01-01

    The issue of how basic sensory and temporal processing are related is still unresolved. We studied temporal processing, as assessed by simple visual reaction times (RT) and double-pulse resolution (DPR), in patients with partial vision loss after visual pathway lesions and investigated whether vision restoration training (VRT), a training program designed to improve light detection performance, would also affect temporal processing. Perimetric and campimetric visual field tests as well as maps of DPR thresholds and RT were acquired before and after a 3-month training period with VRT. Patient performance was compared to that of age-matched healthy subjects. Intact visual field size increased during training. Averaged across the entire visual field, DPR remained constant while RT improved slightly. However, in transition zones between the blind and intact areas (areas of residual vision), where patients had shown between 20 and 80% stimulus detection probability in pre-training visual field tests, both DPR and RT improved markedly. The magnitude of improvement depended on the defect depth (or degree of intactness) of the respective region at baseline. Inter-individual training outcome variability was very high, with some patients showing little change and others showing performance approaching that of healthy controls. Training-induced improvement of light detection in patients with visual field loss thus generalized to dynamic visual functions. The findings suggest that similar neural mechanisms may underlie the impairment and subsequent training-induced functional recovery of both light detection and temporal processing. PMID:25717307

  4. Change of temporal-order judgment of sounds during long-lasting exposure to large-field visual motion.

    PubMed

    Teramoto, Wataru; Watanabe, Hiroshi; Umemura, Hiroyuki

    2008-01-01

    The perceived temporal order of external successive events does not always follow their physical temporal order. We examined the contribution of self-motion mechanisms to the perception of temporal order in the auditory modality. We measured perceptual biases in the judgment of the temporal order of two short sounds presented successively, while participants experienced visually induced self-motion (yaw-axis circular vection) elicited by viewing long-lasting large-field visual motion. In experiment 1, a pair of white-noise patterns was presented to participants at various stimulus-onset asynchronies through headphones, while they experienced visually induced self-motion. The perceived temporal order of the auditory events was modulated by the direction of the visual motion (or self-motion). Specifically, the sound presented to the ear in the direction opposite to the visual motion (i.e. the heading direction) was perceived prior to the sound presented to the ear in the same direction. Experiments 2A and 2B were designed to reduce the contributions of decisional and/or response processes. In experiment 2A, the directional cueing of the background (left or right) and the response dimension (high pitch or low pitch) were not spatially associated. In experiment 2B, participants were additionally asked to report which of the two sounds was perceived 'second'. Almost the same results as in experiment 1 were observed, suggesting that the change in the temporal order of auditory events during large-field visual motion reflects a change in perceptual processing. Experiment 3 showed that the same biases in the temporal-order judgments of auditory events were produced by concurrent actual self-motion delivered with a rotatory chair. In experiment 4, using a small display, we showed that 'pure' prolonged exposure to visual motion without the sensation of self-motion was not responsible for this phenomenon. These results are consistent with previous studies reporting a change in the perceived temporal order of visual or tactile events depending on the direction of self-motion. Hence, self-motion induced by large-field visual motion (i.e. optic flow) can affect the perceived temporal order of successive external events across various modalities.

  5. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part II: cognitive factors shaping visual field maps.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Part I described the topography of visual performance over the life span. Performance decline was explained only partly by deterioration of the optical apparatus. Part II therefore examines the influence of higher visual and cognitive functions. Visual field maps of static perimetry, double-pulse resolution (DPR), reaction times, and contrast thresholds, obtained for 95 healthy observers, were correlated with measures of visual attention (alertness, divided attention, spatial cueing), visual search, and the size of the attention focus. Correlations with the attentional variables were substantial, particularly for variables of temporal processing. DPR thresholds depended on the size of the attention focus. Partialling the cognitive variables out of the correlations between topographical variables and participant age substantially reduced those correlations. There is a systematic top-down influence on the aging of visual functions, particularly of temporal variables, that largely explains performance decline and the change of the topography over the life span.

  6. The Tölz Temporal Topography Study: mapping the visual field across the life span. Part I: the topography of light detection and temporal-information processing.

    PubMed

    Poggel, Dorothe A; Treutwein, Bernhard; Calmanti, Claudia; Strasburger, Hans

    2012-08-01

    Temporal performance parameters vary across the visual field. Their topographical distributions relative to each other and relative to basic visual performance measures and their relative change over the life span are unknown. Our goal was to characterize the topography and age-related change of temporal performance. We acquired visual field maps in 95 healthy participants (age: 10-90 years): perimetric thresholds, double-pulse resolution (DPR), reaction times (RTs), and letter contrast thresholds. DPR and perimetric thresholds increased with eccentricity and age; the periphery showed a more pronounced age-related increase than the center. RT increased only slightly and uniformly with eccentricity. It remained almost constant up to the age of 60, a marked change occurring only above 80. Overall, age was a poor predictor of functionality. Performance decline could be explained only in part by the aging of the retina and optic media. In Part II, we therefore examine higher visual and cognitive functions.

  7. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a framework for expressing visual operations that is both theoretically well founded and general. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
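    As a concrete illustration of the spatial part of this framework (a simplified sketch; the paper's full families also cover the affine-covariant, chromatic, and time-causal spatio-temporal cases), the idealized receptive fields reduce to Gaussian kernels and their derivatives over a scale parameter s:

      \[
        g(x, y;\, s) = \frac{1}{2\pi s}\, e^{-(x^2 + y^2)/(2s)},
        \qquad
        T_{\alpha\beta}(x, y;\, s) = \partial_{x^{\alpha}}\partial_{y^{\beta}}\, g(x, y;\, s)
      \]

    First-order derivatives give oriented, odd-symmetric profiles resembling simple cells, while the Laplacian of the Gaussian gives the center-surround profile associated with retinal and LGN responses.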

  8. Spatial limitations of fast temporal segmentation are best modeled by V1 receptive fields.

    PubMed

    Goodbourn, Patrick T; Forte, Jason D

    2013-11-22

    The fine temporal structure of events influences the spatial grouping and segmentation of visual-scene elements. Although adjacent regions flickering asynchronously at high temporal frequencies appear identical, the visual system signals a boundary between them. These "phantom contours" disappear when the gap between regions exceeds a critical value (g(max)). We used g(max) as an index of neuronal receptive-field size to compare with known receptive-field data from along the visual pathway and thus infer the location of the mechanism responsible for fast temporal segmentation. Observers viewed a circular stimulus reversing in luminance contrast at 20 Hz for 500 ms. A gap of constant retinal eccentricity segmented each stimulus quadrant; on each trial, participants identified a target quadrant containing counterphasing inner and outer segments. Through varying the gap width, g(max) was determined at a range of retinal eccentricities. We found that g(max) increased from 0.3° to 0.8° for eccentricities from 2° to 12°. These values correspond to receptive-field diameters of neurons in primary visual cortex that have been reported in single-cell and fMRI studies and are consistent with the spatial limitations of motion detection. In a further experiment, we found that modulation sensitivity depended critically on the length of the contour and could be predicted by a simple model of spatial summation in early cortical neurons. The results suggest that temporal segmentation is achieved by neurons at the earliest cortical stages of visual processing, most likely in primary visual cortex.

  9. A Spatial and Temporal Frequency Based Figure-Ground Processor

    NASA Astrophysics Data System (ADS)

    Weisstein, Naomi; Wong, Eva

    1990-03-01

    Recent findings in visual psychophysics have shown that figure-ground perception can be specified by the spatial and temporal response characteristics of the visual system. Higher spatial frequency regions of the visual field are perceived as figure and lower spatial frequency regions are perceived as background (Klymenko and Weisstein, 1986; Wong and Weisstein, 1989). Higher temporal frequency regions are seen as background and lower temporal frequency regions are seen as figure (Wong and Weisstein, 1987; Klymenko, Weisstein, Topolski, and Hsieh, 1988). Thus, high spatial and low temporal frequencies appear to be associated with figure, and low spatial and high temporal frequencies appear to be associated with background.

  10. Charles Bonnet syndrome in hemianopia, following antero-mesial temporal lobectomy for drug-resistant epilepsy.

    PubMed

    Contardi, Sara; Rubboli, Guido; Giulioni, Marco; Michelucci, Roberto; Pizza, Fabio; Gardella, Elena; Pinardi, Federica; Bartolomei, Ilaria; Tassinari, Carlo Alberto

    2007-09-01

    Charles Bonnet syndrome (CBS) is a disorder characterized by the occurrence of complex visual hallucinations in patients with acquired impairment of vision and without psychiatric disorders. In spite of the high incidence of visual field defects following antero-mesial temporal lobectomy for refractory temporal lobe epilepsy, reports of CBS in patients who underwent this surgical procedure are surprisingly rare. We describe a patient operated on for drug-resistant epilepsy. As a result of left antero-mesial temporal resection, she presented with right homonymous hemianopia. A few days after surgery, she started complaining of visual hallucinations, such as static or moving "Lilliputian" human figures, or countryside scenes, restricted to the hemianopic field. The patient was fully aware of their fictitious nature. These disturbances disappeared progressively over a few weeks. The incidence of CBS associated with visual field defects following epilepsy surgery might be underestimated. Patients with post-surgical CBS should be reassured that it is not an epileptic phenomenon and that it has a benign, self-limiting course which does not usually require treatment.

  11. Visual Field Map Clusters in High-Order Visual Processing: Organization of V3A/V3B and a New Cloverleaf Cluster in the Posterior Superior Temporal Sulcus

    PubMed Central

    Barton, Brian; Brewer, Alyssa A.

    2017-01-01

    The cortical hierarchy of the human visual system has been shown to be organized around retinal spatial coordinates throughout much of low- and mid-level visual processing. These regions contain visual field maps (VFMs), each of which follows the organization of the retina, with neighboring aspects of the visual field processed in neighboring cortical locations. On a larger, macrostructural scale, groups of such sensory cortical field maps (CFMs) in both the visual and auditory systems are organized into roughly circular cloverleaf clusters. CFMs within clusters tend to share properties such as receptive field distribution, cortical magnification, and processing specialization. Here we use fMRI and population receptive field (pRF) modeling to investigate the extent of VFM and cluster organization with an examination of higher-level visual processing in temporal cortex and compare these measurements to mid-level visual processing in dorsal occipital cortex. In human temporal cortex, the posterior superior temporal sulcus (pSTS) has been implicated in various neuroimaging studies as subserving higher-order vision, including face processing, biological motion perception, and multimodal audiovisual integration. In human dorsal occipital cortex, the transverse occipital sulcus (TOS) contains the V3A/B cluster, which comprises two VFMs subserving mid-level motion perception and visuospatial attention. For the first time, we present the organization of VFMs in pSTS in a cloverleaf cluster. This pSTS cluster contains four VFMs bilaterally: pSTS-1:4. We characterize these pSTS VFMs as relatively small at ∼125 mm2 with relatively large pRF sizes of ∼2–8° of visual angle across the central 10° of the visual field. V3A and V3B are ∼230 mm2 in surface area, with pRF sizes here similarly ∼1–8° of visual angle across the same region. In addition, cortical magnification measurements show that a larger proportion of the pSTS VFM surface area is devoted to the peripheral visual field than is the case in the V3A/B cluster. Reliability measurements of VFMs in pSTS and V3A/B reveal that these cloverleaf clusters are remarkably consistent and functionally differentiable. Our findings add to the growing number of measurements of widespread sensory CFMs organized into cloverleaf clusters, indicating that CFMs and cloverleaf clusters may both be fundamental organizing principles in cortical sensory processing. PMID:28293182
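    For context, pRF modeling of the kind used here is commonly implemented with an isotropic two-dimensional Gaussian whose overlap with the stimulus aperture, convolved with a hemodynamic response function, predicts each voxel's BOLD time course; the formulation below is that standard variant and is not necessarily the authors' exact implementation:

      \[
        p(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x} - \mathbf{x}_0 \rVert^2}{2\sigma^2}\right),
        \qquad
        \hat{y}(t) = \beta \left[ \int s(\mathbf{x}, t)\, p(\mathbf{x})\, d\mathbf{x} \right] * h(t)
      \]

    Here x_0 is the pRF center (visual field position), sigma is the pRF size reported above in degrees of visual angle, s(x, t) is the binarized stimulus aperture, h(t) is the hemodynamic response, and beta is a scaling factor fitted per voxel.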

  12. Peripheral resolution and contrast sensitivity: Effects of stimulus drift.

    PubMed

    Venkataraman, Abinaya Priya; Lewis, Peter; Unsbo, Peter; Lundström, Linda

    2017-04-01

    Optimal temporal modulation of the stimulus can improve foveal contrast sensitivity. This study evaluates the characteristics of the peripheral spatiotemporal contrast sensitivity function in normal-sighted subjects. The purpose is to identify a temporal modulation that can potentially improve the remaining peripheral visual function in subjects with central visual field loss. The high-contrast resolution cut-off for grating stimuli with four temporal frequencies (0, 5, 10 and 15 Hz drift) was first evaluated in the 10° nasal visual field. Resolution contrast sensitivity for all temporal frequencies was then measured at four spatial frequencies between 0.5 cycles per degree (cpd) and the measured stationary cut-off. All measurements were performed with eccentric optical correction. Similar to foveal vision, peripheral contrast sensitivity is highest for a combination of low spatial frequency and 5-10 Hz drift. At higher spatial frequencies, there was a decrease in contrast sensitivity with 15 Hz drift. Despite this decrease, the resolution cut-off did not vary greatly between the different temporal frequencies tested. Additional measurements of contrast sensitivity at 0.5 cpd and of the resolution cut-off for stationary (0 Hz) and 7.5 Hz stimuli, performed at 10, 15, 20 and 25° in the nasal visual field, showed the same characteristics across eccentricities.

  13. Extent of resection in temporal lobectomy for epilepsy. II. Memory changes and neurologic complications.

    PubMed

    Katz, A; Awad, I A; Kong, A K; Chelune, G J; Naugle, R I; Wyllie, E; Beauchamp, G; Lüders, H

    1989-01-01

    We present correlations of extent of temporal lobectomy for intractable epilepsy with postoperative memory changes (20 cases) and abnormalities of visual field and neurologic examination (45 cases). Postoperative magnetic resonance imaging (MRI) in the coronal plane was used to quantify anteroposterior extent of resection of various quadrants of the temporal lobe, using a 20-compartment model of that structure. The Wechsler Memory Scale-Revised (WMS-R) was administered preoperatively and postoperatively. Postoperative decrease in percentage of retention of verbal material correlated with extent of medial resection of left temporal lobe, whereas decrease in percentage of retention of visual material correlated with extent of medial resection of right temporal lobe. These correlations approached but did not reach statistical significance. Extent of resection correlated significantly with the presence of visual field defect on perimetry testing but not with severity, denseness, or congruity of the defect. There was no correlation between postoperative dysphasia and extent of resection in any quadrant. Assessment of extent of resection after temporal lobectomy allows a rational interpretation of postoperative neurologic deficits in light of functional anatomy of the temporal lobe.

  14. Multifocal Visual Evoked Potential in Eyes With Temporal Hemianopia From Chiasmal Compression: Correlation With Standard Automated Perimetry and OCT Findings.

    PubMed

    Sousa, Rafael M; Oyamada, Maria K; Cunha, Leonardo P; Monteiro, Mário L R

    2017-09-01

    To verify whether multifocal visual evoked potentials (mfVEP) can differentiate eyes with temporal hemianopia due to chiasmal compression from healthy controls, and to assess the relationship between mfVEP, standard automated perimetry (SAP), and Fourier-domain optical coherence tomography (FD-OCT) macular and peripapillary retinal nerve fiber layer (RNFL) thickness measurements. Twenty-seven eyes with permanent temporal visual field (VF) defects from chiasmal compression on SAP and 43 eyes of healthy controls underwent mfVEP recording and FD-OCT scanning. The mfVEP was elicited using a stimulus pattern of 60 sectors, and the responses were averaged for the four quadrants and two hemifields. Optical coherence tomography macular measurements were averaged in quadrants and halves, while peripapillary RNFL thickness was averaged in four sectors around the disc. Visual field loss was estimated in four quadrants and each half of the 24-2 strategy test points. Multifocal visual evoked potential measurements in the two groups were compared using generalized estimating equations, and the correlations between mfVEP, VF, and OCT findings were quantified. Multifocal visual evoked potential-measured temporal P1 and N2 amplitudes were significantly smaller in patients than in controls. No significant difference in amplitude was observed for nasal parameters. A significant correlation was found between mfVEP amplitudes and temporal VF loss, and between mfVEP amplitudes and the corresponding OCT-measured macular and RNFL thickness parameters. Multifocal visual evoked potential amplitude parameters were able to differentiate eyes with temporal hemianopia from controls and were significantly correlated with VF and OCT findings, suggesting that mfVEP is a useful tool for the detection of visual abnormalities in patients with chiasmal compression.

  15. Sunglasses with thick temples and frame constrict temporal visual field.

    PubMed

    Denion, Eric; Dugué, Audrey Emmanuelle; Augy, Sylvain; Coffin-Pichonnet, Sophie; Mouriaux, Frédéric

    2013-12-01

    Our aim was to compare the impact of two types of sunglasses on visual field and glare: one ("thick sunglasses") with a thick plastic frame and wide temples and one ("thin sunglasses") with a thin metal frame and thin temples. Using the Goldmann perimeter, visual field surface areas (cm²) were calculated as projections on a 30-cm virtual cupola. A V4 test object was used, from seen to unseen, in 15 healthy volunteers in the primary position of gaze ("base visual field"), then allowing eye motion ("eye motion visual field") without glasses, then with "thin sunglasses," followed by "thick sunglasses." Visual field surface area differences greater than the 14% reproducibility error of the method and having a p < 0.05 were considered significant. A glare test was done using a surgical lighting system pointed at the eye(s) at different incidence angles. No significant "base visual field" or "eye motion visual field" surface area variations were noted when comparing tests done without glasses and with the "thin sunglasses." In contrast, a 22% "eye motion visual field" surface area decrease (p < 0.001) was noted when comparing tests done without glasses and with "thick sunglasses." This decrease was most severe in the temporal quadrant (-33%; p < 0.001). All subjects reported less lateral glare with the "thick sunglasses" than with the "thin sunglasses" (p < 0.001). The better protection from lateral glare offered by "thick sunglasses" is offset by the much poorer ability to use lateral space exploration; this results in a loss of most, if not all, of the additional visual field gained through eye motion.

  16. Temporal stability of visually selective responses in intracranial field potentials recorded from human occipital and temporal lobes

    PubMed Central

    Bansal, Arjun K.; Singer, Jedediah M.; Anderson, William S.; Golby, Alexandra; Madsen, Joseph R.

    2012-01-01

    The cerebral cortex needs to maintain information for long time periods while at the same time being capable of learning and adapting to changes. The degree of stability of physiological signals in the human brain in response to external stimuli over temporal scales spanning hours to days remains unclear. Here, we quantitatively assessed the stability across sessions of visually selective intracranial field potentials (IFPs) elicited by brief flashes of visual stimuli presented to 27 subjects. The interval between sessions ranged from hours to multiple days. We considered electrodes that showed robust visual selectivity to different shapes; these electrodes were typically located in the inferior occipital gyrus, the inferior temporal cortex, and the fusiform gyrus. We found that IFP responses showed a strong degree of stability across sessions. This stability was evident in averaged responses as well as single-trial decoding analyses, at the image exemplar level as well as at the category level, across different parts of visual cortex, and for three different visual recognition tasks. These results establish a quantitative evaluation of the degree of stationarity of visually selective IFP responses within and across sessions and provide a baseline for studies of cortical plasticity and for the development of brain-machine interfaces. PMID:22956795
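    To illustrate the kind of single-trial decoding analysis mentioned above, the sketch below trains a linear classifier on IFP features from one recording session and tests it on a later session, which is one simple way to quantify cross-session stability. This is a generic example, not the authors' pipeline, and all data and variable names are hypothetical:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import LinearSVC

      def cross_session_decoding(X_day1, y_day1, X_day2, y_day2):
          """Train on session 1, test on session 2.

          X_*: (n_trials, n_features) IFP features, e.g. per-electrode mean
               amplitude in a post-stimulus window.
          y_*: (n_trials,) labels of the presented image categories.
          """
          clf = make_pipeline(StandardScaler(), LinearSVC())
          clf.fit(X_day1, y_day1)
          return clf.score(X_day2, y_day2)  # cross-session decoding accuracy

      # Hypothetical usage with random placeholder data:
      rng = np.random.default_rng(0)
      X1, X2 = rng.normal(size=(200, 50)), rng.normal(size=(180, 50))
      y1, y2 = rng.integers(0, 5, 200), rng.integers(0, 5, 180)
      print(cross_session_decoding(X1, y1, X2, y2))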

  17. Relationship between slow visual processing and reading speed in people with macular degeneration

    PubMed Central

    Cheong, Allen MY; Legge, Gordon E; Lawrence, Mary G; Cheung, Sing-Hang; Ruff, Mary A

    2007-01-01

    Purpose: People with macular degeneration (MD) often read slowly even with adequate magnification to compensate for acuity loss. Oculomotor deficits may affect reading in MD, but cannot fully explain the substantial reduction in reading speed. Central-field loss (CFL) is often a consequence of macular degeneration, necessitating the use of peripheral vision for reading. We hypothesized that slower temporal processing of visual patterns in peripheral vision is a factor contributing to slow reading performance in MD patients. Methods: Fifteen subjects with MD, including 12 with CFL, and five age-matched control subjects were recruited. Maximum reading speed and critical print size were measured with RSVP (Rapid Serial Visual Presentation). Temporal processing speed was studied by measuring letter-recognition accuracy for strings of three randomly selected letters centered at fixation for a range of exposure times. Temporal threshold was defined as the exposure time yielding 80% recognition accuracy for the central letter. Results: Temporal thresholds for the MD subjects ranged from 159 to 5881 ms, much longer than values for age-matched controls in central vision (13 ms, p<0.01). The mean temporal threshold for the 11 MD subjects who used eccentric fixation (1555.8 ± 1708.4 ms) was much longer than the mean temporal threshold (97.0 ± 34.2 ms, p<0.01) for the age-matched controls at 10° in the lower visual field. Individual temporal thresholds accounted for 30% of the variance in reading speed (p<0.05). Conclusion: The significant association between increased temporal threshold for letter recognition and reduced reading speed is consistent with the hypothesis that slower visual processing of letter recognition is one of the factors limiting reading speed in MD subjects. PMID:17881032
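    As a worked illustration of how such a temporal threshold is usually obtained (the abstract does not state which psychometric function was fitted, so the Weibull form below is an assumption), recognition accuracy as a function of exposure time t can be modeled and the 80%-correct point solved for:

      \[
        \Psi(t) = \gamma + (1 - \gamma)\left(1 - 2^{-(t/\tau)^{\beta}}\right),
        \qquad
        t_{80} = \tau \left[\log_2\!\frac{1 - \gamma}{0.2}\right]^{1/\beta}
      \]

    where gamma is the guessing rate (roughly 1/26 when naming one of 26 letters), tau is a time constant, beta is the slope, and t_80 is the exposure time at which accuracy reaches 80%, i.e. the temporal threshold reported above.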

  18. Visual field defects after temporal lobe resection for epilepsy.

    PubMed

    Steensberg, Alvilda T; Olsen, Ane Sophie; Litman, Minna; Jespersen, Bo; Kolko, Miriam; Pinborg, Lars H

    2018-01-01

    To determine visual field defects (VFDs) using methods of varying complexity and to compare the results with subjective symptoms in a population of newly operated temporal lobe epilepsy patients. Forty patients were included in the study; two failed to perform VFD testing. Humphrey Field Analyzer (HFA) perimetry was used as the gold-standard test to detect VFDs. All patients performed a web-based visual field test called Damato Multifixation Campimetry Online (DMCO). A bedside confrontation visual field examination ad modum Donders was extracted from the medical records in 27/38 patients. All participants had a consultation by an ophthalmologist. A questionnaire described the subjective complaints. A VFD in the upper quadrant was demonstrated with HFA in 29 (76%) of the 38 patients after surgery. In the 27 patients tested ad modum Donders, the sensitivity for detecting a VFD was 13%. Eight patients (21%) had a severe VFD similar to a quadrantanopia, thus questioning their permission to drive a car. In this group of patients, a VFD was demonstrated in one of five (sensitivity = 20%) ad modum Donders and in seven of eight (sensitivity = 88%) with DMCO. Subjective symptoms were reported by only 28% of the patients with a VFD and by two of eight (sensitivity = 25%) with a severe VFD. Most patients (86%) considered VFD information mandatory. VFDs continue to be a frequent adverse event after epilepsy surgery in the medial temporal lobe and may affect the permission to drive a car in at least one in five patients. Subjective symptoms and bedside visual field testing ad modum Donders are not sensitive enough to detect even a severe VFD. Newly developed web-based visual field tests appear sensitive enough to detect a severe VFD, but perimetry remains the gold standard for determining whether the visual standards for driving are fulfilled. Patients consider VFD information mandatory.

  19. Designing a visualization system for hydrological data

    NASA Astrophysics Data System (ADS)

    Fuhrmann, Sven

    2000-02-01

    The field of hydrology, like any other scientific field, is strongly affected by massive technological change. The spread of modern information and communication technology over the last three decades has led to increased collection, availability and use of spatial and temporal digital hydrological data. During a two-year research period, a working group in Muenster developed and applied methods for the visualization of digital hydrological data and the documentation of hydrological models. A low-cost multimedia hydrological visualization system (HydroVIS) for the Weser river catchment was developed. The research group designed HydroVIS under freeware constraints and aimed to show what kinds of multimedia visualization techniques can be used effectively in a nonprofit hydrological visualization system. The system's visual components include electronic maps, temporal and non-temporal cartographic animations, the display of geologic profiles, interactive diagrams, and hypertext including photographs and tables.

  20. About Hemispheric Differences in the Processing of Temporal Intervals

    ERIC Educational Resources Information Center

    Grondin, S.; Girard, C.

    2005-01-01

    The purpose of the present study was to identify differences between cerebral hemispheres for processing temporal intervals ranging from 0.9 to 1.4 s. The intervals to be judged were marked by series of brief visual signals located in the left or the right visual field. Series of three (two standards and one comparison) or five intervals (four…

  1. Lateralization of spatial rather than temporal attention underlies the left hemifield advantage in rapid serial visual presentation.

    PubMed

    Asanowicz, Dariusz; Kruse, Lena; Śmigasiewicz, Kamila; Verleger, Rolf

    2017-11-01

    In bilateral rapid serial visual presentation (RSVP), the second of two targets, T1 and T2, is better identified in the left visual field (LVF) than in the right visual field (RVF). This LVF advantage may reflect hemispheric asymmetry in temporal attention and/or in spatial orienting of attention. Participants performed two tasks: the "standard" bilateral RSVP task (Exp. 1) and its unilateral variant (Exps. 1 & 2). In the bilateral task, spatial location was uncertain, thus target identification involved stimulus-driven spatial orienting. In the unilateral task, the targets were presented block-wise in the LVF or RVF only, such that no spatial orienting was needed for target identification. Temporal attention was manipulated in both tasks by varying the T1-T2 lag. The results showed that the LVF advantage disappeared when the involvement of stimulus-driven spatial orienting was eliminated, whereas the manipulation of temporal attention had no effect on the asymmetry. In conclusion, the results do not support the hypothesis of hemispheric asymmetry in temporal attention, and provide further evidence that the LVF advantage reflects right-hemisphere predominance in stimulus-driven orienting of spatial attention. These conclusions fit evidence that temporal attention is implemented by bilateral parietal areas and spatial attention by the right-lateralized ventral frontoparietal network.

  2. A case report of ophthalmic artery emboli secondary to Calcium Hydroxylapatite filler injection for nose augmentation - long-term outcome.

    PubMed

    Cohen, Eyal; Yatziv, Yossi; Leibovitch, Igal; Kesler, Anat; Cnaan, Ran Ben; Klein, Ainat; Goldenberg, Dafna; Habot-Wilner, Zohar

    2016-07-08

    Filler injection for face augmentation has become a common cosmetic procedure over the last decades. In this case report we describe the long-term outcome of a devastating complication: ophthalmic artery emboli following Calcium Hydroxylapatite filler injection to the nose bridge. A healthy 24-year-old woman received a Calcium Hydroxylapatite filler injection to her nose bridge for the correction of nose asymmetry 8 years after rhinoplasty. She developed sudden right ocular pain and visual disturbances. Visual acuity was 20/20 in both eyes, and the visual field of the right eye showed an inferior arcuate defect with fixation sparing and a supero-temporal central scotoma. Examination revealed marked periorbital edema and hematoma, ptosis, limitation of ocular movements, an infero-temporal branch retinal artery occlusion, and multiple choroidal emboli. Eighteen months after the initial presentation, the ptosis and eye movement limitation had resolved and the choroidal emboli had been absorbed almost completely. However, visual acuity had declined to 20/60, the visual field showed severe progressive deterioration with only a central and supero-nasal remnant, and the optic disc had become pale. Cosmetic injection of calcium hydroxylapatite into the nose bridge can result in arterial emboli to the ophthalmic system with optic nerve, retinal, and choroidal involvement, causing severe long-term impairment of visual acuity and visual field.

  3. The role of awake craniotomy in reducing intraoperative visual field deficits during tumor surgery

    PubMed Central

    Wolfson, Racheal; Soni, Neil; Shah, Ashish H.; Hosein, Khadil; Sastry, Ananth; Bregy, Amade; Komotar, Ricardo J.

    2015-01-01

    Objective: Homonymous hemianopia due to damage to the optic radiations or visual cortex is a possible consequence of tumor resection involving the temporal or occipital lobes. The purpose of this review is to present and analyze a series of studies regarding the use of awake craniotomy (AC) to decrease visual field deficits following neurosurgery. Materials and Methods: A literature search was performed using the Medline and PubMed databases from 1970 to 2014 for studies describing uses of AC other than intraoperative motor/somatosensory/language mapping, with a focus on visual field mapping. Results: For the 17 patients analyzed in this study, 14 surgeries resulted in quadrantanopia, 1 in hemianopia, and 2 in no visual deficits. Overall, patient satisfaction with AC was high, and AC served as a means to reduce surgery-related complications and costs related to the procedure. Conclusion: AC is a safe and tolerable procedure that can be used effectively to map the optic radiations and the visual cortices in order to preserve visual function during resection of tumors infiltrating the temporal and occipital lobes. In the majority of cases, a homonymous hemianopia was prevented and patients were left with a quadrantanopia that did not interfere with daily function. PMID:26396597

  4. Gender-specific effects of emotional modulation on visual temporal order thresholds.

    PubMed

    Liang, Wei; Zhang, Jiyuan; Bao, Yan

    2015-09-01

    Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated increased thresholds under a positive emotional state and reduced thresholds under a negative emotional state. In addition, emotions influenced female subjects more intensely than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, possibly associated with differences in emotional stability.

  5. An fMRI Study of the Neural Systems Involved in Visually Cued Auditory Top-Down Spatial and Temporal Attention

    PubMed Central

    Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong

    2012-01-01

    Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time-interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial locations and attention directed to time intervals are unclear. Using functional magnetic resonance imaging (fMRI), we measured the activations elicited by a cue-target paradigm in which a visual cue directed attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800

  6. Improved detection following Neuro-Eye Therapy in patients with post-geniculate brain damage.

    PubMed

    Sahraie, Arash; Macleod, Mary-Joan; Trevethan, Ceri T; Robson, Siân E; Olson, John A; Callaghan, Paula; Yip, Brigitte

    2010-09-01

    Damage to the optic radiation or the occipital cortex results in loss of vision in the contralateral visual field, termed partial cortical blindness or hemianopia. Previously, we have demonstrated that stimulation in the field defect using visual stimuli with optimal properties for blindsight detection can lead to increases in visual sensitivity within the blind field of a group of patients. The present study aimed to extend the previous work by investigating the effect of positive feedback on recovery of visual sensitivity. Patients' abilities to detect a range of spatial frequencies within their field defect were determined using a temporal two-alternative forced-choice technique, before and after a period of visual training (n = 4). Patients underwent Neuro-Eye Therapy, which involved detection of temporally modulated spatial grating patches at specific retinal locations within their field defect. Three patients showed improved detection ability following visual training. Based on our previous studies, we had hypothesised that should the occipital brain lesion extend anteriorly to the thalamus, little recovery would be expected. Here, we describe one such patient, who showed no improvement after extensive training. The present study provides further evidence that recovery (a) can be gradual and may require a large number of training sessions, (b) can be accelerated using positive feedback, and (c) may be less likely to take place if the occipital damage extends anteriorly to the thalamus.

  7. Dioptric defocus maps across the visual field for different indoor environments.

    PubMed

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2018-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal may arise from defocus not only in the fovea but across the whole visual field. Myopia might therefore be better predicted by spatio-temporally mapping the 'environmental defocus' over the visual field. At present, no devices are available that could provide this information. A 'Kinect sensor v1' camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying 'indoor defocus error signals' across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field, and 'defocus maps' were generated for various scenes and tasks.
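    A minimal sketch of how such a defocus map could be computed from a registered depth frame and the tracked fixation point follows; the abstract does not describe the authors' actual processing, so the function and variable names here are hypothetical:

      import numpy as np

      def dioptric_defocus_map(depth_m, fix_row, fix_col):
          """Defocus of every scene point relative to the fixated point.

          depth_m : 2-D array of scene distances in metres (e.g. a Kinect
                    depth frame registered to the eye tracker's scene camera).
          fix_row, fix_col : pixel coordinates of the current fixation,
                    taken as the in-focus reference (the fovea).
          Returns a map of dioptric differences 1/d_fix - 1/d_point (in D);
          positive values mean a point lies farther away than fixation.
          """
          depth = np.where(depth_m > 0, depth_m, np.nan)  # mask invalid pixels
          fixation_vergence = 1.0 / depth[fix_row, fix_col]
          return fixation_vergence - 1.0 / depth

      # Hypothetical usage on a synthetic 480 x 640 depth frame:
      frame = np.full((480, 640), 2.0)   # background wall at 2 m
      frame[200:300, 250:400] = 0.5      # near object at 0.5 m
      defocus = dioptric_defocus_map(frame, 50, 50)  # fixating the wall
      print(defocus.min())               # -1.5 D for the near object relative to fixation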

  8. Functional Architecture for Disparity in Macaque Inferior Temporal Cortex and Its Relationship to the Architecture for Faces, Color, Scenes, and Visual Field

    PubMed Central

    Verhoef, Bram-Ernst; Bohon, Kaitlin S.

    2015-01-01

    Binocular disparity is a powerful depth cue for object perception. The computations for object vision culminate in inferior temporal cortex (IT), but the functional organization for disparity in IT is unknown. Here we addressed this question by measuring fMRI responses in alert monkeys to stimuli that appeared in front of (near), behind (far), or at the fixation plane. We discovered three regions that showed preferential responses for near and far stimuli, relative to zero-disparity stimuli at the fixation plane. These “near/far” disparity-biased regions were located within dorsal IT, as predicted by microelectrode studies, and on the posterior inferotemporal gyrus. In a second analysis, we instead compared responses to near stimuli with responses to far stimuli and discovered a separate network of “near” disparity-biased regions that extended along the crest of the superior temporal sulcus. We also measured in the same animals fMRI responses to faces, scenes, color, and checkerboard annuli at different visual field eccentricities. Disparity-biased regions defined in either analysis did not show a color bias, suggesting that disparity and color contribute to different computations within IT. Scene-biased regions responded preferentially to near and far stimuli (compared with stimuli without disparity) and had a peripheral visual field bias, whereas face patches had a marked near bias and a central visual field bias. These results support the idea that IT is organized by a coarse eccentricity map, and show that disparity likely contributes to computations associated with both central (face processing) and peripheral (scene processing) visual field biases, but likely does not contribute much to computations within IT that are implicated in processing color. PMID:25926470

  9. Psychophysical Evaluation of Achromatic and Chromatic Vision of Workers Chronically Exposed to Organic Solvents

    PubMed Central

    Lacerda, Eliza Maria da Costa Brito; Lima, Monica Gomes; Rodrigues, Anderson Raiol; Teixeira, Cláudio Eduardo Correa; de Lima, Lauro José Barata; Ventura, Dora Fix; Silveira, Luiz Carlos de Lima

    2012-01-01

    The purpose of this paper was to evaluate the achromatic and chromatic vision of workers chronically exposed to organic solvents using psychophysical methods. Thirty-one gas station workers (31.5 ± 8.4 years old) were evaluated. The psychophysical tests comprised achromatic tests (Snellen chart, spatial and temporal contrast sensitivity, and visual perimetry) and chromatic tests (Ishihara's test, color discrimination ellipses, and the Farnsworth-Munsell 100 hue test, FM100). Spatial contrast sensitivities of exposed workers were lower than those of controls at spatial frequencies of 20 and 30 cpd, whilst temporal contrast sensitivity was preserved. Visual field losses were found at 10–30 degrees of eccentricity in the solvent-exposed workers. The exposed workers had higher FM100 error scores and larger color discrimination ellipse areas than the controls. Workers occupationally exposed to organic solvents had abnormal visual functions, mainly color vision losses and visual field constriction. PMID:22220188

  10. Dioptric defocus maps across the visual field for different indoor environments

    PubMed Central

    García, Miguel García; Ohlendorf, Arne; Schaeffel, Frank; Wahl, Siegfried

    2017-01-01

    One of the factors proposed to regulate eye growth is the error signal derived from defocus in the retina; this signal may arise from defocus not only in the fovea but across the whole visual field. Myopia might therefore be better predicted by spatio-temporally mapping the ‘environmental defocus’ over the visual field. At present, no devices are available that could provide this information. A ‘Kinect sensor v1’ camera (Microsoft Corp.) and a portable eye tracker were used to develop a system for quantifying ‘indoor defocus error signals’ across the central 58° of the visual field. Dioptric differences relative to the fovea (assumed to be in focus) were recorded over the visual field, and ‘defocus maps’ were generated for various scenes and tasks. PMID:29359108

  11. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  12. Visual abilities in two raptors with different ecology.

    PubMed

    Potier, Simon; Bonadonna, Francesco; Kelber, Almut; Martin, Graham R; Isard, Pierre-François; Dulaurent, Thomas; Duriez, Olivier

    2016-09-01

    Differences in visual capabilities are known to reflect differences in foraging behaviour even among closely related species. Among birds, the foraging of diurnal raptors is assumed to be guided mainly by vision but their foraging tactics include both scavenging upon immobile prey and the aerial pursuit of highly mobile prey. We studied how visual capabilities differ between two diurnal raptor species of similar size: Harris's hawks, Parabuteo unicinctus, which take mobile prey, and black kites, Milvus migrans, which are primarily carrion eaters. We measured visual acuity, foveal characteristics and visual fields in both species. Visual acuity was determined using a behavioural training technique; foveal characteristics were determined using ultra-high resolution spectral-domain optical coherence tomography (OCT); and visual field parameters were determined using an ophthalmoscopic reflex technique. We found that these two raptors differ in their visual capacities. Harris's hawks have a visual acuity slightly higher than that of black kites. Among the five Harris's hawks tested, individuals with higher estimated visual acuity made more horizontal head movements before making a decision. This may reflect an increase in the use of monocular vision. Harris's hawks have two foveas (one central and one temporal), while black kites have only one central fovea and a temporal area. Black kites have a wider visual field than Harris's hawks. This may facilitate the detection of conspecifics when they are scavenging. These differences in the visual capabilities of these two raptors may reflect differences in the perceptual demands of their foraging behaviours.

  13. The SCHEIE Visual Field Grading System

    PubMed Central

    Sankar, Prithvi S.; O’Keefe, Laura; Choi, Daniel; Salowe, Rebecca; Miller-Ellis, Eydie; Lehman, Amanda; Addis, Victoria; Ramakrishnan, Meera; Natesh, Vikas; Whitehead, Gideon; Khachatryan, Naira; O’Brien, Joan

    2017-01-01

    Objective: No method of grading visual field (VF) defects has been widely accepted throughout the glaucoma community. The SCHEIE (Systematic Classification of Humphrey visual fields-Easy Interpretation and Evaluation) grading system for glaucomatous visual fields was created to convey qualitative and quantitative information regarding visual field defects in an objective, reproducible, and easily applicable manner for research purposes. Methods: The SCHEIE grading system is composed of a qualitative and a quantitative score. The qualitative score consists of designation in one or more of the following categories: normal, central scotoma, paracentral scotoma, paracentral crescent, temporal quadrant, nasal quadrant, peripheral arcuate defect, expansive arcuate, or altitudinal defect. The quantitative component incorporates the Humphrey visual field index (VFI), location of visual defects for superior and inferior hemifields, and blind spot involvement. Accuracy and speed at grading using the qualitative and quantitative components were calculated for non-physician graders. Results: Graders had a median accuracy of 96.67% for their qualitative scores and a median accuracy of 98.75% for their quantitative scores. Graders took a mean of 56 seconds per visual field to assign a qualitative score and 20 seconds per visual field to assign a quantitative score. Conclusion: The SCHEIE grading system is a reproducible tool that combines qualitative and quantitative measurements to grade glaucomatous visual field defects. The system aims to standardize clinical staging and to make specific visual field defects more easily identifiable. Specific patterns of visual field loss may also be associated with genetic variants in future genetic analyses. PMID:28932621

  14. The Right Hemisphere Advantage in Visual Change Detection Depends on Temporal Factors

    ERIC Educational Resources Information Center

    Spotorno, Sara; Faure, Sylvane

    2011-01-01

    What accounts for the Right Hemisphere (RH) functional superiority in visual change detection? An original task which combines one-shot and divided visual field paradigms allowed us to direct change information initially to the RH or the Left Hemisphere (LH) by deleting, respectively, an object included in the left or right half of a scene…

  15. Assessment of the vision-specific quality of life using clustered visual field in glaucoma patients.

    PubMed

    Sawada, Hideko; Yoshino, Takaiko; Fukuchi, Takeo; Abe, Haruki

    2014-02-01

    To investigate the significance of vision-specific quality of life (QOL) in glaucoma patients based on the location of visual field defects. We examined 336 eyes of 168 patients. The 25-item National Eye Institute Visual Function Questionnaire was used to evaluate patients' QOL. Visual field testing was performed using the Humphrey Field Analyzer; the visual field was divided into 10 clusters. We defined the eye with better mean deviation as the better eye and the fellow eye as the worse eye. A single linear regression analysis was applied to assess the significance of the relationship between QOL and the clustered visual field. The strongest correlation was observed in the lower paracentral visual field in the better eye. The lower peripheral visual field in the better eye also showed a good correlation. Correlation coefficients in the better eye were generally higher than those in the worse eye. For driving, the upper temporal visual field in the better eye was the most strongly correlated (r=0.509). For role limitation and peripheral vision, the lower peripheral visual field in the better eye had the highest correlation coefficients at 0.459 and 0.425, respectively. Overall, clusters in the lower hemifield in the better eye were more strongly correlated with QOL than those in the worse eye. In particular, the lower paracentral visual field in the better eye was correlated most strongly of all. Driving, however, strongly correlated with the upper hemifield in the better eye.
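
    A minimal sketch of this kind of cluster-wise analysis, assuming a hypothetical table of one QOL score and ten cluster mean sensitivities per patient (all variable names and data below are illustrative, not the study's):

```python
# Hedged sketch: regress a QOL score on the mean sensitivity of each visual
# field cluster, one simple linear regression per cluster (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_clusters = 168, 10
cluster_sens = rng.normal(25, 5, size=(n_patients, n_clusters))      # dB, fake
qol = 50 + 1.2 * cluster_sens[:, 3] + rng.normal(0, 8, n_patients)   # fake score

for k in range(n_clusters):
    res = stats.linregress(cluster_sens[:, k], qol)
    print(f"cluster {k + 1}: r = {res.rvalue:.3f}, p = {res.pvalue:.3g}")
```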

  16. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  17. Sound imaging of nocturnal animal calls in their natural habitat.

    PubMed

    Mizumoto, Takeshi; Aihara, Ikkyu; Otsuka, Takuma; Takeda, Ryu; Aihara, Kazuyuki; Okuno, Hiroshi G

    2011-09-01

    We present a novel method for imaging acoustic communication between nocturnal animals. Investigating the spatio-temporal calling behavior of nocturnal animals, e.g., frogs and crickets, has been difficult because of the need to distinguish many animals' calls in noisy environments without being able to see them. Our method visualizes the spatial and temporal dynamics using dozens of sound-to-light conversion devices (called "Firefly") and an off-the-shelf video camera. The Firefly, which consists of a microphone and a light-emitting diode, emits light when it captures nearby sound. Deploying dozens of Fireflies in a target area, we record the calls of multiple individuals through the video camera. We conducted two experiments, one indoors and the other in the field, using Japanese tree frogs (Hyla japonica). The indoor experiment demonstrates that our method correctly visualizes Japanese tree frogs' calling behavior, confirming the known behavior that two frogs call either synchronously or in anti-phase. The field experiment (in a rice paddy where Japanese tree frogs live) visualized the same calling behavior, confirming anti-phase synchronization in the field. The experimental results confirm that our method can visualize the calling behavior of nocturnal animals in their natural habitat.

  18. Frequency-following and connectivity of different visual areas in response to contrast-reversal stimulation.

    PubMed

    Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J

    2006-01-01

    The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation with this pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.

  19. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    PubMed Central

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2007-01-01

    The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput, causing a large central visual field defect and bilateral tympanic membrane ruptures, is described. Owing to extreme agitation, the patient was placed in a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning, and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. Four months after the accident, she developed a psychological reaction consisting of nightmares with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning flash were retrospectively retraced. PMID:17369595

  20. A lightning strike to the head causing a visual cortex defect with simple and complex visual hallucinations

    PubMed Central

    Kleiter, Ingo; Luerding, Ralf; Diendorfer, Gerhard; Rek, Helga; Bogdahn, Ulrich; Schalke, Berthold

    2009-01-01

    The case of a 23-year-old mountaineer who was hit by a lightning strike to the occiput, causing a large central visual field defect and bilateral tympanic membrane ruptures, is described. Owing to extreme agitation, the patient was placed in a drug-induced coma for 3 days. After extubation, she experienced simple and complex visual hallucinations for several days, but otherwise largely recovered. Neuropsychological tests revealed deficits in fast visual detection tasks and non-verbal learning and indicated a right temporal lobe dysfunction, consistent with a right temporal focus on electroencephalography. At 4 months after the accident, she developed a psychological reaction consisting of nightmares, with reappearance of the complex visual hallucinations and a depressive syndrome. Using the European Cooperation for Lightning Detection network, a meteorological system for lightning surveillance, the exact geographical location and nature of the lightning strike were retrospectively retraced. PMID:21734915

  1. Can Blindsight Be Superior to "Sighted-Sight"?

    ERIC Educational Resources Information Center

    Trevethan, Ceri T.; Sahraie, Arash; Weiskrantz, Larry

    2007-01-01

    DB, the first blindsight case to be tested extensively (Weiskrantz, 1986) has demonstrated the ability to detect and discriminate a range of visual stimuli presented within his perimetrically blind visual field defect. In a temporal two alternative forced choice (2AFC) detection experiment we have investigated the limits of DB's detection ability…

  2. Impaired temporal, not just spatial, resolution in amblyopia.

    PubMed

    Spang, Karoline; Fahle, Manfred

    2009-11-01

    In amblyopia, neuronal deficits degrade spatial vision, including visual acuity, possibly because of a lack of use-dependent fine-tuning of afferents to the visual cortex during infancy; but temporal processing may deteriorate as well. Temporal, rather than spatial, resolution was investigated in patients with amblyopia by means of a task based on time-defined figure-ground segregation. Patients had to indicate the quadrant of the visual field where a purely time-defined square appeared. The results showed a clear decrease in temporal resolution of patients' amblyopic eyes compared with the dominant eyes in this task. The extent of this decrease in figure-ground segregation based on time of motion onset correlated only loosely with the decrease in spatial resolution and spanned a smaller range than did the spatial loss. Control experiments with artificially induced blur in normal observers confirmed that the decrease in temporal resolution was not simply due to the acuity loss. Amblyopia thus impairs not only spatial resolution but also temporal processing, such as time-based figure-ground segregation, even at high stimulus contrasts. This finding suggests that the realm of neuronal processes that may be disturbed in amblyopia is larger than originally thought.

  3. Retinal nerve fiber layer thickness measured with optical coherence tomography is related to visual function in glaucomatous eyes.

    PubMed

    El Beltagi, Tarek A; Bowd, Christopher; Boden, Catherine; Amini, Payam; Sample, Pamela A; Zangwill, Linda M; Weinreb, Robert N

    2003-11-01

    To determine the relationship between areas of glaucomatous retinal nerve fiber layer thinning identified by optical coherence tomography and areas of decreased visual field sensitivity identified by standard automated perimetry in glaucomatous eyes. Retrospective observational case series. Forty-three patients with glaucomatous optic neuropathy identified by optic disc stereo photographs and standard automated perimetry mean deviations >-8 dB were included. Participants were imaged with optical coherence tomography within 6 months of reliable standard automated perimetry testing. The location and number of optical coherence tomography clock hour retinal nerve fiber layer thickness measures outside normal limits were compared with the location and number of standard automated perimetry visual field zones outside normal limits. Further, the relationship between the deviation from normal optical coherence tomography-measured retinal nerve fiber layer thickness at each clock hour and the average pattern deviation in each visual field zone was examined by using linear regression (R(2)). The retinal nerve fiber layer areas most frequently outside normal limits were the inferior and inferior temporal regions. The least sensitive visual field zones were in the superior hemifield. Linear regression results (R(2)) showed that deviation from the normal retinal nerve fiber layer thickness at optical coherence tomography clock hour positions 6 o'clock, 7 o'clock, and 8 o'clock (inferior and inferior temporal) was best correlated with standard automated perimetry pattern deviation in visual field zones corresponding to the superior arcuate and nasal step regions (R(2) range, 0.34-0.57). These associations were much stronger than those between clock hour position 6 o'clock and the visual field zone corresponding to the inferior nasal step region (R(2) = 0.01). Localized retinal nerve fiber layer thinning, measured by optical coherence tomography, is topographically related to decreased localized standard automated perimetry sensitivity in glaucoma patients.

  4. Enhancement of Temporal Resolution and BOLD Sensitivity in Real-Time fMRI using Multi-Slab Echo-Volumar Imaging

    PubMed Central

    Posse, Stefan; Ackley, Elena; Mutihac, Radu; Rick, Jochen; Shane, Matthew; Murray-Krezan, Cristina; Zaitsev, Maxim; Speck, Oliver

    2012-01-01

    In this study, a new approach to high-speed fMRI using multi-slab echo-volumar imaging (EVI) is developed that minimizes geometrical image distortion and spatial blurring, and enables nonaliased sampling of physiological signal fluctuation to increase BOLD sensitivity compared to conventional echo-planar imaging (EPI). Real-time fMRI using whole brain 4-slab EVI with 286 ms temporal resolution (4 mm isotropic voxel size) and partial brain 2-slab EVI with 136 ms temporal resolution (4×4×6 mm3 voxel size) was performed on a clinical 3 Tesla MRI scanner equipped with 12-channel head coil. Four-slab EVI of visual and motor tasks significantly increased mean (visual: 96%, motor: 66%) and maximum t-score (visual: 263%, motor: 124%) and mean (visual: 59%, motor: 131%) and maximum (visual: 29%, motor: 67%) BOLD signal amplitude compared with EPI. Time domain moving average filtering (2 s width) to suppress physiological noise from cardiac and respiratory fluctuations further improved mean (visual: 196%, motor: 140%) and maximum (visual: 384%, motor: 200%) t-scores and increased extents of activation (visual: 73%, motor: 70%) compared to EPI. Similar sensitivity enhancement, which is attributed to high sampling rate at only moderately reduced temporal signal-to-noise ratio (mean: − 52%) and longer sampling of the BOLD effect in the echo-time domain compared to EPI, was measured in auditory cortex. Two-slab EVI further improved temporal resolution for measuring task-related activation and enabled mapping of five major resting state networks (RSNs) in individual subjects in 5 min scans. The bilateral sensorimotor, the default mode and the occipital RSNs were detectable in time frames as short as 75 s. In conclusion, the high sampling rate of real-time multi-slab EVI significantly improves sensitivity for studying the temporal dynamics of hemodynamic responses and for characterizing functional networks at high field strength in short measurement times. PMID:22398395
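
    As an illustration of the temporal filtering step described above, the sketch below applies a 2-s moving average to a single rapidly sampled voxel time series; the sampling interval and the synthetic data are assumptions for illustration, not values or code from the study.

```python
# Minimal sketch (assumed parameters): boxcar-smooth a voxel time series
# sampled every `tr_s` seconds with a window of `window_s` seconds.
import numpy as np

def moving_average_filter(ts, tr_s=0.286, window_s=2.0):
    width = max(1, int(round(window_s / tr_s)))   # samples per window
    kernel = np.ones(width) / width
    return np.convolve(ts, kernel, mode="same")

# Example: 5 minutes of synthetic data at 286-ms temporal resolution
signal = np.random.randn(int(300 / 0.286))
smoothed = moving_average_filter(signal)
```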

  5. Noninvasive studies of human visual cortex using neuromagnetic techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aine, C.J.; George, J.S.; Supek, S.

    1990-01-01

    The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp-recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs.

  6. Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain

    PubMed Central

    Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.

    2016-01-01

    Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm2, precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm2) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754

  7. Presumed topiramate retinopathy: a case report.

    PubMed

    Yeung, Tiffany L M; Li, Patrick S H; Li, Kenneth K W

    2016-08-01

    We report a case of peripheral pigmentary retinopathy and visual field loss following topiramate use for uncontrolled seizures. Such side effects have not been well documented despite the increasing use of topiramate in the past 10 years. A thorough search of available English literature revealed only a small number of reports of topiramate-induced retinopathy or visual field defects in humans. One similar case has been described. We are concerned about the possible rare instances of this occurrence in future patients and hence would like to propose a presumed correlation. A 48-year-old Chinese woman developed blurred vision after 9 months of topiramate use. Her visual acuity dropped from 1.2 to 0.7 in both eyes, with bilateral diffuse pigmentary retinopathy and a constricted visual field. Despite an improvement in visual acuity after cessation of the drug, the other clinical findings remained. The temporal relationship between the initiation of topiramate and the visual disturbance suggests that topiramate could be the cause of such signs and symptoms. Topiramate potentially causes pigmentary retinopathy and constricted visual field.

  8. Local and Global Correlations between Neurons in the Middle Temporal Area of Primate Visual Cortex.

    PubMed

    Solomon, Selina S; Chen, Spencer C; Morley, John W; Solomon, Samuel G

    2015-09-01

    In humans and other primates, the analysis of visual motion includes populations of neurons in the middle-temporal (MT) area of visual cortex. Motion analysis will be constrained by the structure of neural correlations in these populations. Here, we use multi-electrode arrays to measure correlations in anesthetized marmoset, a New World monkey where area MT lies exposed on the cortical surface. We measured correlations in the spike count between pairs of neurons and within populations of neurons, for moving dot fields and moving gratings. Correlations were weaker in area MT than in area V1. The magnitude of correlations in area MT diminished with distance between receptive fields, and difference in preferred direction. Correlations during presentation of moving gratings were stronger than those during presentation of moving dot fields, extended further across cortex, and were less dependent on the functional properties of neurons. Analysis of the timescales of correlation suggests presence of 2 mechanisms. A local mechanism, associated with near-synchronous spiking activity, is strongest in nearby neurons with similar direction preference and is independent of visual stimulus. A global mechanism, operating over larger spatial scales and longer timescales, is independent of direction preference and is modulated by the type of visual stimulus presented. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. High-speed imaging of submerged jet: visualization analysis using proper orthogonality decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, a submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was performed to determine the region in which fluorescence intensity depends linearly on dye concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The image sequence of the visualized jet flow field was then subjected to snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed on the unsteady flow, e.g., the axisymmetric mode and the helical mode, were identified from the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and selected POD modes was performed to reveal the convective motion of the embedded vortical structures. National Natural Science Foundation of China.
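
    For readers unfamiliar with the snapshot POD step, the sketch below decomposes a stack of visualization frames into spatial modes and temporal coefficients; the data layout and all names are assumptions for illustration, not the authors' implementation.

```python
# Snapshot POD sketch: `frames` is a stack of grayscale images, (n, ny, nx).
import numpy as np

def snapshot_pod(frames, n_modes=5):
    n, ny, nx = frames.shape
    X = frames.reshape(n, -1).astype(float)
    X -= X.mean(axis=0)                        # remove the time-mean field
    C = X @ X.T / n                            # n x n snapshot correlation matrix
    eigval, eigvec = np.linalg.eigh(C)
    order = np.argsort(eigval)[::-1]           # strongest modes first
    eigval, eigvec = eigval[order], eigvec[:, order]
    modes = X.T @ eigvec[:, :n_modes]          # spatial modes, (ny*nx, n_modes)
    modes /= np.linalg.norm(modes, axis=0)
    coeffs = X @ modes                         # temporal coefficients, (n, n_modes)
    return modes.T.reshape(n_modes, ny, nx), coeffs, eigval

modes, coeffs, energy = snapshot_pod(np.random.rand(200, 64, 64))
```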

  10. An insect-inspired model for visual binding I: learning objects and their characteristics.

    PubMed

    Northcutt, Brandon D; Dyhr, Jonathan P; Higgins, Charles M

    2017-04-01

    Visual binding is the process of associating the responses of visual interneurons in different visual submodalities, all of which are responding to the same object in the visual field. Recently identified neuropils in the insect brain, termed optic glomeruli, reside just downstream of the optic lobes and have an internal organization that could support visual binding. Working from anatomical similarities between optic and olfactory glomeruli, we have developed a model of visual binding based on common temporal fluctuations among signals of independent visual submodalities. Here we describe and demonstrate a neural network model capable both of refining the selectivity of visual information within a given visual submodality and of associating visual signals produced by different objects in the visual field, by developing inhibitory neural synaptic weights representing the visual scene. We also show that this model is consistent with initial physiological data from optic glomeruli. Further, we discuss how this neural network model may be implemented in optic glomeruli at a neuronal level.

  11. Parahippocampectomy as a New Surgical Approach to Mesial Temporal Lobe Epilepsy Caused By Hippocampal Sclerosis: A Pilot Randomized Comparative Clinical Trial.

    PubMed

    Alonso-Vanegas, Mario Arturo; Freire Carlier, Iván D; San-Juan, Daniel; Martínez, Alma Rosa; Trenado, Carlos

    2018-02-01

    The parahippocampal gyrus plays an important role in the epileptogenic pathways of mesial temporal lobe epilepsy caused by hippocampal sclerosis (mTLE-HS); its resection could prevent epileptic seizures with fewer complications. This study evaluates the initial efficacy and safety of the anterior temporal lobectomy (ATL), selective amygdalohippocampectomy (SAH), and parahippocampectomy (PHC) surgical approaches in mTLE-HS. A randomized comparative pilot clinical trial (2008-2011) was performed that included patients with mTLE-HS who underwent ATL, trans-T3 SAH, and trans-T3 PHC. Their sociodemographic characteristics, visual field profiles, verbal and visual memory profiles, and Engel scale outcomes at baseline and at 1 and 5 years are described, using descriptive statistics along with parametric and nonparametric tests. Forty-three patients with a mean age of 35.2 years (18-56 years), 65% female, were analyzed: 14 underwent PHC, 14 ATL, and 15 SAH. The following percentages refer to those patients who were seizure free (Engel class IA) at 1-year and 5-year follow-up, respectively: 42.9% PHC, 71.4% ATL, and 60% SAH (P = 0.304); 28.6% PHC, 50% ATL, and 53.3% SAH (P = 0.353). Postoperative visual field deficits occurred in 0% PHC, 85.7% ATL, and 46.7% SAH (P = 0.001). Verbal and/or visual memory worsening was present in 21.3% PHC, 42.8% ATL, and 33.4% SAH (P = 0.488), and preoperative and postoperative visual memory scores differed significantly in the SAH group only (P = 0.046). PHC, ATL, and SAH show similar preliminary efficacy in short-term seizure-free rates in patients with mTLE-HS. However, PHC efficacy decreases in the long term compared with the other surgical techniques. PHC does not produce postoperative visual field deficits. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  13. Computing and visualizing time-varying merge trees for high-dimensional data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Heine, Christian; Weber, Gunther H.

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
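
    The merge-tree bookkeeping itself can be illustrated with a few lines of union-find; the sketch below builds the join tree of a scalar field on a 1-D chain of vertices by sweeping from high to low values, a deliberate simplification of the arbitrary-dimensional, time-varying setting addressed in the paper.

```python
# Join-tree sketch: sweep vertices from high to low value and record where
# components (born at local maxima) merge. 1-D neighbours only, for brevity.
import numpy as np

def join_tree_1d(values):
    """Return merge events as (root_kept, root_absorbed, threshold_value)."""
    parent, events = {}, []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]      # path halving
            v = parent[v]
        return v

    for v in np.argsort(values)[::-1]:         # descending threshold sweep
        v = int(v)
        parent[v] = v                          # v starts its own component
        for nb in (v - 1, v + 1):              # merge with already-seen neighbours
            if nb in parent:
                a, b = find(v), find(nb)
                if a != b:
                    events.append((a, b, float(values[v])))
                    parent[b] = a
    return events

print(join_tree_1d(np.array([1.0, 3.0, 0.5, 2.0, 0.1, 4.0])))
```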

  14. Electrophysiological evidence for the left-lateralized effect of language on preattentive categorical perception of color

    PubMed Central

    Mo, Lei; Xu, Guiping; Kay, Paul; Tan, Li-Hai

    2011-01-01

    Previous studies have shown that the effect of language on categorical perception of color is stronger when stimuli are presented in the right visual field than in the left. To examine whether this lateralized effect occurs preattentively at an early stage of processing, we monitored the visual mismatch negativity, which is a component of the event-related potential of the brain to an unfamiliar stimulus among a temporally presented series of stimuli. In the oddball paradigm we used, the deviant stimuli were unrelated to the explicit task. A significant interaction between color-pair type (within-category vs. between-category) and visual field (left vs. right) was found. The amplitude of the visual mismatch negativity component evoked by the within-category deviant was significantly smaller than that evoked by the between-category deviant when displayed in the right visual field, but no such difference was observed for the left visual field. This result constitutes electroencephalographic evidence that the lateralized Whorf effect per se occurs out of awareness and at an early stage of processing. PMID:21844340

  15. Retinal and Optic Nerve Degeneration in Patients with Multiple Sclerosis Followed up for 5 Years.

    PubMed

    Garcia-Martin, Elena; Ara, Jose R; Martin, Jesus; Almarcegui, Carmen; Dolz, Isabel; Vilades, Elisa; Gil-Arribas, Laura; Fernandez, Francisco J; Polo, Vicente; Larrosa, Jose M; Pablo, Luis E; Satue, Maria

    2017-05-01

    To quantify retinal nerve fiber layer (RNFL) changes in patients with multiple sclerosis (MS) and healthy controls with a 5-year follow-up and to analyze correlations between disability progression and RNFL degeneration. Observational and longitudinal study. One hundred patients with relapsing-remitting MS and 50 healthy controls. All participants underwent a complete ophthalmic and electrophysiologic exploration and were re-evaluated annually for 5 years. Visual acuity (Snellen chart), color vision (Ishihara pseudoisochromatic plates), visual field examination, optical coherence tomography (OCT), scanning laser polarimetry (SLP), and visual evoked potentials. Expanded Disability Status Scale (EDSS) scores, disease duration, treatments, prior optic neuritis episodes, and quality of life (QOL; based on the 54-item Multiple Sclerosis Quality of Life Scale score). Optical coherence tomography (OCT) revealed changes in all RNFL thicknesses in both groups. In the MS group, changes were detected in average thickness and in the mean deviation using the GDx-VCC nerve fiber analyzer (Laser Diagnostic Technologies, San Diego, CA) and in the P100 latency of visual evoked potentials; no changes were detected in visual acuity, color vision, or visual fields. Optical coherence tomography showed greater differences in the inferior and temporal RNFL thicknesses in both groups. In MS patients only, OCT revealed a moderate correlation between the increase in EDSS and temporal and superior RNFL thinning. Temporal RNFL thinning based on OCT results was correlated moderately with decreased QOL. Multiple sclerosis patients exhibit a progressive axonal loss in the optic nerve fiber layer. Retinal nerve fiber layer thinning based on OCT results is a useful marker for assessing MS progression and correlates with increased disability and reduced QOL. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  16. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  17. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  18. Relating Standardized Visual Perception Measures to Simulator Visual System Performance

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Sweet, Barbara T.

    2013-01-01

    Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
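
    One concrete example of such a mapping: 20/20 visual acuity corresponds to resolving roughly 1 arcminute, i.e., about 30 cycles per degree, which sets a lower bound on display resolution. The helper below is an illustrative back-of-envelope calculation, not a formula taken from the paper.

```python
# Rough display-resolution requirement implied by a given decimal acuity,
# assuming 2 pixels per resolvable cycle (Nyquist) and 30 cpd at 20/20.
def pixels_needed(field_of_view_deg, acuity_decimal=1.0):
    cycles_per_deg = 30.0 * acuity_decimal
    px_per_deg = 2.0 * cycles_per_deg
    return field_of_view_deg * px_per_deg

print(pixels_needed(60))   # a 60-deg-wide channel matched to 20/20 -> 3600 px
```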

  19. Receptoral and Neural Aliasing.

    DTIC Science & Technology

    1993-01-30

    standard psychophysical methods. Stereoscopic capability makes VisionWorks ideal for investigating and simulating strabismus and amblyopia, or developing... amblyopia. Electrophysiological and psychophysical response to spatio-temporal and novel stimuli for investigation of visual field deficits

  20. Temporal profile of functional visual rehabilitative outcomes modulated by transcranial direct current stimulation.

    PubMed

    Plow, Ela B; Obretenova, Souzana N; Jackson, Mary Lou; Merabet, Lotfi B

    2012-07-01

    We have previously reported that transcranial direct current stimulation (tDCS) delivered to the occipital cortex enhances visual functional recovery when combined with three months of computer-based rehabilitative training in patients with hemianopia. The principal objective of this study was to evaluate the temporal sequence of effects of tDCS on visual recovery as they appear over the course of training and across different indicators of visual function. Primary objective outcome measures were 1) shifts in visual field border and 2) stimulus detection accuracy within the affected hemifield. These were compared between patients randomized to either vision restoration therapy (VRT) combined with active tDCS or VRT paired with sham tDCS. Training comprised two half-hour sessions, three times a week for three months. Primary outcome measures were collected at baseline (pretest), monthly interim intervals, and at posttest (three months). As secondary outcome measures, contrast sensitivity and reading performance were collected at pretest and posttest time points only. Active tDCS combined with VRT accelerated the recovery of stimulus detection as between-group differences appeared within the first month of training. In contrast, a shift in the visual field border was only evident at posttest (after three months of training). tDCS did not affect contrast sensitivity or reading performance. These results suggest that tDCS may differentially affect the magnitude and sequence of visual recovery in a manner that is task specific to the type of visual rehabilitative training strategy employed. © 2012 International Neuromodulation Society.

  1. Organization of area hV5/MT+ in subjects with homonymous visual field defects.

    PubMed

    Papanikolaou, Amalia; Keliris, Georgios A; Papageorgiou, T Dorina; Schiefer, Ulrich; Logothetis, Nikos K; Smirnakis, Stelios M

    2018-04-06

    Damage to the primary visual cortex (V1) leads to a visual field loss (scotoma) in the retinotopically corresponding part of the visual field. Nonetheless, a small amount of residual visual sensitivity persists within the blind field. This residual capacity has been linked to activity observed in the middle temporal area complex (V5/MT+). However, it remains unknown whether the organization of hV5/MT+ changes following early visual cortical lesions. We studied the organization of area hV5/MT+ in five patients with dense homonymous defects in a quadrant of the visual field as a result of partial V1+ or optic radiation lesions. To do so, we developed a new method, which models the boundaries of population receptive fields directly from the BOLD signal of each voxel in the visual cortex. We found responses in hV5/MT+ arising inside the scotoma for all patients and identified two possible sources of activation: 1) responses might originate from partially lesioned parts of area V1 corresponding to the scotoma, and 2) responses can also originate independently of area V1 input, suggesting the existence of functional V1-bypassing pathways. Apparently, visually driven activity observed in hV5/MT+ is not sufficient to mediate conscious vision. More surprisingly, visually driven activity in corresponding regions of V1 and early extrastriate areas, including hV5/MT+, did not guarantee visual perception in the group of patients with post-geniculate lesions that we examined. This suggests that the fine coordination of visual activity patterns across visual areas may be an important determinant of whether visual perception persists following visual cortical lesions. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. The risk of pedestrian collisions with peripheral visual field loss.

    PubMed

    Peli, Eli; Apfelbaum, Henry; Berson, Eliot L; Goldstein, Robert B

    2016-12-01

    Patients with peripheral field loss complain of colliding with other pedestrians in open-space environments such as shopping malls. Field expansion devices (e.g., prisms) can create artificial peripheral islands of vision. We investigated the visual angle at which these islands can be most effective for avoiding pedestrian collisions, by modeling the collision risk density as a function of bearing angle of pedestrians relative to the patient. Pedestrians at all possible locations were assumed to be moving in all directions with equal probability within a reasonable range of walking speeds. The risk density was found to be highly anisotropic. It peaked at ≈45° eccentricity. Increasing pedestrian speed range shifted the risk to higher eccentricities. The risk density is independent of time to collision. The model results were compared to the binocular residual peripheral island locations of 42 patients with forms of retinitis pigmentosa. The natural residual island prevalence also peaked nasally at about 45° but temporally at about 75°. This asymmetry resulted in a complementary coverage of the binocular field of view. Natural residual binocular island eccentricities seem well matched to the collision-risk density function, optimizing detection of other walking pedestrians (nasally) and of faster hazards (temporally). Field expansion prism devices will be most effective if they can create artificial peripheral islands at about 45° eccentricities. The collision risk and residual island findings raise interesting questions about normal visual development.
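
    A rough Monte-Carlo rendering of the modeling idea, under simplifying assumptions (uniform pedestrian placement, straight-line motion, parameter values chosen for illustration rather than taken from the paper):

```python
# Sample pedestrians around a walker moving straight ahead along +x, and
# histogram the initial bearing of those whose paths pass within a collision
# radius. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
r = rng.uniform(1.0, 20.0, n)                    # initial distance (m)
bearing = rng.uniform(-np.pi, np.pi, n)          # angle re: walking direction
heading = rng.uniform(-np.pi, np.pi, n)          # pedestrian heading
speed = rng.uniform(0.8, 2.0, n)                 # pedestrian speed (m/s)
walker_speed, collide_radius = 1.4, 0.5

px, py = r * np.cos(bearing), r * np.sin(bearing)
vx = speed * np.cos(heading) - walker_speed      # motion relative to the walker
vy = speed * np.sin(heading)

t_star = np.clip(-(px * vx + py * vy) / (vx**2 + vy**2), 0, None)
d_min = np.hypot(px + vx * t_star, py + vy * t_star)
hits = d_min < collide_radius

hist, edges = np.histogram(np.degrees(np.abs(bearing[hits])), bins=18, range=(0, 180))
print(hist / hits.sum())   # collision-risk density vs. initial bearing eccentricity
```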

  3. The role of the right posterior parietal cortex in temporal order judgment.

    PubMed

    Woo, Sung-Ho; Kim, Ki-Hyun; Lee, Kyoung-Min

    2009-03-01

    The perceived order of two consecutive stimuli may not correspond to the order of their physical onsets. Such a disagreement presumably results from a difference in the speed of stimulus processing toward central decision mechanisms. Since previous evidence suggests that the right posterior parietal cortex (PPC) plays a role in modulating the processing speed of a visual target, we applied single-pulse TMS over the region in 14 normal subjects while they judged the temporal order of two consecutive visual stimuli. Stimulus-onset asynchrony (SOA) varied randomly between -100 and 100 ms in 20-ms steps (with a positive SOA when a target appeared in the right hemi-field before the other in the left), and a point of subjective simultaneity was measured for each subject. TMS pulses were time-locked at 50, 100, 150, and 200 ms after the onset of the first stimulus, and results in trials with TMS over the right PPC were compared with those in trials without TMS. TMS over the right PPC delayed the detection of a visual target in the contralateral (i.e., left) hemi-field by 24 (±7 SE) ms and 16 (±4 SE) ms when the stimulation was given at 50 and 100 ms after the first target onset. In contrast, TMS over the left PPC was not effective. These results show that the right PPC is important for the timely detection of a target appearing in the left visual field, especially in competition with another target simultaneously appearing in the opposite field.

  4. Emulating the Visual Receptive Field Properties of MST Neurons with a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Perrone, John A.; Stone, Leland S.

    1997-01-01

    We have previously proposed a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. The model detectors were designed to extract self-translation (heading), self-rotation, as well as the scene layout (relative distances) ahead of a moving observer, and are arranged in cortical-like heading maps to perform this function. Heading estimation from optic flow has been postulated by some to be implemented within the medial superior temporal (MST) area. Others have questioned whether MST neurons can fulfill this role because some of their receptive-field properties appear inconsistent with a role in heading estimation. To resolve this issue, we systematically compared MST single-unit responses with the outputs of model detectors under matched stimulus conditions. We found that the basic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support heading estimation and that the template model provides an explicit set of testable hypotheses which can guide future exploration of MST and adjacent areas within the primate superior temporal sulcus.
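
    To make the template idea concrete, the toy sketch below scores a noisy radial flow field against translation-only templates for a grid of candidate headings and picks the best match. It ignores rotation, depth structure, and the MT-like motion-sensor front end of the published model; all names and numbers are illustrative.

```python
# Toy heading-from-flow template matching (illustrative, not the published model).
import numpy as np

def translation_flow(heading_xy, points):
    """Unit flow directions at sample points for pure observer translation."""
    flow = points - heading_xy                   # expansion away from the heading
    return flow / np.linalg.norm(flow, axis=1, keepdims=True)

rng = np.random.default_rng(1)
points = rng.uniform(-30, 30, size=(500, 2))     # sample locations (deg)
true_heading = np.array([5.0, -2.0])
observed = translation_flow(true_heading, points)
observed += 0.2 * rng.standard_normal(observed.shape)   # noisy local motion

candidates = [np.array([x, y], float)
              for x in range(-10, 11, 2) for y in range(-10, 11, 2)]
scores = [np.sum(observed * translation_flow(c, points)) for c in candidates]
print("estimated heading:", candidates[int(np.argmax(scores))])
```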

  5. Comparison between visual field defect in pigmentary glaucoma and primary open-angle glaucoma.

    PubMed

    Nilforushan, Naveed; Yadgari, Maryam; Jazayeri, Anisalsadat

    2016-10-01

    To compare visual field defect patterns between pigmentary glaucoma and primary open-angle glaucoma. Retrospective, comparative study. Patients with a diagnosis of primary open-angle glaucoma (POAG) or pigmentary glaucoma (PG) in mild to moderate stages were enrolled in this study. Each of the 52 point locations in the total and pattern deviation plots (excluding 2 points adjacent to the blind spot) of the 24-2 Humphrey visual field, as well as six predetermined sectors, were compared using SPSS software version 20. Comparisons between the 2 groups were performed with the Student t test for continuous variables and the Chi-square test for categorical variables. Thirty-eight eyes of 24 patients with a mean age of 66.26 ± 11 years (range 48-81 years) in the POAG group and 36 eyes of 22 patients with a mean age of 50.52 ± 11 years (range 36-69 years) in the PG group were studied; the groups differed significantly in age (P = 0.00). Greater deviation was detected in points 1, 3, 4, and 32 in total deviation (P = 0.03, P = 0.015, P = 0.018, P = 0.023) and in points 3, 4, and 32 in pattern deviation (P = 0.015, P = 0.049, P = 0.030) in the POAG group; these points lie in the temporal part of the field. It seems that the temporal area of the visual field in primary open-angle glaucoma is more susceptible to damage than in pigmentary glaucoma.

  6. Brightness Induction and Suprathreshold Vision: Effects of Age and Visual Field

    PubMed Central

    McCourt, Mark E.; Leone, Lynnette M.; Blakeslee, Barbara

    2014-01-01

    A variety of visual capacities show significant age-related alterations. We assessed suprathreshold contrast and brightness perception across the lifespan in a large sample of healthy participants (N = 155; 142) ranging in age from 16–80 years. Experiment 1 used a quadrature-phase motion cancelation technique (Blakeslee & McCourt, 2008) to measure canceling contrast (in central vision) for induced gratings at two temporal frequencies (1 Hz and 4 Hz) at two test field heights (0.5° or 2° × 38.7°; 0.052 c/d). There was a significant age-related reduction in canceling contrast at 4 Hz, but not at 1 Hz. We find no age-related change in induction magnitude in the 1 Hz condition. We interpret the age-related decline in grating induction magnitude at 4 Hz to reflect a diminished capacity for inhibitory processing at higher temporal frequencies. In Experiment 2 participants adjusted the contrast of a matching grating (0.5° or 2° × 38.7°; 0.052 c/d) to equal that of both real (30% contrast, 0.052 c/d) and induced (McCourt, 1982) standard gratings (100% inducing grating contrast; 0.052 c/d). Matching gratings appeared in the upper visual field (UVF) and test gratings appeared in the lower visual field (LVF), and vice versa, at eccentricities of ±7.5°. Average induction magnitude was invariant with age for both test field heights. There was a significant age-related reduction in perceived contrast of stimuli in the LVF versus UVF for both real and induced gratings. PMID:25462024

  7. Robust selectivity to two-object images in human visual cortex

    PubMed Central

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron's preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see, however, [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain "special categories" are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
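
    A schematic of the decoding logic with entirely synthetic data (the real analysis used intracranial field potential features, and modeling a two-object response as the average of the single-object patterns is only a crude stand-in):

```python
# Train a linear classifier on isolated-object "responses", then ask whether
# both constituents of a two-object trial appear among its top-ranked classes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_ch, n_obj, n_train = 60, 4, 100
protos = rng.standard_normal((n_obj, n_ch))             # one pattern per object

X = np.vstack([p + 0.7 * rng.standard_normal((n_train, n_ch)) for p in protos])
y = np.repeat(np.arange(n_obj), n_train)
clf = LogisticRegression(max_iter=2000).fit(X, y)

pairs = [(a, b) for a in range(n_obj) for b in range(a + 1, n_obj)]
correct = 0
for a, b in pairs:
    for _ in range(50):
        x = (protos[a] + protos[b]) / 2 + 0.7 * rng.standard_normal(n_ch)
        top2 = np.argsort(clf.predict_proba([x])[0])[-2:]
        correct += set(int(i) for i in top2) == {a, b}
print("both objects recovered in", correct / (len(pairs) * 50), "of pair trials")
```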

  8. Emotion processing in the visual brain: a MEG analysis.

    PubMed

    Peyk, Peter; Schupp, Harald T; Elbert, Thomas; Junghöfer, Markus

    2008-06-01

    Recent functional magnetic resonance imaging (fMRI) and event-related brain potential (ERP) studies provide empirical support for the notion that emotional cues guide selective attention. Extending this line of research, whole head magneto-encephalogram (MEG) was measured while participants viewed in separate experimental blocks a continuous stream of either pleasant and neutral or unpleasant and neutral pictures, presented for 330 ms each. Event-related magnetic fields (ERF) were analyzed after intersubject sensor coregistration, complemented by minimum norm estimates (MNE) to explore neural generator sources. Both streams of analysis converge by demonstrating the selective emotion processing in an early (120-170 ms) and a late time interval (220-310 ms). ERF analysis revealed that the polarity of the emotion difference fields was reversed across early and late intervals suggesting distinct patterns of activation in the visual processing stream. Source analysis revealed the amplified processing of emotional pictures in visual processing areas with more pronounced occipito-parieto-temporal activation in the early time interval, and a stronger engagement of more anterior, temporal, regions in the later interval. Confirming previous ERP studies showing facilitated emotion processing, the present data suggest that MEG provides a complementary look at the spread of activation in the visual processing stream.

  9. Design and outcomes of an acoustic data visualization seminar.

    PubMed

    Robinson, Philip W; Pätynen, Jukka; Haapaniemi, Aki; Kuusinen, Antti; Leskinen, Petri; Zan-Bi, Morley; Lokki, Tapio

    2014-01-01

    Recently, the Department of Media Technology at Aalto University offered a seminar entitled Applied Data Analysis and Visualization. The course used spatial impulse response measurements from concert halls as the context to explore high-dimensional data visualization methods. Students were encouraged to represent source and receiver positions, spatial aspects, and temporal development of sound fields, frequency characteristics, and comparisons between halls, using animations and interactive graphics. The primary learning objectives were for the students to translate their skills across disciplines and gain a working understanding of high-dimensional data visualization techniques. Accompanying files present examples of student-generated, animated and interactive visualizations.

  10. Temporal and spatial tuning of dorsal lateral geniculate nucleus neurons in unanesthetized rats

    PubMed Central

    Sriram, Balaji; Meier, Philip M.

    2016-01-01

    Visual response properties of neurons in the dorsolateral geniculate nucleus (dLGN) have been well described in several species, but not in rats. Analysis of responses from the unanesthetized rat dLGN will be needed to develop quantitative models that account for visual behavior of rats. We recorded visual responses from 130 single units in the dLGN of 7 unanesthetized rats. We report the response amplitudes, temporal frequency, and spatial frequency sensitivities in this population of cells. In response to 2-Hz visual stimulation, dLGN cells fired 15.9 ± 11.4 spikes/s (mean ± SD) modulated by 10.7 ± 8.4 spikes/s about the mean. The optimal temporal frequency for full-field stimulation ranged from 5.8 to 19.6 Hz across cells. The temporal high-frequency cutoff ranged from 11.7 to 33.6 Hz. Some cells responded best to low temporal frequency stimulation (low pass), and others were strictly bandpass; most cells fell between these extremes. At 2- to 4-Hz temporal modulation, the spatial frequency of drifting grating that drove cells best ranged from 0.008 to 0.18 cycles per degree (cpd) across cells. The high-frequency cutoff ranged from 0.01 to 1.07 cpd across cells. The majority of cells were driven best by the lowest spatial frequency tested, but many were partially or strictly bandpass. We conclude that single units in the rat dLGN can respond vigorously to temporal modulation up to at least 30 Hz and spatial detail up to 1 cpd. Tuning properties were heterogeneous, but each fell along a continuum; we found no obvious clustering into discrete cell types along these dimensions. PMID:26936980
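
    The mean rate and the modulation about the mean quoted above are typically obtained as the DC and first-harmonic (F1) components of the response; a minimal sketch of that computation from spike times follows (the data format is assumed, and this is not the authors' code):

```python
# Mean rate and F1 modulation amplitude (spikes/s) of a spike train at the
# stimulus frequency, plus a toy 2 Hz-modulated example train.
import numpy as np

def mean_and_f1(spike_times_s, stim_freq_hz, duration_s):
    t = np.asarray(spike_times_s)
    mean_rate = t.size / duration_s
    phases = 2 * np.pi * stim_freq_hz * t
    f1 = 2 * np.abs(np.sum(np.exp(-1j * phases))) / duration_s
    return mean_rate, f1

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))
keep = rng.random(t.size) < 0.5 * (1 + np.cos(2 * np.pi * 2 * t))
print(mean_and_f1(t[keep], stim_freq_hz=2.0, duration_s=10.0))
```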

  11. Database integration for investigative data visualization with the Temporal Analysis System

    NASA Astrophysics Data System (ADS)

    Barth, Stephen W.

    1997-02-01

    This paper describes an effort to provide mechanisms for integration of existing law enforcement databases with the temporal analysis system (TAS) -- an application for analysis and visualization of military intelligence data. Such integration mechanisms are essential for bringing advanced military intelligence data handling software applications to bear on the analysis of data used in criminal investigations. Our approach involved applying a software application for intelligence message handling to the problem of data base conversion. This application provides mechanisms for distributed processing and delivery of converted data records to an end-user application. It also provides a flexible graphic user interface for development and customization in the field.

  12. A Novel Interhemispheric Interaction: Modulation of Neuronal Cooperativity in the Visual Areas

    PubMed Central

    Carmeli, Cristian; Lopez-Aguado, Laura; Schmidt, Kerstin E.; De Feo, Oscar; Innocenti, Giorgio M.

    2007-01-01

    Background: The cortical representation of the visual field is split along the vertical midline, with the left and the right hemi-fields projecting to separate hemispheres. Connections between the visual areas of the two hemispheres are abundant near the representation of the visual midline. It was suggested that they re-establish the functional continuity of the visual field by controlling the dynamics of the responses in the two hemispheres. Methods/Principal Findings: To understand if and how the interactions between the two hemispheres participate in processing visual stimuli, the synchronization of responses to identical or different moving gratings in the two hemi-fields was studied in anesthetized ferrets. The responses were recorded by multiple electrodes in the primary visual areas, and the synchronization of local field potentials across the electrodes was analyzed with a recent method derived from dynamical systems theory. Inactivating the visual areas of one hemisphere modulated the synchronization of the stimulus-driven activity in the other hemisphere. The modulation was stimulus-specific and was consistent with the fine morphology of callosal axons, in particular with the spatio-temporal pattern of activity that axonal geometry can generate. Conclusions/Significance: These findings describe a new kind of interaction between the cerebral hemispheres and highlight the role of axonal geometry in modulating aspects of cortical dynamics responsible for stimulus detection and/or categorization. PMID:18074012

  13. Steady-state multifocal visual evoked potential (ssmfVEP) using dartboard stimulation as a possible tool for objective visual field assessment.

    PubMed

    Horn, Folkert K; Selle, Franziska; Hohberger, Bettina; Kremers, Jan

    2016-02-01

    To investigate whether a conventional, monitor-based multifocal visual evoked potential (mfVEP) system can be used to record steady-state mfVEP (ssmfVEP) in healthy subjects and to study the effects of temporal frequency, electrode configuration and alpha waves. Multifocal pattern reversal VEP measurements were performed at 58 dartboard fields using VEP recording equipment. The responses were measured using m-sequences with four pattern reversals per m-step. Temporal frequencies were varied between 6 and 15 Hz. Recordings were obtained from nine normal subjects with a cross-shaped, four-electrode device (two additional channels were derived). Spectral analyses were performed on the responses at all locations. The signal to noise ratio (SNR) was computed for each response using the signal amplitude at the reversal frequency and the noise at the neighbouring frequencies. Most responses in the ssmfVEP were significantly above noise. The SNR was largest for an 8.6-Hz reversal frequency. The individual alpha electroencephalogram (EEG) did not strongly influence the results. The percentage of the records in which each of the 6 channels had the largest SNR was between 10.0 and 25.2 %. Our results in normal subjects indicate that reliable mfVEP responses can be achieved by steady-state stimulation using a conventional dartboard stimulator and multi-channel electrode device. The ssmfVEP may be useful for objective visual field assessment as spectrum analysis can be used for automated evaluation of responses. The optimal reversal frequency is 8.6 Hz. Alpha waves have only a minor influence on the analysis. Future studies must include comparisons with conventional mfVEP and psychophysical visual field tests.
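
    The SNR defined here -- spectral amplitude at the reversal frequency divided by the noise estimated from neighbouring frequency bins -- can be computed directly from the response spectrum. A minimal sketch under that definition, assuming an evenly sampled single-field response; the helper name and the number of neighbouring bins are assumptions, not taken from the paper:

```python
import numpy as np

def ssvep_snr(response, fs_hz, reversal_freq_hz, n_neighbours=5):
    """SNR = amplitude at the reversal frequency divided by the mean amplitude
    of the neighbouring frequency bins (excluding the signal bin itself)."""
    spectrum = np.abs(np.fft.rfft(response))
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs_hz)
    signal_bin = np.argmin(np.abs(freqs - reversal_freq_hz))
    lo = max(signal_bin - n_neighbours, 1)               # skip the DC bin
    hi = min(signal_bin + n_neighbours + 1, len(spectrum))
    neighbour_bins = [i for i in range(lo, hi) if i != signal_bin]
    noise = spectrum[neighbour_bins].mean()
    return spectrum[signal_bin] / noise

# Example: an 8.6-Hz steady-state response embedded in noise.
fs, dur = 600.0, 5.0                                     # 0.2-Hz resolution, 8.6 Hz on a bin
t = np.arange(0, dur, 1.0 / fs)
rec = 2.0 * np.sin(2 * np.pi * 8.6 * t) + np.random.default_rng(1).normal(0, 1, t.size)
print(ssvep_snr(rec, fs, 8.6))
```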

  14. Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information.

    PubMed

    Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S

    2018-03-01

    Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
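
    The record describes estimating a continuous event density over space and time and then extracting flow vectors with a gravity model. The sketch below is only a loose illustration of that idea, not the authors' implementation: density is estimated on a grid for two consecutive time slices, and the flow vector at each cell is the gravity-style pull exerted by cells whose density increases at the next step (grid size, kernel width, and all names are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gravity_flow(density_t0, density_t1, eps=1e-6):
    """Flow vectors (dy, dx) per grid cell from a simple gravity model:
    each cell is pulled toward cells whose density grows at the next step,
    with force proportional to the mass product over squared distance."""
    h, w = density_t0.shape
    gain = np.clip(density_t1 - density_t0, 0, None)      # "attracting" mass
    ys, xs = np.mgrid[0:h, 0:w]
    flow = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            dy, dx = ys - y, xs - x
            d2 = dy * dy + dx * dx + eps
            f = density_t0[y, x] * gain / d2               # gravity magnitude
            norm = np.sqrt(d2)
            flow[y, x, 0] = np.sum(f * dy / norm)
            flow[y, x, 1] = np.sum(f * dx / norm)
    return flow

# Example: events drifting east between two time slices.
grid0, grid1 = np.zeros((20, 20)), np.zeros((20, 20))
grid0[10, 5] += 50
grid1[10, 12] += 50
flow = gravity_flow(gaussian_filter(grid0, 2), gaussian_filter(grid1, 2))
print(flow[10, 5])   # second component (dx) should be clearly positive
```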

  15. Foveal and peripheral fields of vision influences perceptual skill in anticipating opponents' attacking position in volleyball.

    PubMed

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2013-09-01

    The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between-group differences in prediction accuracy, with higher-skilled players performing better than lower-skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Each of these factors interacted with expertise separately, but not in combination. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception.

  16. Temporal Profile of Functional Visual Rehabilitative Outcomes Modulated by Transcranial Direct Current Stimulation (tDCS)

    PubMed Central

    Plow, Ela B.; Obretenova, Souzana N.; Jackson, Mary Lou; Merabet, Lotfi B.

    2012-01-01

    Objectives We have previously reported that transcranial direct current stimulation (tDCS) delivered to the occipital cortex enhances visual functional recovery when combined with 3 months of computer-based rehabilitative training in patients with hemianopia. The principal objective of this study was to evaluate the temporal sequence of effects of tDCS on visual recovery as they appear over the course of training and across different indicators of visual function. Methods Primary objective outcome measures were i) shifts in visual field border and ii) stimulus detection accuracy within the affected hemifield. These were compared between patients randomized to either vision restoration therapy (VRT) combined with active tDCS or VRT paired with sham tDCS. Training comprised two half-hour sessions, three times a week, for 3 months. Primary outcome measures were collected at baseline (pretest), monthly interim intervals, and at posttest (3 months). As secondary outcome measures, contrast sensitivity and reading performance were collected at pretest and posttest time-points only. Results Active tDCS combined with VRT accelerated the recovery of stimulus detection as between-group differences appeared within the first month of training. In contrast, a shift in the visual field border was only evident at posttest (after 3 months of training). TDCS did not affect contrast sensitivity or reading performance. Conclusions These results suggest that tDCS may differentially affect the magnitude and sequence of visual recovery in a manner that is task-specific to the type of visual rehabilitative training strategy employed. PMID:22376226

  17. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography

    NASA Astrophysics Data System (ADS)

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-01

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science, and it requires measurements with subatomic spatial and attosecond temporal resolution. Time-resolved photoelectron holography in strong-field tunneling ionization holds the promise of accessing this realm, but it has remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H2+, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolution. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging field of attosecond chemistry.

  18. Direct Visualization of Valence Electron Motion Using Strong-Field Photoelectron Holography.

    PubMed

    He, Mingrui; Li, Yang; Zhou, Yueming; Li, Min; Cao, Wei; Lu, Peixiang

    2018-03-30

    Watching the valence electron move in molecules on its intrinsic timescale has been one of the central goals of attosecond science, and it requires measurements with subatomic spatial and attosecond temporal resolution. Time-resolved photoelectron holography in strong-field tunneling ionization holds the promise of accessing this realm, but it has remained a challenging task. Here we reveal how the information of valence electron motion is encoded in the hologram of the photoelectron momentum distribution (PEMD) and develop a novel retrieval approach. As a demonstration, applying it to the PEMDs obtained by solving the time-dependent Schrödinger equation for the prototypical molecule H_{2}^{+}, the attosecond charge migration is directly visualized with picometer spatial and attosecond temporal resolution. Our method represents a general approach for monitoring attosecond charge migration in more complex polyatomic and biological molecules, which is one of the central tasks in the newly emerging field of attosecond chemistry.

  19. Measuring temporal summation in visual detection with a single-photon source.

    PubMed

    Holmes, Rebecca; Victora, Michelle; Wang, Ranxiao Frances; Kwiat, Paul G

    2017-11-01

    Temporal summation is an important feature of the visual system which combines visual signals that arrive at different times. Previous research estimated complete summation to last for 100 ms for stimuli judged "just detectable." We measured the full range of temporal summation for much weaker stimuli using a new paradigm and a novel light source, developed in the field of quantum optics for generating small numbers of photons with precise timing characteristics and reduced variance in photon number. Dark-adapted participants judged whether a light was presented to the left or right of their fixation in each trial. In Experiment 1, stimuli contained a stream of photons delivered at a constant rate while the duration was systematically varied. Accuracy should increase with duration as long as the later photons can be integrated with the preceding ones into a single signal. The temporal integration window was estimated as the point at which performance no longer improved, and was found to be 650 ms on average. In Experiment 2, the duration of the visual stimuli was kept short (100 ms or <30 ms) while the number of photons was varied to explore the efficiency of summation over the integration window compared to Experiment 1. There was some indication that temporal summation remains efficient over the integration window, although there is variation between individuals. The relatively long integration window measured in this study may be relevant to studies of the absolute visual threshold, i.e., tests of single-photon vision, where "single" photons should be separated by greater than the integration window to avoid summation. Copyright © 2017 Elsevier Ltd. All rights reserved.
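
    The integration window here is defined as the duration beyond which accuracy stops improving. One common way to locate such a point is a two-segment ("hinge") fit of accuracy against duration; the sketch below illustrates that approach with made-up data and is not necessarily the analysis used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def hinge(d, a, b, window):
    """Accuracy rises with duration d until the integration window, then stays flat."""
    return a + b * np.minimum(d, window)

# Hypothetical accuracy (proportion correct) vs. stimulus duration in ms.
durations = np.array([50, 100, 200, 400, 650, 900, 1200, 1500], float)
accuracy  = np.array([0.55, 0.58, 0.64, 0.71, 0.78, 0.79, 0.78, 0.79])

params, _ = curve_fit(hinge, durations, accuracy, p0=(0.5, 3e-4, 500.0))
a, b, window = params
print(f"estimated integration window ~ {window:.0f} ms")
```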

  20. Predictive Feedback Can Account for Biphasic Responses in the Lateral Geniculate Nucleus

    PubMed Central

    Jehee, Janneke F. M.; Ballard, Dana H.

    2009-01-01

    Biphasic neural response properties, where the optimal stimulus for driving a neural response changes from one stimulus pattern to the opposite stimulus pattern over short periods of time, have been described in several visual areas, including lateral geniculate nucleus (LGN), primary visual cortex (V1), and middle temporal area (MT). We describe a hierarchical model of predictive coding and simulations that capture these temporal variations in neuronal response properties. We focus on the LGN-V1 circuit and find that after training on natural images the model exhibits the brain's LGN-V1 connectivity structure, in which the structure of V1 receptive fields is linked to the spatial alignment and properties of center-surround cells in the LGN. In addition, the spatio-temporal response profile of LGN model neurons is biphasic in structure, resembling the biphasic response structure of neurons in cat LGN. Moreover, the model displays a specific pattern of influence of feedback, where LGN receptive fields that are aligned over a simple cell receptive field zone of the same polarity decrease their responses while neurons of opposite polarity increase their responses with feedback. This phase-reversed pattern of influence was recently observed in neurophysiology. These results corroborate the idea that predictive feedback is a general coding strategy in the brain. PMID:19412529

  1. Peripheral myopization and visual performance with experimental rigid gas permeable and soft contact lens design.

    PubMed

    Pauné, J; Queiros, A; Quevedo, L; Neves, H; Lopes-Ferreira, D; González-Méijome, J M

    2014-12-01

    To evaluate the performance of two experimental contact lenses (CL) designed to induce relative peripheral myopic defocus in myopic eyes. Ten right eyes of 10 subjects were fitted with three different CL: a soft experimental lens (ExpSCL), a rigid gas permeable experimental lens (ExpRGP) and a standard RGP lens made of the same material (StdRGP). Central and peripheral refraction was measured using a Grand Seiko open-field autorefractometer across the central 60° of the horizontal visual field. Ocular aberrations were measured with a Hartmann-Shack aberrometer, and monocular contrast sensitivity function (CSF) was measured with a VCTS6500 without and with the three contact lenses. Both experimental lenses were able to significantly increase the relative peripheral myopic defocus, by up to -0.50 D in the nasal field and -1.00 D in the temporal field (p<0.05). The ExpRGP induced a significantly higher myopic defocus in the temporal field compared to the ExpSCL. ExpSCL induced significantly lower levels of spherical-like HOA than ExpRGP for the 5 mm pupil size (p<0.05). Both experimental lenses kept CSF within normal limits without any statistically significant change from baseline (p>0.05). The RGP lens design seems to be more effective at inducing a significant myopic change in the relative peripheral refractive error. Both lenses preserve good visual performance. The worsened optical quality observed in ExpRGP was due to increased coma-like and spherical-like HOA. However, no impact on the visual quality as measured by CSF was observed. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  2. Peripheral refraction in 7- and 14-year-old children in central China: the Anyang Childhood Eye Study.

    PubMed

    Li, Shi-Ming; Li, Si-Yuan; Liu, Luo-Ru; Zhou, Yue-Hua; Yang, Zhou; Kang, Meng-Tian; Li, He; Yang, Xiao-Yuan; Wang, Yi-Peng; Zhan, Si-Yan; Mitchell, Paul; Wang, Ningli; Atchison, David A

    2015-05-01

    To determine the distribution of peripheral refraction, including astigmatism, in 7- and 14-year-old Chinese children. A total of 2134 7-year-old and 1780 14-year-old children underwent cycloplegic central and horizontal peripheral refraction (at 15° and 30° in the temporal and nasal visual fields). The 7- and 14-year-old groups included, respectively, 9 and 594 children with moderate and high myopia (≤-3.0 D), 259 and 831 with low myopia (-2.99 to -0.5 D), 1207 and 305 with emmetropia (-0.49 to +1.0 D), and 659 and 50 with hyperopia (>1.0 D). Myopic children had relative peripheral hyperopia while hyperopic and emmetropic children had relative peripheral myopia, with greater changes in relative peripheral refraction occurring in the nasal than the temporal visual field. The older group had greater relative peripheral hyperopia and higher peripheral J180. Both age groups showed positive slopes of J45 across the visual field, with greater slopes in the older group. Myopic children in mainland China have relative peripheral hyperopia while hyperopic and emmetropic children have relative peripheral myopia. Significant differences exist between 7- and 14-year-old children, with the latter showing more relative peripheral hyperopia, greater rate of change in J45 across the visual field, and higher peripheral J180. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
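
    J180 and J45 are the astigmatic components of the standard power-vector decomposition of a sphero-cylindrical refraction, and relative peripheral refraction is the peripheral spherical equivalent (M) minus the central M. A minimal sketch of that conversion (the function name and example prescriptions are mine, not the study's data):

```python
import numpy as np

def power_vector(sphere_d, cyl_d, axis_deg):
    """Convert sphere/cylinder/axis to power-vector components.
    M    : spherical equivalent (D)
    J180 : 0/180-degree astigmatic component (often written J0)
    J45  : oblique astigmatic component
    """
    ax = np.deg2rad(axis_deg)
    m = sphere_d + cyl_d / 2.0
    j180 = -(cyl_d / 2.0) * np.cos(2.0 * ax)
    j45 = -(cyl_d / 2.0) * np.sin(2.0 * ax)
    return m, j180, j45

# Relative peripheral refraction at one field angle = peripheral M - central M.
m_c, *_ = power_vector(-2.00, -0.50, 180)               # central refraction
m_p, j180_p, j45_p = power_vector(-1.25, -1.00, 170)     # e.g. 30 deg nasal field
print("relative peripheral refraction:", round(m_p - m_c, 2), "D")
```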

  3. Optical coherence tomography detects characteristic retinal nerve fiber layer thickness corresponding to band atrophy of the optic discs.

    PubMed

    Kanamori, Akiyasu; Nakamura, Makoto; Matsui, Noriko; Nagai, Azusa; Nakanishi, Yoriko; Kusuhara, Sentaro; Yamada, Yuko; Negi, Akira

    2004-12-01

    To analyze retinal nerve fiber layer (RNFL) thickness in eyes with band atrophy by use of optical coherence tomography (OCT) and to evaluate the ability of OCT to detect this characteristic pattern of RNFL loss. Cross-sectional, retrospective study. Thirty-four eyes of 18 patients with bitemporal hemianopia caused by optic chiasm compression by chiasmal tumors were studied. All eyes were divided into 3 groups according to visual field loss grading after Goldmann perimetry. Retinal nerve fiber layer thickness measurements with OCT. Retinal nerve fiber layer thickness around the optic disc was measured by OCT (3.4-mm diameter circle). Calculation of the changes in OCT parameters, including the horizontal (nasal + temporal quadrant RNFL thickness) and vertical (superior + inferior quadrant RNFL thickness) values, was based on data from 160 normal eyes. Comparison between the 3 visual field grading groups was done with the analysis of variance test. The receiver operating characteristic (ROC) curves for the horizontal and vertical values were calculated, and the areas under the curve (AUC) were compared. Retinal nerve fiber layer thickness in eyes with band atrophy decreased in all OCT parameters. The reduction rate in average and temporal RNFL thickness and horizontal value was correlated with visual field grading. The AUC of the horizontal value was 0.970 ± 0.011, which was significantly different from the AUC of the vertical value (0.903 ± 0.022). The degree of RNFL thickness reduction correlated with that of visual field defects. Optical coherence tomography was able to identify the characteristic pattern of RNFL loss in these eyes.
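
    The comparison of horizontal and vertical values rests on the area under the ROC curve for discriminating band-atrophy eyes from normal eyes. A minimal sketch of that comparison with hypothetical quadrant thicknesses (the arrays are illustrative, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical quadrant RNFL thicknesses (microns): columns = nasal, temporal,
# superior, inferior; label 1 = band atrophy, 0 = normal control eye.
rnfl = np.array([[60, 45, 110, 105],     # band atrophy
                 [55, 40, 108, 100],     # band atrophy
                 [85, 75, 112, 108],     # normal
                 [90, 78, 109, 104]])    # normal
label = np.array([1, 1, 0, 0])

horizontal = rnfl[:, 0] + rnfl[:, 1]     # nasal + temporal
vertical = rnfl[:, 2] + rnfl[:, 3]       # superior + inferior

# Lower thickness indicates atrophy, so score with the negated value.
print("AUC horizontal:", roc_auc_score(label, -horizontal))
print("AUC vertical:  ", roc_auc_score(label, -vertical))
```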

  4. Emulating the visual receptive-field properties of MST neurons with a template model of heading estimation

    NASA Technical Reports Server (NTRS)

    Perrone, J. A.; Stone, L. S.

    1998-01-01

    We have proposed previously a computational neural-network model by which the complex patterns of retinal image motion generated during locomotion (optic flow) can be processed by specialized detectors acting as templates for specific instances of self-motion. The detectors in this template model respond to global optic flow by sampling image motion over a large portion of the visual field through networks of local motion sensors with properties similar to those of neurons found in the middle temporal (MT) area of primate extrastriate visual cortex. These detectors, arranged within cortical-like maps, were designed to extract self-translation (heading) and self-rotation, as well as the scene layout (relative distances) ahead of a moving observer. We then postulated that heading from optic flow is directly encoded by individual neurons acting as heading detectors within the medial superior temporal (MST) area. Others have questioned whether individual MST neurons can perform this function because some of their receptive-field properties seem inconsistent with this role. To resolve this issue, we systematically compared MST responses with those of detectors from two different configurations of the model under matched stimulus conditions. We found that the characteristic physiological properties of MST neurons can be explained by the template model. We conclude that MST neurons are well suited to support self-motion estimation via a direct encoding of heading and that the template model provides an explicit set of testable hypotheses that can guide future exploration of MST and adjacent areas within the superior temporal sulcus.
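
    At its core, a template detector of this kind scores a candidate heading by how well the measured optic-flow vectors match the flow field that heading would generate, summed over motion sensors across the visual field. The sketch below is a highly simplified version for pure translation -- it omits rotation, depth weighting, and the MT-like sensor tuning of the actual model -- so treat it only as an illustration of the matching step:

```python
import numpy as np

def heading_template_response(points, flow, foe_candidate):
    """Sum of cosine similarities between measured flow vectors and the
    radial directions predicted by a candidate focus of expansion (FOE)."""
    radial = points - foe_candidate                       # predicted flow directions
    radial /= np.linalg.norm(radial, axis=1, keepdims=True) + 1e-9
    unit_flow = flow / (np.linalg.norm(flow, axis=1, keepdims=True) + 1e-9)
    return np.sum(np.sum(radial * unit_flow, axis=1))

# Example: synthetic expansion flow with the true FOE at (0, 0).
rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, size=(200, 2))
true_foe = np.array([0.0, 0.0])
flow = (pts - true_foe) * 0.1 + rng.normal(0, 0.01, size=pts.shape)

candidates = [np.array([0.0, 0.0]), np.array([0.5, 0.0]), np.array([-0.5, 0.3])]
scores = [heading_template_response(pts, flow, c) for c in candidates]
print(scores)            # the true FOE should score highest
```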

  5. Timing, timing, timing: Fast decoding of object information from intracranial field potentials in human visual cortex

    PubMed Central

    Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    Summary The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
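
    Decoding analyses of this kind ask, at each post-stimulus time point, how well stimulus category can be read out from the multi-electrode signal on single trials. A minimal sketch of a sliding-window, cross-validated classifier on synthetic data; the window length, classifier, and injected signal are my assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_electrodes, n_times = 200, 32, 300           # 300 samples ~ 300 ms at 1 kHz
X = rng.normal(0, 1, (n_trials, n_electrodes, n_times))
y = rng.integers(0, 2, n_trials)                          # two object categories
# Inject a category-dependent signal starting ~100 ms post-stimulus.
X[y == 1, :8, 100:] += 0.6

window = 25                                               # 25-ms sliding window
accuracy = []
for t0 in range(0, n_times - window, window):
    feats = X[:, :, t0:t0 + window].mean(axis=2)          # average within the window
    acc = cross_val_score(LinearSVC(dual=False), feats, y, cv=5).mean()
    accuracy.append((t0, acc))
print(accuracy[:6])       # accuracy should rise above chance after ~100 ms
```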

  6. Exploration of spatio-temporal patterns of students' movement in field trip by visualizing the log data

    NASA Astrophysics Data System (ADS)

    Cho, Nahye; Kang, Youngok

    2018-05-01

    Large volumes of log data, in addition to user input data, are being generated as the number of mobile and web users continues to grow, and studies that explore the patterns and meanings of various movement activities using these log data are also increasing rapidly. In the field of education, meanwhile, the importance of field trips has been recognized as creative education is emphasized, and with the development of information technology the use of mobile devices during field trips is growing. In this study, we explore the patterns of students' activity by visualizing the log data generated from a high school field trip with mobile devices.

  7. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys

    PubMed Central

    Liu, Bing

    2017-01-01

    Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
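
    Recovering a filter from stimulus-response correlations amounts to regressing the response on the recent stimulus history. The toy, one-dimensional sketch below recovers a temporal filter by ridge regression; the spatial dimensions, pursuit dynamics, and the authors' exact estimator are omitted, so it only illustrates the general idea:

```python
import numpy as np

def estimate_temporal_filter(stimulus, response, n_lags, ridge=1e-2):
    """Ridge-regression estimate of the filter mapping stimulus history to
    response: response[t] ~ sum_k filter[k] * stimulus[t - k]."""
    rows = [stimulus[t - n_lags + 1:t + 1][::-1] for t in range(n_lags - 1, len(stimulus))]
    X = np.array(rows)
    y = response[n_lags - 1:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ y)

# Example: recover a known 80-sample filter from noisy responses (1-ms steps).
rng = np.random.default_rng(5)
true_filter = np.exp(-np.arange(80) / 20.0) * np.sin(np.arange(80) / 10.0)
stim = rng.normal(0, 1, 5000)
resp = np.convolve(stim, true_filter)[:5000] + rng.normal(0, 0.5, 5000)
est = estimate_temporal_filter(stim, resp, n_lags=80)
print(np.corrcoef(est, true_filter)[0, 1])   # should be close to 1
```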

  8. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  9. Long-term study of patients with congenital pit of the optic nerve and persistent macular detachment.

    PubMed

    Theodossiadis, G P; Panopoulos, M; Kollia, A K; Georgopoulos, G

    1992-08-01

    During the period 1970-87 we evaluated the changes of the optic disc, peripapillary area, detached macula and visual acuity in 16 cases with congenital pit of the optic nerve and macular detachment. In 9 of the 16 cases (56%) the study revealed an increase in the dimensions of the pit or changes in its color, findings that were directly related to the duration of the macular detachment. Chorioretinal scarring, pigment migration, or both were also noted, mainly at the temporal margin of the optic disc. In 5/16 cases we found an extension of the macular elevation during follow-up. In 10 of the 16 cases the retinal elevation covered the larger portion of the mid-periphery temporally. In 7/16 cases the final visual acuity remained unchanged; in 9/16 cases it deteriorated, although the difference between initial and final visual acuity in these 9 cases was negligible. During the follow-up period, deterioration of the visual fields was also noted.

  10. Evaluation of peripheral binocular visual field in patients with glaucoma: a pilot study

    PubMed Central

    Ana, Banc; Cristina, Stan; Dorin, Chiselita

    2016-01-01

    Objective: The objective of this study was to evaluate the peripheral binocular visual field (PBVF) in patients with glaucoma using the threshold strategy of Humphrey Field Analyzer. Methods: We conducted a case-control pilot study in which we enrolled 59 patients with glaucoma and 20 controls. All participants were evaluated using a custom PBVF test and central 24° monocular visual field tests for each eye using the threshold strategy. The central binocular visual field (CBVF) was predicted from the monocular tests using the most sensitive point at each field location. The glaucoma patients were grouped according to Hodapp classification and age. The PBVF was compared to controls and the relationship between the PBVF and CBVF was tested. Results: The areas of frame-induced artefacts were determined (over 50° in each temporal field, 24° superiorly and 45° inferiorly) and excluded from interpretation. The patients presented a statistically significant generalized decrease of the peripheral retinal sensitivity compared to controls for Hodapp initial stage - groups aged 50-59 (t = 11.93 > 2.06; p < 0.05) and 60-69 (t = 7.55 > 2.06; p < 0.05). For the initial Hodapp stage there was no significant relationship between PBVF and CBVF (r = 0.39). For the moderate and advanced Hodapp stages, the interpretation of data was done separately for each patient. Conclusions: This pilot study suggests that glaucoma patients present a decrease of PBVF compared to controls and CBVF cannot predict the PBVF in glaucoma. Abbreviations: CBVF = central binocular visual field, PBVF = peripheral binocular visual field, MD = mean deviation PMID:27220228
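
    The best-location rule used here to predict the central binocular field -- take the more sensitive of the two monocular values at each test point -- is a one-line operation on the two sensitivity grids. A minimal sketch with hypothetical values in dB:

```python
import numpy as np

# Hypothetical monocular sensitivity grids (dB); NaN marks an untested point.
right_eye = np.array([[28.0, 30.0, np.nan],
                      [25.0, 18.0, 29.0]])
left_eye  = np.array([[26.0, 31.0, 27.0],
                      [np.nan, 24.0, 22.0]])

# Predicted binocular sensitivity: most sensitive eye at each field location.
predicted_cbvf = np.fmax(right_eye, left_eye)   # fmax ignores a NaN when the other value is present
print(predicted_cbvf)
```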

  11. [Atypical optic neuritis in systemic lupus erythematosus (SLE)].

    PubMed

    Eckstein, A; Kötter, I; Wilhelm, H

    1995-11-01

    A 67-year-old woman experienced acute unilateral visual loss accompanied by pain with eye movements. There was a marked relative afferent pupillary defect and a nerve fiber bundle defect in the upper half of the visual field. Optic discs were normal. After 4 days vision worsened to motion detection and only a temporal island was left in the visual field. The optic disc margin was blurred. She had been suffering from renal insufficiency for thirty years. Immunoserologic examination revealed elevated ANA and dsDNA antibody titers. Optic neuritis in systemic lupus erythematosus was diagnosed; it is called atypical because of its association with a systemic disease and the advanced age of the patient. The patient was treated with 100 mg prednisolone/day, slowly tapered. Within 6 weeks visual acuity improved to 0.6 and the visual field normalized except for a small nerve fiber bundle defect. Autoimmune optic neuritis often responds to treatment with corticosteroids, and early onset of treatment is important. Immunopathologic examinations are an important diagnostic tool in atypical optic neuritis, and their results may even have consequences for the treatment of the underlying disease.

  12. Visuomotor adaptation to a visual rotation is gravity dependent.

    PubMed

    Toma, Simone; Sciutti, Alessandra; Papaxanthis, Charalambos; Pozzo, Thierry

    2015-03-15

    Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system by integrating the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamic of the adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90° visuomotor rotation, where a horizontal movement was associated with a vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initial symmetric velocity profiles specific for a horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect that increased with repetitions was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure. Copyright © 2015 the American Physiological Society.

  13. Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel

    2017-03-01

    Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate those particles of interest for revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights into their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we choose to use direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge to overcome. In the end we use a combination of deformation methods such as deformation textures in order to fit a normal torus into their deformed torus, allowing us to store the data in toroidal coordinates in order to take advantage of modern GPUs to perform the interpolation along the field lines for us. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL. In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.

  15. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback with issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of (1) delayed and (2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.

  16. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors 0270-6474/15/3510268-13$15.00/0.

  17. Experimental Investigation of the Flow Structure over a Delta Wing Via Flow Visualization Methods.

    PubMed

    Shen, Lu; Chen, Zong-Nan; Wen, Chihyung

    2018-04-23

    It is well known that the flow field over a delta wing is dominated by a pair of counter rotating leading edge vortices (LEV). However, their mechanism is not well understood. The flow visualization technique is a promising non-intrusive method to illustrate the complex flow field spatially and temporally. A basic flow visualization setup consists of a high-powered laser and optic lenses to generate the laser sheet, a camera, a tracer particle generator, and a data processor. The wind tunnel setup, the specifications of devices involved, and the corresponding parameter settings are dependent on the flow features to be obtained. Normal smoke wire flow visualization uses a smoke wire to demonstrate the flow streaklines. However, the performance of this method is limited by poor spatial resolution when it is conducted in a complex flow field. Therefore, an improved smoke flow visualization technique has been developed. This technique illustrates the large-scale global LEV flow field and the small-scale shear layer flow structure at the same time, providing a valuable reference for later detailed particle image velocimetry (PIV) measurement. In this paper, the application of the improved smoke flow visualization and PIV measurement to study the unsteady flow phenomena over a delta wing is demonstrated. The procedure and cautions for conducting the experiment are listed, including wind tunnel setup, data acquisition, and data processing. The representative results show that these two flow visualization methods are effective techniques for investigating the three-dimensional flow field qualitatively and quantitatively.

  18. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of "Green" Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex.

    PubMed

    Rauscher, Franziska G; Plant, Gordon T; James-Galton, Merle; Barbur, John L

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia, often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (CIE) (x, y) chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show a symmetric increase in thresholds towards the long-wavelength ("red") and middle-wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds, with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information but are spectrally non-opponent.

  19. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has yet received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  20. Emerging feed-forward inhibition allows the robust formation of direction selectivity in the developing ferret visual cortex

    PubMed Central

    Escobar, Gina M.; Maffei, Arianna; Miller, Paul

    2014-01-01

    The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528

  1. Cortical metabolic activity matches the pattern of visual suppression in strabismus.

    PubMed

    Adams, Daniel L; Economides, John R; Sincich, Lawrence C; Horton, Jonathan C

    2013-02-27

    When an eye becomes deviated in early childhood, a person does not experience double vision, although the globes are aimed at different targets. The extra image is prevented from reaching perception in subjects with alternating exotropia by suppression of each eye's peripheral temporal retina. To test the impact of visual suppression on neuronal activity in primary (striate) visual cortex, the pattern of cytochrome oxidase (CO) staining was examined in four macaques raised with exotropia by disinserting the medial rectus muscles shortly following birth. No ocular dominance columns were visible in opercular cortex, where the central visual field is represented, indicating that signals coming from the central retina in each eye were perceived. However, the border strips at the edges of ocular dominance columns appeared pale, reflecting a loss of activity in binocular cells from disruption of fusion. In calcarine cortex, where the peripheral visual field is represented, there were alternating pale and dark bands resembling ocular dominance columns. To interpret the CO staining pattern, [(3)H]proline was injected into the right eye in two monkeys. In the right calcarine cortex, the pale CO columns matched the labeled proline columns of the right eye. In the left calcarine cortex, the pale CO columns overlapped the unlabeled columns of the left eye in the autoradiograph. Therefore, metabolic activity was reduced in the ipsilateral eye's ocular dominance columns which serve peripheral temporal retina, in a fashion consistent with the topographic organization of suppression scotomas in humans with exotropia.

  2. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.
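
    As a rough, back-of-the-envelope illustration of why Dmax can grow linearly with eccentricity (and not the model's actual receptive-field machinery), one can assume Dmax scales with the inverse cortical magnification, M(E) = A/(E + E2), whose inverse is linear in eccentricity E. The constants below are commonly cited human V1 values and the proportionality factor is arbitrary:

```python
import numpy as np

# Illustrative only: assume Dmax is proportional to the inverse cortical
# magnification, M(E) = A / (E + E2), so Dmax(E) = k * (E + E2) / A.
A, E2 = 17.3, 0.75      # commonly cited V1 constants (mm, deg)
k = 1.5                 # arbitrary proportionality constant (assumption)

eccentricities = np.array([0.0, 2.0, 5.0, 10.0, 20.0])   # degrees
dmax = k * (eccentricities + E2) / A
for e, d in zip(eccentricities, dmax):
    print(f"eccentricity {e:5.1f} deg -> Dmax ~ {d:.2f} deg")
```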

  3. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either "element motion" or "group motion." For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly bracketed two Ternus visual frames (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair with temporal configurations similar to those in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  4. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
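
    In the predictive-coding scheme summarized here, feedback carries the prediction U·r of the input and the feedforward signal carries the residual error, with both the responses r and the basis vectors (receptive fields) U adjusted to reduce that error. The sketch below is a compressed gradient version of this idea (Rao-Ballard style) on random stand-in patches; the record's actual implementation uses a matching-pursuit variant, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)
n_pixels, n_units, n_patches = 64, 32, 2000
patches = rng.normal(0, 1, (n_patches, n_pixels))          # stand-in for image patches
U = rng.normal(0, 0.1, (n_pixels, n_units))                 # receptive fields (basis)

lr_r, lr_U, sparsity, n_r_steps = 0.1, 0.01, 0.05, 30
for x in patches:
    r = np.zeros(n_units)
    for _ in range(n_r_steps):                              # infer responses for this input
        error = x - U @ r                                   # feedforward residual (input - prediction)
        r += lr_r * (U.T @ error - sparsity * np.sign(r))   # error-driven update with sparsity
    U += lr_U * np.outer(x - U @ r, r)                      # learn receptive fields from the residual
    U /= np.linalg.norm(U, axis=0, keepdims=True) + 1e-9    # keep basis columns bounded
print(U.shape)   # with natural image patches, columns tend toward Gabor-like filters
```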

  5. Neurofilament protein defines regional patterns of cortical organization in the macaque monkey visual system: a quantitative immunohistochemical analysis

    NASA Technical Reports Server (NTRS)

    Hof, P. R.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1995-01-01

    Visual function in monkeys is subserved at the cortical level by a large number of areas defined by their specific physiological properties and connectivity patterns. For most of these cortical fields, a precise index of their degree of anatomical specialization has not yet been defined, although many regional patterns have been described using Nissl or myelin stains. In the present study, an attempt has been made to elucidate the regional characteristics, and to varying degrees boundaries, of several visual cortical areas in the macaque monkey using an antibody to neurofilament protein (SMI32). This antibody labels a subset of pyramidal neurons with highly specific regional and laminar distribution patterns in the cerebral cortex. Based on the staining patterns and regional quantitative analysis, as many as 28 cortical fields were reliably identified. Each field had a homogeneous distribution of labeled neurons, except area V1, where increases in layer IVB cell and in Meynert cell counts paralleled the increase in the degree of eccentricity in the visual field representation. Within the occipitotemporal pathway, areas V3 and V4 and fields in the inferior temporal cortex were characterized by a distinct population of neurofilament-rich neurons in layers II-IIIa, whereas areas located in the parietal cortex and part of the occipitoparietal pathway had a consistent population of large labeled neurons in layer Va. The mediotemporal areas MT and MST displayed a distinct population of densely labeled neurons in layer VI. Quantitative analysis of the laminar distribution of the labeled neurons demonstrated that the visual cortical areas could be grouped in four hierarchical levels based on the ratio of neuron counts between infragranular and supragranular layers, with the first (areas V1, V2, V3, and V3A) and third (temporal and parietal regions) levels characterized by low ratios and the second (areas MT, MST, and V4) and fourth (frontal regions) levels characterized by high to very high ratios. Such density trends may correspond to differential representation of corticocortically (and corticosubcortically) projecting neurons at several functional steps in the integration of the visual stimuli. In this context, it is possible that neurofilament protein is crucial for the unique capacity of certain subsets of neurons to perform the highly precise mapping functions of the monkey visual system.

  6. Delayed visual maturation in infants: a disorder of figure-ground separation?

    PubMed

    Harris, C M; Kriss, A; Shawkat, F; Taylor, D; Russell-Eggitt, I

    1996-01-01

    Delayed visual maturation (DVM) is characterised by visual unresponsiveness in early infancy, which subsequently improves spontaneously to normal levels. We studied the optokinetic response and recorded pattern reversal VEPs in six infants with DVM (aged 2-4 months) when they were at the stage of complete visual unresponsiveness. Although no saccades or visual tracking with the eyes or head could be elicited to visual objects, a normal full-field rapid-buildup OKN response occurred when viewing binocularly or during monocular stimulation in the temporo-nasal direction of the viewing eye. Almost no monocular OKN could be elicited in the naso-temporal direction, which was significantly poorer than in normal age-matched infants. No OKN quick phases were missed, and there were no other signs of "ocular motor apraxia." VEPs were normal in amplitude and latency for age. It appears, therefore, that infants with DVM are delayed in orienting to local regions of the visual field, but can respond to full-field motion. The presence of normal OKN quick-phases and slow-phases suggests normal brain stem function, and the presence of normal pattern VEPs suggests a normal retino-geniculo-striate pathway. These oculomotor and electrophysiological findings suggest delayed development of extra-striate cortical structures, possibly involving either an abnormality in figure-ground segregation or in attentional pathways.

  7. Interocular asymmetry of the visual field defects in newly diagnosed normal-tension glaucoma, primary open-angle glaucoma, and chronic angle-closure glaucoma.

    PubMed

    Huang, Ping; Shi, Yan; Wang, Xin; Liu, Mugen; Zhang, Chun

    2014-09-01

    To compare the interocular asymmetry of visual field loss in newly diagnosed normal-tension glaucoma (NTG), primary open-angle glaucoma (POAG), and chronic angle-closure glaucoma (CACG) patients. Visual field results of 117 newly diagnosed, treatment-naive glaucoma patients (42 NTG, 38 POAG, and 37 CACG) were studied retrospectively. The following 3 visual field defect parameters were used to evaluate the interocular asymmetry: (1) global indices; (2) local mean deviations (MDs) of 6 predefined visual field areas; and (3) stage designated by glaucoma staging system 2. The differences in the above parameters between the trial eye (the eye with the greater MD) and the fellow eye in each subject were defined as interocular asymmetry scores. Interocular asymmetry of visual field loss was present in all 3 groups (all P<0.05). The CACG group had a greater total MD interocular asymmetry score than the NTG and POAG groups (among groups, P=0.008; NTG vs. CACG, P=0.005; POAG vs. CACG, P=0.009). CACG also presented with significantly higher local MD interocular asymmetry scores at central, inferior, and temporal areas compared with those of the POAG group, and at the inferior area compared with that of the NTG group. No significant difference in either total or local MDs was detected between NTG and POAG (all P>0.05). Interocular asymmetry scores of glaucoma staging system 2 showed no significant difference among the 3 groups (P=0.068). The CACG, POAG, and NTG groups all presented with interocular asymmetric visual field loss at the time of diagnosis. CACG had greater interocular asymmetry than NTG and POAG. No significant interocular asymmetry difference was observed between NTG and POAG.

  8. Game theoretic approach for cooperative feature extraction in camera networks

    NASA Astrophysics Data System (ADS)

    Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco

    2016-07-01

    Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.
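
    As a toy illustration of a Nash bargaining solution in this setting (not the paper's actual formulation), the sketch below lets two cameras split an overlapping field of view so that the product of their energy savings over the no-cooperation disagreement point is maximized; the linear energy costs and grid search are assumptions made for illustration.

    ```python
    # Toy Nash bargaining between two cameras sharing an overlapping field of
    # view: choose the fraction f of the overlap processed by camera 1 (camera 2
    # processes the rest) so that the product of energy savings relative to the
    # disagreement point (each camera processes the full overlap alone) is
    # maximized. Utilities are illustrative, not the paper's model.

    def energy(own_share, cost_per_unit):
        return cost_per_unit * own_share

    def nash_bargaining(cost1=1.0, cost2=1.5, steps=1000):
        # Disagreement point: no cooperation, each processes the whole overlap.
        d1, d2 = -energy(1.0, cost1), -energy(1.0, cost2)
        best_f, best_product = None, -1.0
        for i in range(steps + 1):
            f = i / steps
            u1 = -energy(f, cost1)        # camera 1 processes fraction f
            u2 = -energy(1.0 - f, cost2)  # camera 2 processes the rest
            gain1, gain2 = u1 - d1, u2 - d2
            if gain1 >= 0 and gain2 >= 0 and gain1 * gain2 > best_product:
                best_product, best_f = gain1 * gain2, f
        return best_f

    print("camera 1 share of the overlap:", nash_bargaining())
    ```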

  9. Cortical activation during Braille reading is influenced by early visual experience in subjects with severe visual disability: a correlational fMRI study.

    PubMed

    Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F

    2001-11-01

    Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading. Differences in magnitude and expanse of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, i.e., approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language. Moreover, they suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation of activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.

  10. Temporal Ventriloquism Reveals Intact Audiovisual Temporal Integration in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-02-01

    We have shown previously that amblyopia involves impaired detection of asynchrony between auditory and visual events. To distinguish whether this impairment represents a defect in temporal integration or nonintegrative multisensory processing (e.g., cross-modal matching), we used the temporal ventriloquism effect in which visual temporal order judgment (TOJ) is normally enhanced by a lagging auditory click. Participants with amblyopia (n = 9) and normally sighted controls (n = 9) performed a visual TOJ task. Pairs of clicks accompanied the two lights such that the first click preceded the first light, or second click lagged the second light by 100, 200, or 450 ms. Baseline audiovisual synchrony and visual-only conditions also were tested. Within both groups, just noticeable differences for the visual TOJ task were significantly reduced compared with baseline in the 100- and 200-ms click lag conditions. Within the amblyopia group, poorer stereo acuity and poorer visual acuity in the amblyopic eye were significantly associated with greater enhancement in visual TOJ performance in the 200-ms click lag condition. Audiovisual temporal integration is intact in amblyopia, as indicated by perceptual enhancement in the temporal ventriloquism effect. Furthermore, poorer stereo acuity and poorer visual acuity in the amblyopic eye are associated with a widened temporal binding window for the effect. These findings suggest that previously reported abnormalities in audiovisual multisensory processing may result from impaired cross-modal matching rather than a diminished capacity for temporal audiovisual integration.

  11. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into another model layer trained with the same learning rule, the second-layer output neurons become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
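
    For readers unfamiliar with this family of models, the sketch below shows a generic sparse-coding style learning loop (soft-thresholded activities, Hebbian-like reconstruction updates, unit-norm weights); it is not the paper's temporally and population-sparse learning rule, and the random patches merely stand in for whitened natural-image inputs.

    ```python
    import numpy as np

    # Generic sparse-coding sketch, included only to illustrate the family of
    # models discussed here; it is NOT the paper's learning rule. Random patches
    # stand in for whitened natural-image patches.

    rng = np.random.default_rng(1)
    n_pix, n_units, n_patches = 64, 32, 500
    patches = rng.standard_normal((n_patches, n_pix))   # stand-in input patches
    W = rng.standard_normal((n_units, n_pix)) * 0.1     # feedforward weights

    lr, sparsity = 0.01, 0.1
    for epoch in range(20):
        for x in patches:
            a = W @ x                                               # linear activities
            a = np.sign(a) * np.maximum(np.abs(a) - sparsity, 0.0)  # soft threshold -> sparse
            x_hat = W.T @ a                                         # linear reconstruction
            err = x - x_hat
            W += lr * np.outer(a, err)                              # Hebbian-like update
            W /= np.linalg.norm(W, axis=1, keepdims=True)           # keep rows unit norm

    print("mean active units per patch:",
          float(np.mean([(np.abs(W @ x) > sparsity).sum() for x in patches])))
    ```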

  12. Human rather than ape-like orbital morphology allows much greater lateral visual field expansion with eye abduction

    PubMed Central

    Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric

    2015-01-01

    While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625

  13. Inhibition to excitation ratio regulates visual system responses and behavior in vivo.

    PubMed

    Shen, Wanhua; McKeown, Caroline R; Demas, James A; Cline, Hollis T

    2011-11-01

    The balance of inhibitory to excitatory (I/E) synaptic inputs is thought to control information processing and behavioral output of the central nervous system. We sought to test the effects of the decreased or increased I/E ratio on visual circuit function and visually guided behavior in Xenopus tadpoles. We selectively decreased inhibitory synaptic transmission in optic tectal neurons by knocking down the γ2 subunit of the GABA(A) receptors (GABA(A)R) using antisense morpholino oligonucleotides or by expressing a peptide corresponding to an intracellular loop of the γ2 subunit, called ICL, which interferes with anchoring GABA(A)R at synapses. Recordings of miniature inhibitory postsynaptic currents (mIPSCs) and miniature excitatory PSCs (mEPSCs) showed that these treatments decreased the frequency of mIPSCs compared with control tectal neurons without affecting mEPSC frequency, resulting in an ∼50% decrease in the ratio of I/E synaptic input. ICL expression and γ2-subunit knockdown also decreased the ratio of optic nerve-evoked synaptic I/E responses. We recorded visually evoked responses from optic tectal neurons, in which the synaptic I/E ratio was decreased. Decreasing the synaptic I/E ratio in tectal neurons increased the variance of first spike latency in response to full-field visual stimulation, increased recurrent activity in the tectal circuit, enlarged spatial receptive fields, and lengthened the temporal integration window. We used the benzodiazepine, diazepam (DZ), to increase inhibitory synaptic activity. DZ increased optic nerve-evoked inhibitory transmission but did not affect evoked excitatory currents, resulting in an increase in the I/E ratio of ∼30%. Increasing the I/E ratio with DZ decreased the variance of first spike latency, decreased spatial receptive field size, and lengthened temporal receptive fields. Sequential recordings of spikes and excitatory and inhibitory synaptic inputs to the same visual stimuli demonstrated that decreasing or increasing the I/E ratio disrupted input/output relations. We assessed the effect of an altered I/E ratio on a visually guided behavior that requires the optic tectum. Increasing and decreasing I/E in tectal neurons blocked the tectally mediated visual avoidance behavior. Because ICL expression, γ2-subunit knockdown, and DZ did not directly affect excitatory synaptic transmission, we interpret the results of our study as evidence that partially decreasing or increasing the ratio of I/E disrupts several measures of visual system information processing and visually guided behavior in an intact vertebrate.

  14. Temporally Scalable Visual SLAM using a Reduced Pose Graph

    DTIC Science & Technology

    2012-05-25

    MIT-CSAIL-TR-2012-013, MIT CSAIL, Cambridge, MA, USA (www.csail.mit.edu), May 25, 2012. We demonstrate a system for temporally scalable visual SLAM using a reduced pose graph representation. Unlike previous visual SLAM approaches that use…

  15. Haltere mechanosensory influence on tethered flight behavior in Drosophila.

    PubMed

    Mureli, Shwetha; Fox, Jessica L

    2015-08-01

    In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.

  16. Visually defining and querying consistent multi-granular clinical temporal abstractions.

    PubMed

    Combi, Carlo; Oliboni, Barbara

    2012-02-01

    The main goal of this work is to propose a framework for the visual specification and query of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise and high-level description of temporal raw data, and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time and, thus, temporal clinical information at different levels of detail, according to the requirements coming from the represented clinical domain. The visual representation of temporal information has been studied for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and could benefit from visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated on clinical users for the specification of temporal clinical abstractions. Visual approaches to boolean queries have been explored in recent years and have confirmed that visual support for the specification of complex boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm, underlying the visual language, allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on the component abstractions. Moreover, we propose a visual query language where different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to define temporal abstractions simply by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the designed and implemented tool by point-and-click selections, and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise by visually specifying through the interface a scenario described only in natural language. The exercises were done by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the metaphors (54.4%--striped wall, 73.3%--plastered wall, 61%--brick wall, and 61%--no wall), but post hoc statistical analysis of the means confirmed that the differences were not statistically significant. The results of the user-satisfaction questionnaire on the granularity-related metaphors confirmed that none of them was clearly preferred.
The evaluation of the proposed logical notation consisted of two parts: (i) solving five interpretation exercises, each consisting of a screenshot representing a possible scenario and three candidate interpretations, of which only one was correct, and (ii) solving five exercises by visually defining through the interface a scenario described only in natural language. The exercises were of increasing difficulty. The evaluation involved a total of 31 subjects. The results of this evaluation phase confirmed the soundness of the proposed solution, even in comparison with a well-known proposal based on a tabular query form (the only significant difference being that our proposal requires more training time: 21 min versus 14 min). In this work we have considered the issue of visually composing and querying temporal clinical patient data. In this context we have proposed a visual framework for the specification of consistent temporal abstractions with different granularities and for the visual composition of different temporal abstractions to build (possibly) complex queries on clinical databases. A new algorithm has been proposed to check the consistency of the specified granular abstractions. The evaluation of the proposed metaphors and interfaces, and the comparison of the visual query language with a well-known visual method for boolean queries, confirmed the soundness of the overall system; moreover, pros and cons and possible improvements emerged from the comparison of the different visual metaphors and solutions. Copyright © 2011 Elsevier B.V. All rights reserved.
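
    A minimal sketch of the general idea of multi-granular temporal abstractions composed with boolean connectives is given below; the data structures, the day-based granularity conversion, and the consistency check are hypothetical illustrations and do not reproduce the paper's algorithm or its wall metaphors.

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch: temporal abstractions as labelled intervals at a
    # chosen granularity, plus a simple check that contradictory abstractions
    # do not overlap in time. Not the paper's algorithm or metaphors.

    GRANULARITY_DAYS = {"day": 1, "week": 7, "month": 30}

    @dataclass(frozen=True)
    class Abstraction:
        label: str          # e.g. "high blood pressure"
        start: int          # interval start, in units of the granularity
        end: int            # interval end (inclusive)
        granularity: str    # "day", "week" or "month"

        def to_days(self):
            unit = GRANULARITY_DAYS[self.granularity]
            return self.start * unit, (self.end + 1) * unit - 1

    def overlaps(a, b):
        a0, a1 = a.to_days()
        b0, b1 = b.to_days()
        return a0 <= b1 and b0 <= a1

    def consistent(abstractions, contradictory_pairs):
        """Reject a composition if two contradictory labels overlap in time."""
        for a in abstractions:
            for b in abstractions:
                if (a.label, b.label) in contradictory_pairs and overlaps(a, b):
                    return False
        return True

    hi_bp = Abstraction("high blood pressure", start=2, end=3, granularity="week")
    lo_bp = Abstraction("low blood pressure", start=15, end=20, granularity="day")
    print(consistent([hi_bp, lo_bp],
                     {("high blood pressure", "low blood pressure")}))
    ```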

  17. Spectral Signatures of Feedforward and Recurrent Circuitry in Monkey Area MT.

    PubMed

    Solomon, Selina S; Morley, John W; Solomon, Samuel G

    2017-05-01

    Recordings of local field potential (LFP) in the visual cortex can show rhythmic activity at gamma frequencies (30-100 Hz). While the gamma rhythms in the primary visual cortex have been well studied, the structural and functional characteristics of gamma rhythms in extrastriate visual cortex are less clear. Here, we studied the spatial distribution and functional specificity of gamma rhythms in extrastriate middle temporal (MT) area of visual cortex in marmoset monkeys. We found that moving gratings induced narrowband gamma rhythms across cortical layers that were coherent across much of area MT. Moving dot fields instead induced a broadband increase in LFP in middle and upper layers, with weaker narrowband gamma rhythms in deeper layers. The stimulus dependence of LFP response in middle and upper layers of area MT appears to reflect the presence (gratings) or absence (dot fields and other textures) of strongly oriented contours. Our results suggest that gamma rhythms in these layers are propagated from earlier visual cortex, while those in the deeper layers may emerge in area MT. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  19. Magnetic Stimulation Studies of Foveal Representation

    ERIC Educational Resources Information Center

    Lavidor, Michal; Walsh, Vincent

    2004-01-01

    The right and left visual fields each project to the contralateral cerebral hemispheres, but the extent of the functional overlap of the two hemifields along the vertical meridian is still under debate. After presenting the spatial, temporal, and functional specifications of Transcranial Magnetic Stimulation (TMS), we show that TMS is particularly…

  1. ESTIMATING NITROGEN AND TIDAL EXCHANGE IN A NORTH PACIFIC ESTUARY WITH EPA'S VISUAL PLUMES PDSW MODEL

    EPA Science Inventory

    Accurate assessments of nutrient levels in coastal waters are required to determine the nutrient effects of increasing population pressure on coastal ecosystems. To accomplish this goal, in-field data with sufficient temporal resolution are required to define nutrient sources and...

  2. Audio-visual temporal perception in children with restored hearing.

    PubMed

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.

  3. Wide-field motion tuning in nocturnal hawkmoths

    PubMed Central

    Theobald, Jamie C.; Warrant, Eric J.; O'Carroll, David C.

    2010-01-01

    Nocturnal hawkmoths are known for impressive visually guided behaviours in dim light, such as hovering while feeding from nectar-bearing flowers. This requires tight visual feedback to estimate and counter relative motion. Discrimination of low velocities, as required for stable hovering flight, is fundamentally limited by spatial resolution, yet in the evolution of eyes for nocturnal vision, maintenance of high spatial acuity compromises absolute sensitivity. To investigate these trade-offs, we compared responses of wide-field motion-sensitive neurons in three species of hawkmoth: Manduca sexta (a crepuscular hoverer), Deilephila elpenor (a fully nocturnal hoverer) and Acherontia atropos (a fully nocturnal hawkmoth that does not hover as it feeds uniquely from honey in bees' nests). We show that despite smaller eyes, the motion pathway of D. elpenor is tuned to higher spatial frequencies and lower temporal frequencies than A. atropos, consistent with D. elpenor's need to detect low velocities for hovering. Acherontia atropos, however, presumably evolved low-light sensitivity without sacrificing temporal acuity. Manduca sexta, active at higher light levels, is tuned to the highest spatial frequencies of the three and temporal frequencies comparable with A. atropos. This yields similar tuning to low velocities as in D. elpenor, but with the advantage of shorter neural delays in processing motion. PMID:19906663

  4. Optimization of neural retinal visual motor strategies in recovery of visual acuity following acute laser-induced macula injury

    NASA Astrophysics Data System (ADS)

    Zwick, Harry; Ness, James W.; Loveday, J.; Molchany, Jerome W.; Stuck, Bruce E.

    1997-05-01

    Laser-induced damage to the retina may produce immediate and serious loss of visual acuity as well as subsequent recovery of visual acuity over a 1- to 6-month post-exposure period. While acuity may recover, full utilization of the foveal region may not return. In one patient, a superior/temporal preferred retinal location (PRL) was apparent, while a second patient demonstrated significant foveal involvement and contrast sensitivity more reflective of foveal than parafoveal involvement. These conditions of injury were simulated by using an artificial scotoma technique which optically stabilized a 5-degree opacity in the center of the visual field. The transmission of spatially degraded target information in the scotoma was 0 percent, 5 percent and 95 percent. Contrast sensitivity for the 0 percent and 5 percent transmission scotomas showed broad spatial frequency suppression, as opposed to a bipartite contrast sensitivity function with a narrow sensitivity loss at 3 cycles/degree for the 95 percent transmission scotoma. A PRL shift to superior temporal retina with a concomitant change in accommodation was noted as target resolution became more demanding. These findings suggest that restoration of visual acuity in human laser accidents may depend upon the functionality of complex retinal and cortical adaptive mechanisms.

  5. Associative-memory representations emerge as shared spatial patterns of theta activity spanning the primate temporal cortex

    PubMed Central

    Nakahara, Kiyoshi; Adachi, Ken; Kawasaki, Keisuke; Matsuo, Takeshi; Sawahata, Hirohito; Majima, Kei; Takeda, Masaki; Sugiyama, Sayaka; Nakata, Ryota; Iijima, Atsuhiko; Tanigawa, Hisashi; Suzuki, Takafumi; Kamitani, Yukiyasu; Hasegawa, Isao

    2016-01-01

    Highly localized neuronal spikes in primate temporal cortex can encode associative memory; however, whether memory formation involves area-wide reorganization of ensemble activity, which often accompanies rhythmicity, or just local microcircuit-level plasticity, remains elusive. Using high-density electrocorticography, we capture local-field potentials spanning the monkey temporal lobes, and show that the visual pair-association (PA) memory is encoded in spatial patterns of theta activity in areas TE, 36, and, partially, in the parahippocampal cortex, but not in the entorhinal cortex. The theta patterns elicited by learned paired associates are distinct between pairs, but similar within pairs. This pattern similarity, emerging through novel PA learning, allows a machine-learning decoder trained on theta patterns elicited by a particular visual item to correctly predict the identity of those elicited by its paired associate. Our results suggest that the formation and sharing of widespread cortical theta patterns via learning-induced reorganization are involved in the mechanisms of associative memory representation. PMID:27282247

  6. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts.

    PubMed

    Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas

    2015-12-01

    We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Visual receptive field properties of cells in the optic tectum of the archer fish.

    PubMed

    Ben-Tov, Mor; Kopilevich, Ivgeny; Donchin, Opher; Ben-Shahar, Ohad; Giladi, Chen; Segev, Ronen

    2013-08-01

    The archer fish is well known for its extreme visual behavior in shooting water jets at prey hanging on vegetation above water. This fish is a promising model in the study of visual system function because it can be trained to respond to artificial targets and thus to provide valuable psychophysical data. Although much behavioral data have indeed been collected over the past two decades, little is known about the functional organization of the main visual area supporting this visual behavior, namely, the fish optic tectum. In this article we focus on a fundamental aspect of this functional organization and provide a detailed analysis of receptive field properties of cells in the archer fish optic tectum. Using extracellular measurements to record activities of single cells, we first measure their retinotectal mapping. We then determine their receptive field properties such as size, selectivity for stimulus direction and orientation, tuning for spatial frequency, and tuning for temporal frequency. Finally, on the basis of all these measurements, we demonstrate that optic tectum cells can be classified into three categories: orientation-tuned cells, direction-tuned cells, and direction-agnostic cells. Our results provide an essential basis for future investigations of information processing in the archer fish visual system.

  8. Gamma-oscillations modulated by picture naming and word reading: Intracranial recording in epileptic patients

    PubMed Central

    Wu, Helen C.; Nagasawa, Tetsuro; Brown, Erik C.; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi

    2011-01-01

    Objective We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. Methods We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Results Both tasks commonly elicited gamma-augmentation (maximally at 80–100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Conclusions Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. Significance The present study increases our understanding of the visual-language pathways. PMID:21498109

  9. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of “Green” Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex

    PubMed Central

    Rauscher, Franziska G.; Plant, Gordon T.; James-Galton, Merle; Barbur, John L.

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale d’Eclairage, (x-y), chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show symmetric increase in thresholds towards the long wavelength (“red”) and middle wavelength (“green”) regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient’s results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both “red/green” and “yellow/blue” directions in colour space, the subject’s lower left quadrant showed a marked asymmetry in “red/green” thresholds with the greatest loss of sensitivity towards the “green” region of the spectrum locus. This spatially localized asymmetric loss of “green” but not “red” sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage of neural substrates in the visual cortex that process colour information, but are spectrally non-opponent. PMID:27956924

  10. Cross-Modal and Intra-Modal Characteristics of Visual Function and Speech Perception Performance in Postlingually Deafened, Cochlear Implant Users

    PubMed Central

    Kim, Min-Beom; Shim, Hyun-Yong; Jin, Sun Hwa; Kang, Soojin; Woo, Jihwan; Han, Jong Chul; Lee, Ji Young; Kim, Martha; Cho, Yang-Sun

    2016-01-01

    Evidence of visual-auditory cross-modal plasticity in deaf individuals has been widely reported. Superior visual abilities of deaf individuals have been shown to result in enhanced reactivity to visual events and/or enhanced peripheral spatial attention. The goal of this study was to investigate the association between visual-auditory cross-modal plasticity and speech perception in post-lingually deafened, adult cochlear implant (CI) users. Post-lingually deafened adults with CIs (N = 14) and a group of normal hearing, adult controls (N = 12) participated in this study. The CI participants were divided into a good performer group (good CI, N = 7) and a poor performer group (poor CI, N = 7) based on word recognition scores. Visual evoked potentials (VEP) were recorded from the temporal and occipital cortex to assess reactivity. Visual field (VF) testing was used to assess spatial attention and Goldmann perimetry measures were analyzed to identify differences across groups in the VF. The association of the amplitude of the P1 VEP response over the right temporal or occipital cortex among three groups (control, good CI, poor CI) was analyzed. In addition, the association between VF by different stimuli and word perception score was evaluated. The P1 VEP amplitude recorded from the right temporal cortex was larger in the group of poorly performing CI users than the group of good performers. The P1 amplitude recorded from electrodes near the occipital cortex was smaller for the poor performing group. P1 VEP amplitude in right temporal lobe was negatively correlated with speech perception outcomes for the CI participants (r = -0.736, P = 0.003). However, P1 VEP amplitude measures recorded from near the occipital cortex had a positive correlation with speech perception outcome in the CI participants (r = 0.775, P = 0.001). In VF analysis, CI users showed narrowed central VF (VF to low intensity stimuli). However, their far peripheral VF (VF to high intensity stimuli) was not different from the controls. In addition, the extent of their central VF was positively correlated with speech perception outcome (r = 0.669, P = 0.009). Persistent visual activation in right temporal cortex even after CI causes negative effect on outcome in post-lingual deaf adults. We interpret these results to suggest that insufficient intra-modal (visual) compensation by the occipital cortex may cause negative effects on outcome. Based on our results, it appears that a narrowed central VF could help identify CI users with poor outcomes with their device. PMID:26848755

  11. [A rare cause of optic neuropathy: Cassava].

    PubMed

    Zeboulon, P; Vignal-Clermont, C; Baudouin, C; Labbé, A

    2016-06-01

    Cassava root is a staple food for almost 500 million people worldwide. Excessive consumption of it is a rare cause of optic neuropathy. Ten patients diagnosed with cassava root related optic neuropathy were included in this retrospective study. Diagnostic criteria were a bilateral optic neuropathy preceded by significant cassava root consumption. Differential diagnoses were excluded through a neuro-ophthalmic examination, blood tests and a brain MRI. All patients had visual field examination and OCT retinal nerve fiber layer (RNFL) analysis as well as an evaluation of their cassava consumption. All patients had a bilateral optic nerve head atrophy or pallor predominantly located into the temporal sector. Visual field defects consisted of a central or cecocentral scotoma for all patients. RNFL showed lower values only in the temporal sector. Mean duration of cassava consumption prior to the appearance of visual symptoms was 22.7±11.2 years with a mean of 2.57±0.53 cassava-based meals per week. Cassava related optic neuropathy is possibly due to its high cyanide content and enabled by a specific amino-acid deficiency. Cassava root chronic consumption is a rare, underappreciated cause of optic neuropathy and its exact mechanism is still uncertain. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  12. Immersive Earth Science: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.; Ramirez-Linan, R.

    2017-12-01

    Utilizing next generation technology, Navteca's exploration of 3D and volumetric temporal data in Virtual Reality (VR) takes advantage of immersive user experiences where stakeholders are literally inside the data. No longer restricted by the edges of a screen, VR provides an innovative way of viewing spatially distributed 2D and 3D data that leverages a 360 field of view and positional-tracking input, allowing users to see and experience data differently. These concepts are relevant to many sectors, industries, and fields of study, as real-time collaboration in VR can enhance understanding and mission with VR visualizations that display temporally-aware 3D, meteorological, and other volumetric datasets. The ability to view data that is traditionally "difficult" to visualize, such as subsurface features or air columns, is a particularly compelling use of the technology. Various development iterations have resulted in Navteca's proof of concept that imports and renders volumetric point-cloud data in the virtual reality environment by interfacing PC-based VR hardware to a back-end server and popular GIS software. The integration of the geo-located data in VR and subsequent display of changeable basemaps, overlaid datasets, and the ability to zoom, navigate, and select specific areas show the potential for immersive VR to revolutionize the way Earth data is viewed, analyzed, and communicated.

  13. Ocular findings in MELAS syndrome – a case report.

    PubMed

    Modrzejewska, Monika; Chrzanowska, Martyna; Modrzejewska, Anna; Romanowska, Hanna; Ostrowska, Iwona; Giżewska, Maria

    We present a case of a child with MELAS syndrome (mitochondrial encephalomyopathy with lactic acidosis and stroke-like episodes), discussing clinical manifestation, ocular findings and diagnostic challenges. The predominant ocular symptom was transient complete visual loss, while the predominant ocular sign was a visual field defect. The diagnosis was based on clinical manifestation, laboratory tests, brain scans and genetic testing, which confirmed the pathognomonic m.3243A>G mutation in the MT-TL1 gene encoding the mitochondrial tRNA for leucine. Ocular examination demonstrated decreased visual acuity (with bilateral best corrected visual acuity of 0.1). Periodic, transient visual loss and visual field defects were clinically predominant. Specialist investigations were carried out, which demonstrated homonymous hemianopia (kinetic perimetry) and bilateral partial optic nerve atrophy (RetCam). Funduscopy and the electrophysiological mfERG study did not confirm features of retinitis pigmentosa. The brain scans revealed numerous small cortical ischemic lesions within the frontal, parietal and temporal lobes, post-stroke focal areas within the occipital lobes and diffuse calcifications of the basal ganglia. During several years of follow-up, visual field defects showed progressive concentric narrowing. The patient received long-term treatment with arginine, coenzyme Q and vitamin D, both oral and intravenous, but no beneficial effect on the ophthalmic condition was observed. As is the case in severe MELAS syndrome, the course of the disease was fatal and the patient died at the age of 14.

  14. Evaluating Glaucomatous Retinal Nerve Fiber Damage by GDx VCC Polarimetry in Taiwan Chinese Population

    PubMed Central

    Chen, Hsin-Yi; Huang, Mei-Ling; Huang, Wei-Cheng

    2010-01-01

    Purpose To study the capability of scanning laser polarimetry with variable corneal compensation (GDx VCC) to detect differences in retinal nerve fiber layer thickness between normal and glaucomatous eyes in a Taiwan Chinese population. Methods This study included 44 normal eyes and 107 glaucomatous eyes. The glaucomatous eyes were divided into three subgroups on the basis of their visual field defects (early, moderate, severe). Each subject underwent a GDx-VCC exam and visual field testing. The area under the receiver-operating characteristic curve (AROC) of each relevant parameter was used to differentiate normal eyes from each glaucoma subgroup, respectively. The correlation between visual field index and each parameter was evaluated for the eyes in the glaucoma group. Results For normal vs. early glaucoma, the parameter with the best AROC was the nerve fiber indicator (NFI) (0.942). For normal vs. moderate glaucoma, the parameter showing the best AROC was NFI (0.985). For normal vs. severe glaucoma, the parameter that had the best AROC was NFI (1.000). For early vs. moderate glaucoma, the parameter with the best AROC was NFI (0.732). For moderate vs. severe glaucoma, the parameter showing the best AROC was the temporal-superior-nasal-inferior-temporal average (0.652). For early vs. severe glaucoma, the parameter with the best AROC was NFI (0.852). Conclusions GDx-VCC-measured parameters may serve as a useful tool to distinguish normal from glaucomatous eyes; in particular, NFI turned out to be the best discriminating parameter.

  15. SPECTRAL DOMAIN VERSUS SWEPT SOURCE OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY OF THE RETINAL CAPILLARY PLEXUSES IN SICKLE CELL MACULOPATHY.

    PubMed

    Jung, Jesse J; Chen, Michael H; Frambach, Caroline R; Rofagha, Soraya; Lee, Scott S

    2018-01-01

    To compare the spectral domain and swept source optical coherence tomography angiography findings in two cases of sickle cell maculopathy. A 53-year-old man and a 24-year-old man both with sickle cell disease (hemoglobin SS) presented with no visual complaints; Humphrey visual field testing demonstrated asymptomatic paracentral scotomas that extended nasally in the involved eyes. Clinical examination and multimodal imaging including spectral domain and swept source optical coherence tomography, and spectral domain optical coherence tomography angiography and swept source optical coherence tomography angiography (Carl Zeiss Meditec Inc, Dublin, CA) were performed. Fundus examination of both patients revealed subtle thinning of the macula. En-face swept source optical coherence tomography confirmed the extent of the thinning correlating with the functional paracentral scotomas on Humphrey visual field. Swept source optical coherence tomography B-scan revealed multiple confluent areas of inner nuclear thinning and significant temporal retinal atrophy. En-face 6 × 6-mm spectral domain optical coherence tomography angiography of the macula demonstrated greater loss of the deep capillary plexus compared with the superficial capillary plexus. Swept source optical coherence tomography angiography 12 × 12-mm imaging captured the same macular findings and loss of both plexuses temporally outside the macula. In these two cases of sickle cell maculopathy, deep capillary plexus ischemia is more extensive within the macula, whereas both the superficial capillary plexus and deep capillary plexus are involved outside the macula likely due to the greater oxygen demands and watershed nature of these areas. Swept source optical coherence tomography angiography clearly demonstrates the angiographic extent of the disease correlating with the Humphrey visual field scotomas and confluent areas of inner nuclear atrophy.

  16. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    PubMed

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.
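
    The contrast between the two accounts can be illustrated with a toy calculation: under decay, accuracy depends only on the retention interval, whereas under temporal distinctiveness it depends on the retention interval relative to the gap separating trials. The functional forms and constants below are illustrative assumptions, not fits to the reported data.

    ```python
    import math

    # Toy contrast between a decay account and a temporal-distinctiveness account
    # of forgetting over a retention interval. Functional forms and constants are
    # illustrative assumptions, not models fitted to the reported experiments.

    def decay_accuracy(retention_s, rate=0.05, floor=0.5, ceiling=1.0):
        """Decay: performance depends only on how long the memory is held."""
        return floor + (ceiling - floor) * math.exp(-rate * retention_s)

    def distinctiveness_accuracy(retention_s, gap_s, floor=0.5, ceiling=1.0):
        """Distinctiveness: performance depends on how isolated the current trace
        is from older traces, indexed here by gap / (gap + retention)."""
        ratio = gap_s / (gap_s + retention_s)
        return floor + (ceiling - floor) * ratio

    for retention in (2, 6, 10):
        for gap in (1, 10):
            print(f"retention {retention:>2}s, gap {gap:>2}s: "
                  f"decay {decay_accuracy(retention):.2f}, "
                  f"distinctiveness {distinctiveness_accuracy(retention, gap):.2f}")
    ```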

  17. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
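
    A minimal sketch of the representational-similarity logic commonly used for this kind of brain-model comparison (the study's exact pipeline is not reproduced here): build a representational dissimilarity matrix (RDM) from the brain measurements and from a DNN layer, then correlate their condensed upper triangles. The arrays brain_resp and dnn_resp are hypothetical stimulus-by-feature matrices; numpy and scipy are assumed to be available.

        # Illustrative representational-similarity comparison (not the authors' exact pipeline).
        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        n_stimuli = 50
        brain_resp = rng.normal(size=(n_stimuli, 306))   # hypothetical MEG sensor pattern per stimulus
        dnn_resp = rng.normal(size=(n_stimuli, 4096))    # hypothetical DNN layer activation per stimulus

        # Representational dissimilarity matrices: 1 - Pearson correlation between stimulus
        # patterns, kept as condensed upper-triangle vectors.
        brain_rdm = pdist(brain_resp, metric="correlation")
        dnn_rdm = pdist(dnn_resp, metric="correlation")

        # Spearman correlation between the two RDMs quantifies representational correspondence.
        rho, p = spearmanr(brain_rdm, dnn_rdm)
        print(f"brain-DNN RDM correlation: rho={rho:.3f}, p={p:.3g}")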

  18. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.

  19. The grouping benefit in extinction: overcoming the temporal order bias.

    PubMed

    Rappaport, Sarah J; Riddoch, M Jane; Humphreys, Glyn W

    2011-01-01

    Grouping between contra- and ipsilesional stimuli can alleviate the lateralised bias in spatial extinction (Gilchrist, Humphreys, & Riddoch, 1996; Ward, Goodrich, & Driver, 1994). In the current study we demonstrate for the first time that perceptual grouping can also modulate the spatio-temporal biases in temporal order judgements, affecting the temporal as well as the spatial coding of stimuli. Perceived temporal order was assessed by presenting two coloured letter stimuli, one in each hemi-field, temporally segregated by a range of onset intervals. Items were either identical (grouping condition) or differed in both shape and colour (non-grouping condition). Observers were required to indicate which item appeared second. Patients with visual extinction had a bias against the contralesional item appearing first, but this was modulated by perceptual grouping. When both items were identical in shape and colour, the temporal bias against reporting the contralesional item was reduced. The results suggest that grouping can alter the coding of temporal relations between stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons.

    PubMed

    Zerlaut, Yann; Chemla, Sandrine; Chavane, Frederic; Destexhe, Alain

    2018-02-01

    Voltage-sensitive dye imaging (VSDi) has revealed fundamental properties of neocortical processing at macroscopic scales. Since for each pixel VSDi signals report the average membrane potential over hundreds of neurons, it seems natural to use a mean-field formalism to model such signals. Here, we present a mean-field model of networks of Adaptive Exponential (AdEx) integrate-and-fire neurons, with conductance-based synaptic interactions. We study a network of regular-spiking (RS) excitatory neurons and fast-spiking (FS) inhibitory neurons. We use a Master Equation formalism, together with a semi-analytic approach to the transfer function of AdEx neurons to describe the average dynamics of the coupled populations. We compare the predictions of this mean-field model to simulated networks of RS-FS cells, first at the level of the spontaneous activity of the network, which is well predicted by the analytical description. Second, we investigate the response of the network to time-varying external input, and show that the mean-field model predicts the response time course of the population. Finally, to model VSDi signals, we consider a one-dimensional ring model made of interconnected RS-FS mean-field units. We found that this model can reproduce the spatio-temporal patterns seen in VSDi of awake monkey visual cortex as a response to local and transient visual stimuli. Conversely, we show that the model allows one to infer physiological parameters from the experimentally-recorded spatio-temporal patterns.
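
    The following toy simulation illustrates, under strong simplifying assumptions, the idea of a one-dimensional ring of coupled excitatory-inhibitory mean-field units responding to a brief, spatially localized input. It uses a generic rate model rather than the paper's AdEx Master Equation formalism, and all parameter values are illustrative.

        # Toy 1-D ring of coupled excitatory/inhibitory rate units responding to a brief local input.
        # A schematic stand-in for the paper's AdEx mean-field ring model, not its actual equations.
        import numpy as np

        n, dt, T = 100, 0.5e-3, 0.4              # units on the ring, time step (s), duration (s)
        tau_e, tau_i = 5e-3, 5e-3                # population time constants (s)
        x = np.arange(n)
        d = np.minimum(np.abs(x[:, None] - x[None, :]), n - np.abs(x[:, None] - x[None, :]))
        W = np.exp(-(d / 5.0) ** 2)              # distance-dependent (Gaussian) lateral connectivity
        W /= W.sum(axis=1, keepdims=True)

        re = np.zeros(n)
        ri = np.zeros(n)
        rates = []
        for step in range(int(T / dt)):
            t = step * dt
            # Brief external drive centred on the middle of the ring between 100 and 120 ms.
            ext = 20.0 * np.exp(-((x - n // 2) / 3.0) ** 2) if 0.1 < t < 0.12 else 0.0
            drive_e = W @ re * 2.0 - ri * 1.5 + ext
            drive_i = W @ re * 2.5 - ri * 1.0 + ext
            re += dt / tau_e * (-re + np.maximum(drive_e, 0.0))
            ri += dt / tau_i * (-ri + np.maximum(drive_i, 0.0))
            rates.append(re.copy())

        rates = np.array(rates)                  # (time, ring position): a transient bump around the stimulated location
        print(rates.shape, rates.max())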

  1. Memory as Perception of the Past: Compressed Time in Mind and Brain.

    PubMed

    Howard, Marc W

    2018-02-01

    In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding

    PubMed Central

    Desantis, Andrea; Haggard, Patrick

    2016-01-01

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events. PMID:27982063

  3. How actions shape perception: learning action-outcome relations and predicting sensory outcomes promote audio-visual temporal binding.

    PubMed

    Desantis, Andrea; Haggard, Patrick

    2016-12-16

    To maintain a temporally-unified representation of audio and visual features of objects in our environment, the brain recalibrates audio-visual simultaneity. This process allows adjustment for both differences in time of transmission and time for processing of audio and visual signals. In four experiments, we show that the cognitive processes for controlling instrumental actions also have strong influence on audio-visual recalibration. Participants learned that right and left hand button-presses each produced a specific audio-visual stimulus. Following one action the audio preceded the visual stimulus, while for the other action audio lagged vision. In a subsequent test phase, left and right button-press generated either the same audio-visual stimulus as learned initially, or the pair associated with the other action. We observed recalibration of simultaneity only for previously-learned audio-visual outcomes. Thus, learning an action-outcome relation promotes temporal grouping of the audio and visual events within the outcome pair, contributing to the creation of a temporally unified multisensory object. This suggests that learning action-outcome relations and the prediction of perceptual outcomes can provide an integrative temporal structure for our experiences of external events.

  4. STRAD Wheel: Web-Based Library for Visualizing Temporal Data.

    PubMed

    Fernández-Prieto, Diana; Naranjo-Valero, Carol; Hernandez, Jose Tiberio; Hagen, Hans

    2017-01-01

    Recent advances in web development, including the introduction of HTML5, have opened a door for visualization researchers and developers to quickly access larger audiences worldwide. Open source libraries for the creation of interactive visualizations are becoming more specialized but also modular, which makes them easy to incorporate in domain-specific applications. In this context, the authors developed STRAD (Spatio-Temporal-Radar) Wheel, a web-based library that focuses on the visualization and interactive query of temporal data in a compact view with multiple temporal granularities. This article includes two application examples in urban planning to help illustrate the proposed visualization's use in practice.

  5. Lack of oblique astigmatism in the chicken eye.

    PubMed

    Maier, Felix M; Howland, Howard C; Ohlendorf, Arne; Wahl, Siegfried; Schaeffel, Frank

    2015-04-01

    Primate eyes display considerable oblique off-axis astigmatism which could provide information on the sign of defocus that is needed for emmetropization. The pattern of peripheral astigmatism is not known in the chicken eye, a common model of myopia. Peripheral astigmatism was mapped out over the horizontal visual field in three chickens, 43 days old, and in three near emmetropic human subjects, average age 34.7 years, using infrared photoretinoscopy. There were no differences in astigmatism between humans and chickens in the central visual field (chicks -0.35D, humans -0.65D, n.s.) but large differences in the periphery (i.e. astigmatism at 40° in the temporal visual field: humans -4.21D, chicks -0.63D, p<0.001, unpaired t-test). The lack of peripheral astigmatism in chicks was not due to differences in corneal shape. Perhaps related to their superior peripheral optics, we found that chickens also had excellent visual performance in the far periphery. Using an automated optokinetic nystagmus paradigm, no difference was observed in spatial visual performance with vision restricted to either the central 67° of the visual field or to the periphery beyond 67°. Accommodation was elicited by stimuli presented far out in the visual field. Transscleral images of single infrared LEDs showed no sign of peripheral astigmatism. The chick may be the first terrestrial vertebrate described to lack oblique astigmatism. Since corneal shape cannot account for the difference in astigmatism between humans and chicks, it must trace back to the design of the crystalline lens. The lack of peripheral astigmatism in chicks also excludes a role for it in emmetropization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Multiple adaptable mechanisms early in the primate visual pathway

    PubMed Central

    Dhruv, Neel T.; Tailby, Chris; Sokol, Sach H.; Lennie, Peter

    2011-01-01

    We describe experiments that isolate and characterize multiple adaptable mechanisms that influence responses of orientation-selective neurons in primary visual cortex (V1) of anesthetized macaque (Macaca fascicularis). The results suggest that three adaptable stages of machinery shape neural responses in V1: a broadly-tuned early stage and a spatio-temporally tuned later stage, both of which provide excitatory input, and a normalization pool that is also broadly tuned. The early stage and the normalization pool are revealed by adapting gratings that themselves fail to evoke a response from the neuron: either low temporal frequency gratings at the null orientation or gratings of any orientation drifting at high temporal frequencies. When effective, adapting stimuli that altered the sensitivity of these two mechanisms caused reductions of contrast gain and often brought about a paradoxical increase in response gain due to a relatively greater desensitization of the normalization pool. The tuned mechanism is desensitized only by stimuli well-matched to a neuron’s receptive field. We could thus infer desensitization of the tuned mechanism by comparing effects obtained with adapting gratings of preferred and null orientation modulated at low temporal frequencies. PMID:22016535

  7. [Describe and convince: visual rhetoric of cinematography in medicine].

    PubMed

    Panese, Francesco

    2009-01-01

    The tools of visualisation occupy a central place in medicine. Far from being mere accessories to the gaze, they literally constitute objects of medicine. Such empirical acknowledgement and epistemological position open a vast field of investigation: the visual technologies of medical knowledge. This article studies the development and transformation of medical objects which have made it possible to assess the role of temporality in the epistemology of medicine. It first examines the general problem of the relationships between cinema, the animated image and medicine and, second, the contribution of the German doctor Martin Weiser to medical cinematography as a method. Finally, a typology is sketched out organising the variety of visual technologies of movement from the perspective of the development of specific visual techniques in medicine.

  8. Predictive Factors for Visual Field Conversion: Comparison of Scanning Laser Polarimetry and Optical Coherence Tomography.

    PubMed

    Diekmann, Theresa; Schrems-Hoesl, Laura M; Mardin, Christian Y; Laemmer, Robert; Horn, Folkert K; Kruse, Friedrich E; Schrems, Wolfgang A

    2018-02-01

    The purpose of this study was to compare the ability of scanning laser polarimetry (SLP) and spectral-domain optical coherence tomography (SD-OCT) to predict future visual field conversion of subjects with ocular hypertension and early glaucoma. All patients were recruited from the Erlangen glaucoma registry and examined using standard automated perimetry, 24-hour intraocular pressure profile, and optic disc photography. Peripapillary retinal nerve fiber layer (RNFL) thickness measurements were obtained by SLP (GDx-VCC) and SD-OCT (Spectralis OCT). Positive and negative predictive values (PPV, NPV) were calculated for morphologic parameters of SLP and SD-OCT. Kaplan-Meier survival curves were plotted and log-rank tests were performed to compare the survival distributions. Contingency tables and Venn diagrams were calculated to compare the predictive ability. The study included 207 patients: 75 with ocular hypertension, 85 with early glaucoma, and 47 controls. Median follow-up was 4.5 years. A total of 29 patients (14.0%) developed visual field conversion during follow-up. SLP temporal-inferior RNFL [0.667; 95% confidence interval (CI), 0.281-0.935] and SD-OCT temporal-inferior RNFL (0.571; 95% CI, 0.317-0.802) achieved the highest PPV; nerve fiber indicator (0.923; 95% CI, 0.876-0.957) and SD-OCT mean (0.898; 95% CI, 0.847-0.937) achieved the highest NPV of all investigated parameters. The Kaplan-Meier curves confirmed significantly higher survival for subjects within normal limits of measurements of both devices (P<0.001). Venn diagrams tested with McNemar test statistics showed no significant difference for PPV (P=0.219) or NPV (P=0.678). Both GDx-VCC and SD-OCT demonstrate comparable results in predicting future visual field conversion, provided typical scans are obtained with GDx-VCC. In addition, the likelihood ratios suggest that a GDx-VCC nerve fiber indicator < 30 may be the most useful parameter to confirm future nonconversion. (http://www.ClinicalTrials.gov number, NCT00494923; Erlangen Glaucoma Registry).
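
    As a reminder of how the reported predictive values are defined, the sketch below computes PPV and NPV with Wilson 95% confidence intervals from a 2x2 conversion table. The counts are invented for illustration and are not the study's data.

        # Illustrative computation of positive/negative predictive values with Wilson 95% CIs
        # from a 2x2 conversion table; the counts below are made up, not the study's data.
        import math

        def wilson_ci(k, n, z=1.96):
            """Wilson score interval for a proportion k/n."""
            p = k / n
            denom = 1 + z**2 / n
            center = (p + z**2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return center - half, center + half

        # Hypothetical counts: rows = baseline test (outside / within normal limits),
        # columns = visual field conversion during follow-up (yes / no).
        tp, fp = 18, 9      # test positive: converted / did not convert
        fn, tn = 11, 169    # test negative: converted / did not convert

        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lo, hi = wilson_ci(tp, tp + fp)
        print(f"PPV = {ppv:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
        lo, hi = wilson_ci(tn, tn + fn)
        print(f"NPV = {npv:.3f}, 95% CI {lo:.3f}-{hi:.3f}")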

  9. On-chip visual perception of motion: a bio-inspired connectionist model on FPGA.

    PubMed

    Torres-Huitzil, César; Girau, Bernard; Castellanos-Sánchez, Claudio

    2005-01-01

    Visual motion provides useful information for understanding the dynamics of a scene, allowing intelligent systems to interact with their environment. Motion computation is usually constrained by real-time requirements that call for the design and implementation of specific hardware architectures. In this paper, the design of a hardware architecture for a bio-inspired neural model for motion estimation is presented. The motion estimation is based on a strongly localized bio-inspired connectionist model with a particular adaptation of spatio-temporal Gabor-like filtering. The architecture comprises three main modules that perform spatial, temporal, and excitatory-inhibitory connectionist processing. The biomimetic architecture is modeled, simulated and validated in VHDL. The synthesis results on a Field Programmable Gate Array (FPGA) device show the potential achievement of real-time performance within an affordable silicon area.
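
    The sketch below illustrates the kind of spatio-temporal Gabor-like filtering front end such models build on: a space-time-oriented quadrature filter pair responds much more strongly to a stimulus drifting in its preferred direction than in the opposite direction. It is a software illustration of the principle, not the FPGA architecture itself, and all parameters are illustrative.

        # Minimal spatio-temporal Gabor-like filter applied to a drifting 1-D stimulus.
        import numpy as np

        nx, nt = 64, 64
        x = np.linspace(-1, 1, nx)
        t = np.linspace(-1, 1, nt)
        X, T = np.meshgrid(x, t, indexing="ij")

        fx, ft = 4.0, 4.0                         # spatial and temporal frequencies (cycles per unit)
        envelope = np.exp(-(X**2 + T**2) / (2 * 0.25**2))
        # Filters oriented in space-time, hence direction selective; even/odd form a quadrature pair.
        gabor_even = envelope * np.cos(2 * np.pi * (fx * X + ft * T))
        gabor_odd = envelope * np.sin(2 * np.pi * (fx * X + ft * T))

        # Drifting sinusoid moving in the filter's preferred direction vs the opposite direction.
        stim_pref = np.cos(2 * np.pi * (fx * X + ft * T))
        stim_null = np.cos(2 * np.pi * (fx * X - ft * T))

        def energy(stim):
            # Quadrature responses combined into a phase-invariant motion energy.
            return np.sum(stim * gabor_even) ** 2 + np.sum(stim * gabor_odd) ** 2

        print("preferred direction energy:", energy(stim_pref))
        print("null direction energy:     ", energy(stim_null))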

  10. Detecting time-specific differences between temporal nonlinear curves: Analyzing data from the visual world paradigm

    PubMed Central

    Oleson, Jacob J; Cavanaugh, Joseph E; McMurray, Bob; Brown, Grant

    2015-01-01

    In multiple fields of study, time series measured at high frequencies are used to estimate population curves that describe the temporal evolution of some characteristic of interest. These curves are typically nonlinear, and the deviations of each series from the corresponding curve are highly autocorrelated. In this scenario, we propose a procedure to compare the response curves for different groups at specific points in time. The method involves fitting the curves, performing potentially hundreds of serially correlated tests, and appropriately adjusting the overall alpha level of the tests. Our motivating application comes from psycholinguistics and the visual world paradigm. We describe how the proposed technique can be adapted to compare fixation curves within subjects as well as between groups. Our results lead to conclusions beyond the scope of previous analyses. PMID:26400088
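
    A simplified sketch of the timepoint-wise comparison idea: simulate or fit smooth group curves, test the groups at every time point, and adjust for the many tests. The paper's procedure additionally accounts for the serial correlation among the tests; the toy version below uses a plain Bonferroni adjustment on synthetic fixation-proportion curves, and all values are illustrative.

        # Timepoint-wise group comparison on synthetic fixation curves with a Bonferroni adjustment.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(1)
        n_per_group, n_time = 20, 200
        time = np.linspace(0, 2, n_time)                       # seconds

        # Hypothetical smooth fixation curves: group B rises slightly earlier than group A.
        base_a = 1 / (1 + np.exp(-(time - 1.0) * 8))
        base_b = 1 / (1 + np.exp(-(time - 0.9) * 8))
        group_a = base_a + rng.normal(0, 0.08, size=(n_per_group, n_time))
        group_b = base_b + rng.normal(0, 0.08, size=(n_per_group, n_time))

        t_vals, p_vals = ttest_ind(group_a, group_b, axis=0)   # one test per time point
        alpha = 0.05
        sig_bonf = p_vals < alpha / n_time                     # conservative Bonferroni adjustment
        windows = time[sig_bonf]
        print("significant time points (s):",
              f"{windows.min():.2f}-{windows.max():.2f}" if windows.size else "none")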

  11. Extracting heading and temporal range from optic flow: Human performance issues

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Perrone, John A.; Stone, Leland; Banks, Martin S.; Crowell, James A.

    1993-01-01

    Pilots are able to extract information about their vehicle motion and environmental structure from dynamic transformations in the out-the-window scene. In this presentation, we focus on the information in the optic flow which specifies vehicle heading and distance to objects in the environment, scaled to a temporal metric. In particular, we are concerned with modeling how the human operators extract the necessary information, and what factors impact their ability to utilize the critical information. In general, the psychophysical data suggest that the human visual system is fairly robust to degradations in the visual display, e.g., reduced contrast and resolution or restricted field of view. However, extraneous motion flow, i.e., introduced by sensor rotation, greatly compromises human performance. The implications of these models and data for enhanced/synthetic vision systems are discussed.

  12. Preserving information in neural transmission.

    PubMed

    Sincich, Lawrence C; Horton, Jonathan C; Sharpee, Tatyana O

    2009-05-13

    Along most neural pathways, the spike trains transmitted from one neuron to the next are altered. In the process, neurons can either achieve a more efficient stimulus representation, or extract some biologically important stimulus parameter, or succeed at both. We recorded the inputs from single retinal ganglion cells and the outputs from connected lateral geniculate neurons in the macaque to examine how visual signals are relayed from retina to cortex. We found that geniculate neurons re-encoded multiple temporal stimulus features to yield output spikes that carried more information about stimuli than was available in each input spike. The coding transformation of some relay neurons occurred with no decrement in information rate, despite output spike rates that averaged half the input spike rates. This preservation of transmitted information was achieved by the short-term summation of inputs that geniculate neurons require to spike. A reduced model of the retinal and geniculate visual responses, based on two stimulus features and their associated nonlinearities, could account for >85% of the total information available in the spike trains and the preserved information transmission. These results apply to neurons operating on a single time-varying input, suggesting that synaptic temporal integration can alter the temporal receptive field properties to create a more efficient representation of visual signals in the thalamus than the retina.

  13. Do working memory-driven attention shifts speed up visual awareness?

    PubMed

    Pan, Yi; Cheng, Qiu-Ping

    2011-11-01

    Previous research has shown that content representations in working memory (WM) can bias attention in favor of matching stimuli in the scene. Using a visual prior-entry procedure, we here investigate whether such WM-driven attention shifts can speed up the conscious awareness of memory-matching relative to memory-mismatching stimuli. Participants were asked to hold a color cue in WM and to subsequently perform a temporal order judgment (TOJ) task by reporting either of two different-colored circles (presented to the left and right of fixation with a variable temporal interval) as having the first onset. One of the two TOJ circles could match the memory cue in color. We found that awareness of the temporal order of the circle onsets was not affected by the contents of WM, even when participants were explicitly informed that one of the TOJ circles would always match the WM contents. The null effect of WM on TOJs was not due to an inability of the memory-matching item to capture attention, since response times to the target in a follow-up experiment were improved when it appeared at the location of the memory-matching item. The present findings suggest that WM-driven attention shifts cannot accelerate phenomenal awareness of matching stimuli in the visual field.

  14. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
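
    The minimum norm approach mentioned above can be summarized as an L2-regularized linear inverse. The sketch below applies it to a random, hypothetical lead-field matrix and one simulated focal source; it is illustrative only and omits the noise whitening, depth weighting and other refinements used in practice.

        # Toy L2 minimum-norm inverse for source estimation: given a lead field L (sensors x sources)
        # and sensor data y, estimate source amplitudes. Values are random placeholders, not MEG data.
        import numpy as np

        rng = np.random.default_rng(2)
        n_sensors, n_sources = 100, 1000
        L = rng.normal(size=(n_sensors, n_sources))      # hypothetical lead-field (forward) matrix

        # Simulate one focal source plus sensor noise.
        j_true = np.zeros(n_sources)
        j_true[417] = 1.0
        y = L @ j_true + 0.05 * rng.normal(size=n_sensors)

        # Minimum-norm estimate: j_hat = L^T (L L^T + lambda I)^{-1} y
        lam = 1e-1 * np.trace(L @ L.T) / n_sensors       # simple regularization heuristic
        j_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

        print("true source index:", 417, "| peak of |estimate|:", int(np.argmax(np.abs(j_hat))))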

  15. Evaluation of peripheral binocular visual field in patients with glaucoma: a pilot study.

    PubMed

    Ana, Banc; Cristina, Stan; Dorin, Chiselita

    2016-01-01

    The objective of this study was to evaluate the peripheral binocular visual field (PBVF) in patients with glaucoma using the threshold strategy of the Humphrey Field Analyzer. We conducted a case-control pilot study in which we enrolled 59 patients with glaucoma and 20 controls. All participants were evaluated using a custom PBVF test and central 24 degrees monocular visual field tests for each eye using the threshold strategy. The central binocular visual field (CBVF) was predicted from the monocular tests using the most sensitive point at each field location. The glaucoma patients were grouped according to Hodapp classification and age. The PBVF was compared to controls and the relationship between the PBVF and CBVF was tested. The areas of frame-induced artefacts were determined (over 50 degrees in each temporal field, 24 degrees superiorly and 45 degrees inferiorly) and excluded from interpretation. The patients showed a statistically significant generalized decrease in peripheral retinal sensitivity compared to controls for the initial Hodapp stage in the groups aged 50-59 (t = 11.93 > 2.06; p < 0.05) and 60-69 (t = 7.55 > 2.06; p < 0.05). For the initial Hodapp stage there was no significant relationship between PBVF and CBVF (r = 0.39). For the moderate and advanced Hodapp stages, the interpretation of data was done separately for each patient. This pilot study suggests that glaucoma patients show a decrease in PBVF compared to controls and that the CBVF cannot predict the PBVF in glaucoma.

  16. Structure-function relationships using spectral-domain optical coherence tomography: comparison with scanning laser polarimetry.

    PubMed

    Aptel, Florent; Sayous, Romain; Fortoul, Vincent; Beccat, Sylvain; Denis, Philippe

    2010-12-01

    To evaluate and compare the regional relationships between visual field sensitivity and retinal nerve fiber layer (RNFL) thickness as measured by spectral-domain optical coherence tomography (OCT) and scanning laser polarimetry. Prospective cross-sectional study. One hundred and twenty eyes of 120 patients (40 with healthy eyes, 40 with suspected glaucoma, and 40 with glaucoma) were tested on Cirrus-OCT, GDx VCC, and standard automated perimetry. Raw data on RNFL thickness were extracted for 256 peripapillary sectors of 1.40625 degrees each for the OCT measurement ellipse and 64 peripapillary sectors of 5.625 degrees each for the GDx VCC measurement ellipse. Correlations between peripapillary RNFL thickness in 6 sectors and visual field sensitivity in the 6 corresponding areas were evaluated using linear and logarithmic regression analysis. Receiver operating curve areas were calculated for each instrument. With spectral-domain OCT, the correlations (r(2)) between RNFL thickness and visual field sensitivity ranged from 0.082 (nasal RNFL and corresponding visual field area, linear regression) to 0.726 (supratemporal RNFL and corresponding visual field area, logarithmic regression). By comparison, with GDx-VCC, the correlations ranged from 0.062 (temporal RNFL and corresponding visual field area, linear regression) to 0.362 (supratemporal RNFL and corresponding visual field area, logarithmic regression). In pairwise comparisons, these structure-function correlations were generally stronger with spectral-domain OCT than with GDx VCC and with logarithmic regression than with linear regression. The largest areas under the receiver operating curve were seen for OCT superior thickness (0.963 ± 0.022; P < .001) in eyes with glaucoma and for OCT average thickness (0.888 ± 0.072; P < .001) in eyes with suspected glaucoma. The structure-function relationship was significantly stronger with spectral-domain OCT than with scanning laser polarimetry, and was better expressed logarithmically than linearly. Measurements with these 2 instruments should not be considered to be interchangeable. Copyright © 2010 Elsevier Inc. All rights reserved.

  17. Beyond time and space: The effect of a lateralized sustained attention task and brain stimulation on spatial and selective attention.

    PubMed

    Shalev, Nir; De Wandel, Linde; Dockree, Paul; Demeyere, Nele; Chechlacz, Magdalena

    2017-10-03

    The Theory of Visual Attention (TVA) provides a mathematical formalisation of the "biased competition" account of visual attention. Applying this model to individual performance in a free recall task allows the estimation of 5 independent attentional parameters: visual short-term memory (VSTM) capacity, speed of information processing, perceptual threshold of visual detection; attentional weights representing spatial distribution of attention (spatial bias), and the top-down selectivity index. While the TVA focuses on selection in space, complementary accounts of attention describe how attention is maintained over time, and how temporal processes interact with selection. A growing body of evidence indicates that different facets of attention interact and share common neural substrates. The aim of the current study was to modulate a spatial attentional bias via transfer effects, based on a mechanistic understanding of the interplay between spatial, selective and temporal aspects of attention. Specifically, we examined here: (i) whether a single administration of a lateralized sustained attention task could prime spatial orienting and lead to transferable changes in attentional weights (assigned to the left vs right hemi-field) and/or other attentional parameters assessed within the framework of TVA (Experiment 1); (ii) whether the effects of such spatial-priming on TVA parameters could be further enhanced by bi-parietal high frequency transcranial random noise stimulation (tRNS) (Experiment 2). Our results demonstrate that spatial attentional bias, as assessed within the TVA framework, was primed by sustaining attention towards the right hemi-field, but this spatial-priming effect did not occur when sustaining attention towards the left. Furthermore, we show that bi-parietal high-frequency tRNS combined with the rightward spatial-priming resulted in an increased attentional selectivity. To conclude, we present a novel, theory-driven method for attentional modulation providing important insights into how the spatial and temporal processes in attention interact with attentional selection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Visualizing Earth's Erupting Volcanoes and Wildfires: Seven Years of Data From the Earth Observing Mission

    NASA Astrophysics Data System (ADS)

    Wright, R.; Pilger, E.; Flynn, L. P.; Harris, A. J.

    2006-12-01

    Volcanic eruptions and wildfires are natural hazards that are truly global in their geographic scope, as well as being temporally very dynamic. As such, satellite remote sensing lends itself to their effective detection and monitoring. The results of such mapping can be communicated in the form of traditional static maps. However, most hazards have strong time-dependent forcing mechanisms (in the case of biomass burning, climate) and the dynamism of these geophysical phenomena requires a suitable method for their presentation. Here, we present visualizations of the amount of thermal energy radiated by all of Earth's sub-aerially erupting volcanoes, wildfires and industrial heat sources over a seven year period. These visualizations condense the results obtained from the near-real-time analysis of over 1.2 million MODIS (Moderate Resolution Imaging Spectro-radiometer) images, acquired from NASA's Terra and Aqua platforms. In the accompanying poster we will describe a) the raw data, b) how these data can be used to derive higher-order geophysical parameters, and c) how the visualization of these derived products adds scientific value to the raw data. The visualizations reveal spatio-temporal trends in fire radiated energy (and by proxy, biomass combustion rates and carbon emissions into the atmosphere), which are indiscernible in the static data set. Most notable are differences in biomass combustion between the North American and Eurasian Boreal forests. We also give examples relating to the development of lava flow-fields at Mount Etna (Italy) and Kilauea (USA), as well as variations in heat output from Iraqi oil fields, that span the onset of the 2003 Persian Gulf War. The raw data used to generate these visualizations are routinely made available via the Internet, as portable ASCII files. They can therefore be easily integrated with image datasets, by other researchers, to create their own visualizations.

  19. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.

  20. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  1. Video quality assessment method motivated by human visual perception

    NASA Astrophysics Data System (ADS)

    He, Meiling; Jiang, Gangyi; Yu, Mei; Song, Yang; Peng, Zongju; Shao, Feng

    2016-11-01

    Research on video quality assessment (VQA) plays a crucial role in improving the efficiency of video coding and the performance of video processing. It is well acknowledged that the motion energy model generates motion energy responses in the middle temporal area by simulating the receptive fields of V1 neurons underlying motion perception in the human visual system. Motivated by this biological evidence for visual motion perception, a VQA method is proposed in this paper, which comprises a motion perception quality index and a spatial quality index. Specifically, the motion energy model is applied to evaluate the temporal distortion severity of each frequency component generated from a difference-of-Gaussian filter bank, which produces the motion perception quality index, and a gradient similarity measure is used to evaluate the spatial distortion of the video sequence, which yields the spatial quality index. Experimental results on the LIVE, CSIQ, and IVP video databases demonstrate that a random forests regression model trained on the generated quality indices corresponds closely to human visual perception and offers significant improvements over comparable well-performing methods. The proposed method shows higher consistency with subjective perception and higher generalization capability.
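
    The sketch below shows one way a gradient-similarity spatial index can be computed between a reference and a distorted frame; the constant c and the pooling by a simple mean are illustrative choices, not the authors' exact formulation, and the frames are random stand-ins.

        # Illustrative gradient-similarity spatial index between a reference and a distorted frame.
        import numpy as np
        from scipy.ndimage import sobel

        def gradient_magnitude(img):
            gx = sobel(img.astype(float), axis=0)
            gy = sobel(img.astype(float), axis=1)
            return np.hypot(gx, gy)

        def gradient_similarity(ref, dist, c=170.0):
            """Mean of the per-pixel gradient-magnitude similarity map."""
            g_ref, g_dist = gradient_magnitude(ref), gradient_magnitude(dist)
            sim_map = (2 * g_ref * g_dist + c) / (g_ref**2 + g_dist**2 + c)
            return sim_map.mean()

        rng = np.random.default_rng(3)
        reference = rng.integers(0, 256, size=(64, 64)).astype(float)    # stand-in frame
        distorted = reference + rng.normal(0, 10, size=reference.shape)  # noisy version
        print("gradient similarity:", gradient_similarity(reference, distorted))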

  2. Habituation of medaka (Oryzias latipes) demonstrated by open-field testing.

    PubMed

    Matsunaga, Wataru; Watanabe, Eiji

    2010-10-01

    Habituation to novel environments is frequently studied to analyze cognitive phenotypes in animals, and an open-field test is generally conducted to investigate the changes that occur in animals during habituation. The test had not previously been used in behavioral studies of medaka (Oryzias latipes), a species that has recently come into use in behavioral research. Therefore, we examined the open-field behavior of medaka on the basis of temporal changes in 2 conventional indexes of locomotion and position. The findings of our study clearly showed that medaka changed its behavior through multiple temporal phases as it became more familiar with new surroundings; this finding is consistent with other ethological studies in animals. During repeated open-field testing on 2 consecutive days, we observed that horizontal locomotion on the second day was less than that on the first day, which suggested that habituation is retained in fish for days. This temporal habituation was critically affected by water factors or visual cues of the tank, thereby suggesting that fish have spatial memory of their surroundings. Thus, the data from this study will afford useful fundamental information for behavioral phenotyping of medaka and for elucidating cognitive phenotypes in animals. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  3. γ-oscillations modulated by picture naming and word reading: intracranial recording in epileptic patients.

    PubMed

    Wu, Helen C; Nagasawa, Tetsuro; Brown, Erik C; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi

    2011-10-01

    We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Both tasks commonly elicited gamma-augmentation (maximally at 80-100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. The present study increases our understanding of the visual-language pathways. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  4. Temporal resolution of orientation-defined texture segregation: a VEP study.

    PubMed

    Lachapelle, Julie; McKerral, Michelle; Jauffret, Colin; Bach, Michael

    2008-09-01

    Orientation is one of the visual dimensions that subserve figure-ground discrimination. A spatial gradient in orientation leads to "texture segregation", which is thought to reflect concurrent parallel processing across the visual field, without scanning. In the visual-evoked potential (VEP), a component related to texture segregation ("tsVEP") can be isolated. Our objective was to evaluate the temporal frequency dependence of the tsVEP in order to compare the processing speed of low-level features (e.g., orientation, using the VEP, here denoted llVEP) with that of texture segregation, because of a recent controversy in the literature in that regard. Visual-evoked potentials (VEPs) were recorded in seven normal adults. Oriented line segments of 0.1 degrees x 0.8 degrees at 100% contrast were presented in four different arrangements: either oriented in parallel for two homogeneous stimuli (from which the low-level VEP (llVEP) was obtained) or with a 90 degrees orientation gradient for two textured ones (from which the tsVEP was obtained). The orientation texture condition was presented at eight different temporal frequencies ranging from 7.5 to 45 Hz. Fourier analysis was used to isolate low-level components at the pattern-change frequency and texture-segregation components at half that frequency. For all subjects, the high-cutoff frequency was lower for the tsVEP than for the llVEP, on average 12 Hz vs. 17 Hz (P = 0.017). The results suggest that the processing of feature gradients to extract texture segregation requires additional processing time, resulting in a lower fusion frequency.
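
    The Fourier step described above amounts to reading off the amplitude spectrum at the pattern-change frequency F (low-level response) and at F/2 (texture-segregation response). The sketch below does this for a synthetic steady-state signal; the frequencies and amplitudes are illustrative, not the recorded data.

        # Isolating spectral components at the pattern-change frequency F and at F/2
        # from a synthetic steady-state response.
        import numpy as np

        fs, dur = 1000.0, 4.0                      # sampling rate (Hz) and epoch length (s)
        t = np.arange(int(fs * dur)) / fs
        F = 12.0                                   # pattern-change frequency (Hz)

        # Synthetic signal: component at F (low-level) plus a smaller component at F/2 plus noise.
        eeg = 2.0 * np.sin(2 * np.pi * F * t) + 0.8 * np.sin(2 * np.pi * (F / 2) * t)
        eeg += np.random.default_rng(4).normal(0, 1.0, size=t.size)

        spectrum = np.fft.rfft(eeg) / t.size * 2.0           # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(t.size, d=1 / fs)

        def amplitude_at(f):
            return np.abs(spectrum[np.argmin(np.abs(freqs - f))])

        print(f"llVEP-like amplitude at {F:.1f} Hz:  {amplitude_at(F):.2f}")
        print(f"tsVEP-like amplitude at {F/2:.1f} Hz: {amplitude_at(F/2):.2f}")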

  5. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap.

    PubMed

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap.

  6. Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap

    PubMed Central

    Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin'ya

    2013-01-01

    It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549

  7. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Spatiotemporal analysis of brightness induction

    PubMed Central

    McCourt, Mark E.

    2011-01-01

    Brightness induction refers to a class of visual illusions in which the perceived intensity of a region of space is influenced by the luminance of surrounding regions. These illusions are significant because they provide insight into the neural organization of the visual system. A novel quadrature-phase motion cancelation technique was developed to measure the magnitude of the grating induction brightness illusion across a wide range of spatial frequencies, temporal frequencies and test field heights. Canceling contrast is greatest at low frequencies and declines with increasing frequency in both dimensions, and with increasing test field height. Canceling contrast scales as the product of inducing grating spatial frequency and test field height (the number of inducing grating cycles per test field height). When plotted using a spatial axis which indexes this product, the spatiotemporal induction surfaces for four test field heights can be described as four partially overlapping sections of a single larger surface. These properties of brightness induction are explained in the context of multiscale spatial filtering. The present study is the first to measure the magnitude of grating induction as a function of temporal frequency. Taken in conjunction with several other studies (Blakeslee & McCourt, 2008; Robinson & de Sa, 2008; Magnussen & Glad, 1975) the results of this study illustrate that at least one form of brightness induction is very much faster than that reported by DeValois et al. (1986) and Rossi and Paradiso (1996), and are inconsistent with the proposition that brightness induction results from a slow “filling in” process. PMID:21763339

  9. A neurocomputational model of figure-ground discrimination and target tracking.

    PubMed

    Sun, H; Liu, L; Guo, A

    1999-01-01

    A neurocomputational model is presented for figure-ground discrimination and target tracking. The model involves correlation-type elementary motion detectors, computational modules for saccadic and smooth-pursuit eye movements, an oscillatory neural-network motion perception module and a selective attention module. It is shown that, through oscillatory amplitude and frequency encoding and the selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both the temporal and spatial domains.
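
    A correlation-type elementary motion detector of the kind referred to above can be sketched as a Reichardt correlator: each of two neighbouring inputs is multiplied by a delayed copy of the other, and the two half-detectors are subtracted. The stimulus and parameters below are illustrative, not those of the paper's model.

        # Minimal correlation-type (Reichardt-style) elementary motion detector.
        import numpy as np

        fs, dur = 200.0, 2.0
        t = np.arange(int(fs * dur)) / fs
        delay = int(0.05 * fs)                    # 50 ms delay line

        phase_shift = np.pi / 4                   # B lags A, i.e. motion from A toward B
        a = 0.5 * (1 + np.sin(2 * np.pi * 2.0 * t))
        b = 0.5 * (1 + np.sin(2 * np.pi * 2.0 * t - phase_shift))

        def delayed(x, d):
            out = np.zeros_like(x)
            out[d:] = x[:-d]
            return out

        # Opponent output: positive for the preferred direction, negative for the opposite one.
        response = delayed(a, delay) * b - delayed(b, delay) * a
        print("mean detector output (A->B motion):", response.mean())

        response_rev = delayed(b, delay) * a - delayed(a, delay) * b
        print("sign flips when the roles reverse: ", response_rev.mean())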

  10. The role of temporal structure in human vision.

    PubMed

    Blake, Randolph; Lee, Sang-Hun

    2005-03-01

    Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric of temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on the temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of the neurophysiological implications of these results.

  11. Imaging of transient surface acoustic waves by full-field photorefractive interferometry.

    PubMed

    Xiong, Jichuan; Xu, Xiaodong; Glorieux, Christ; Matsuda, Osamu; Cheng, Liping

    2015-05-01

    A stroboscopic full-field imaging technique based on photorefractive interferometry for the visualization of rapidly changing surface displacement fields using a standard charge-coupled device (CCD) camera is presented. The photorefractive buildup of the space charge field during and after probe laser pulses is simulated numerically. The resulting anisotropic diffraction upon the refractive index grating and the interference between the polarization-rotated diffracted reference beam and the transmitted signal beam are modeled theoretically. The method is experimentally demonstrated by full-field imaging of the propagation of photoacoustically generated surface acoustic waves (SAWs) with a temporal resolution of nanoseconds. The surface acoustic wave propagation in a 23 mm × 17 mm area on an aluminum plate was visualized with 520 × 696 pixels of the CCD sensor, yielding a spatial resolution of 33 μm. The short pulse duration (8 ns) of the probe laser makes it possible to image SAWs with frequencies up to 60 MHz.

  12. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated the auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following VSC were faster and more accurate than those following VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in VSC condition was larger than that in VTC condition, and the mean amplitude of late positivity (300-420 ms) in VTC condition was larger than that in VSC condition. These findings suggest that modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention were different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Emotion Separation Is Completed Early and It Depends on Visual Field Presentation

    PubMed Central

    Liu, Lichan; Ioannides, Andreas A.

    2010-01-01

    It is now apparent that the visual system reacts to stimuli very fast, with many brain areas activated within 100 ms. It is, however, unclear how much detail is extracted about stimulus properties in the early stages of visual processing. Here, using magnetoencephalography we show that the visual system separates different facial expressions of emotion well within 100 ms after image onset, and that this separation is processed differently depending on where in the visual field the stimulus is presented. Seven right-handed males participated in a face affect recognition experiment in which they viewed happy, fearful and neutral faces. Blocks of images were shown either at the center or in one of the four quadrants of the visual field. For centrally presented faces, the emotions were separated fast, first in the right superior temporal sulcus (STS; 35–48 ms), followed by the right amygdala (57–64 ms) and medial pre-frontal cortex (83–96 ms). For faces presented in the periphery, the emotions were separated first in the ipsilateral amygdala and contralateral STS. We conclude that amygdala and STS likely play a different role in early visual processing, recruiting distinct neural networks for action: the amygdala alerts sub-cortical centers for appropriate autonomic system response for fight or flight decisions, while the STS facilitates more cognitive appraisal of situations and links appropriate cortical sites together. It is then likely that different problems may arise when either network fails to initiate or function properly. PMID:20339549

  14. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
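
    The "optimal combination" account mentioned at the end of the record is often formalized as reliability-weighted (maximum-likelihood) cue integration. The sketch below illustrates that generic idea only; it is not the model fitted in the study, and all numbers are hypothetical.

        def mle_combine(est_a, var_a, est_v, var_v):
            """Reliability-weighted fusion of an auditory and a visual estimate of
            the same quantity; the less variable cue receives the larger weight."""
            w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
            fused = w_a * est_a + (1.0 - w_a) * est_v
            fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)  # never larger than either input
            return fused, fused_var

        # Hypothetical example: degrading the audio (larger variance) shifts weight toward
        # vision, one way to capture the larger visual benefit for vocoded speech.
        print(mle_combine(est_a=0.4, var_a=4.0, est_v=0.7, var_v=1.0))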

  15. Toward the influence of temporal attention on the selection of targets in a visual search task: An ERP study.

    PubMed

    Rolke, Bettina; Festl, Freya; Seibold, Verena C

    2016-11-01

    We used ERPs to investigate whether temporal attention interacts with spatial attention and feature-based attention to enhance visual processing. We presented a visual search display containing one singleton stimulus among a set of homogenous distractors. Participants were asked to respond only to target singletons of a particular color and shape that were presented in an attended spatial position. We manipulated temporal attention by presenting a warning signal before each search display and varying the foreperiod (FP) between the warning signal and the search display in a blocked manner. We observed distinctive ERP effects of both spatial and temporal attention. The amplitudes for the N2pc, SPCN, and P3 were enhanced by spatial attention indicating a processing benefit of relevant stimulus features at the attended side. Temporal attention accelerated stimulus processing; this was indexed by an earlier onset of the N2pc component and a reduction in reaction times to targets. Most importantly, temporal attention did not interact with spatial attention or stimulus features to influence visual processing. Taken together, the results suggest that temporal attention fosters visual perceptual processing in a visual search task independently from spatial attention and feature-based attention; this provides support for the nonspecific enhancement hypothesis of temporal attention. © 2016 Society for Psychophysiological Research.

  16. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  17. The Retinotopic Organization of Macaque Occipitotemporal Cortex Anterior to V4 and Caudoventral to the Middle Temporal (MT) Cluster

    PubMed Central

    Janssens, Thomas; Orban, Guy A.

    2014-01-01

    The retinotopic organization of macaque occipitotemporal cortex rostral to area V4 and caudoventral to the recently described middle temporal (MT) cluster of the monkey (Kolster et al., 2009) is not well established. The proposed number of areas within this region varies from one to four, underscoring the ambiguity concerning the functional organization in this region of extrastriate cortex. We used phase-encoded retinotopic functional MRI mapping methods to reveal the functional topography of this cortical domain. Polar-angle maps showed one complete hemifield representation bordering area V4 anteriorly, split into dorsal and ventral counterparts corresponding to the lower and upper visual field quadrants, respectively. The location of this hemifield representation corresponds to area V4A. More rostroventrally, we identified three other complete hemifield representations. Two of these correspond to the dorsal and the ventral posterior inferotemporal areas (PITd and PITv, respectively) as identified in the Felleman and Van Essen (1991) scheme. The third representation has been tentatively named dorsal occipitotemporal area (OTd). Areas V4A, PITd, PITv, and OTd share a central visual field representation, similar to the areas constituting the MT cluster. Furthermore, they vary widely in size and represent the complete contralateral visual field. Functionally, these four areas show little motion sensitivity, unlike those of the MT cluster, and two of them, OTd and PITd, displayed pronounced two-dimensional shape sensitivity. In general, these results suggest that retinotopically organized tissue extends farther into rostral occipitotemporal cortex of the monkey than generally assumed. PMID:25080580

  18. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes reach within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system, including area LIP, is modeled by graph-based visual saliency and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location, subtending approximately 9° and corresponding to the receptive fields of IT neurons, is then passed through a four-layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances, allowing the model to achieve approximately 90% correct object recognition for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619
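
    The "synaptic modification rule with a short-term memory trace" referred to above is, in the VisNet literature, a Hebbian rule whose postsynaptic term is an exponentially decaying trace of recent activity. A minimal sketch of such a rule follows (our own simplification; the parameter values are illustrative and not taken from the paper):

        import numpy as np

        def trace_rule_update(w, x, y, y_trace, eta=0.5, alpha=0.01):
            """One presentation step of a trace-based Hebbian rule.
            w: (n_out, n_in) weights; x: presynaptic rates for the current view;
            y: postsynaptic rates; y_trace: running trace of recent postsynaptic activity.
            The trace links successive views/translations of the same object, so that
            invariant representations can emerge."""
            y_trace = (1.0 - eta) * y + eta * y_trace                     # short-term memory trace
            w = w + alpha * np.outer(y_trace, x)                          # Hebbian update using the trace
            w = w / (np.linalg.norm(w, axis=1, keepdims=True) + 1e-12)    # keep weight vectors bounded
            return w, y_trace

    In practice the trace would be reset between objects, so that only successive views or translations of the same object are bound together.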

  19. Using resampling to assess reliability of audio-visual survey strategies for marbled murrelets at inland forest sites

    USGS Publications Warehouse

    Jodice, Patrick G.R.; Garman, S.L.; Collopy, Michael W.

    2001-01-01

    Marbled Murrelets (Brachyramphus marmoratus) are threatened seabirds that nest in coastal old-growth coniferous forests throughout much of their breeding range. Currently, observer-based audio-visual surveys are conducted at inland forest sites during the breeding season primarily to determine nesting distribution and breeding status and are being used to estimate temporal or spatial trends in murrelet detections. Our goal was to assess the feasibility of using audio-visual survey data for such monitoring. We used an intensive field-based survey effort to record daily murrelet detections at seven survey stations in the Oregon Coast Range. We then used computer-aided resampling techniques to assess the effectiveness of twelve survey strategies with varying scheduling and a sampling intensity of 4-14 surveys per breeding season to estimate known means and SDs of murrelet detections. Most survey strategies we tested failed to provide estimates of detection means and SDs that were within ±20% of actual means and SDs. Estimates of daily detections were, however, frequently estimated to within ±50% of field data with sampling efforts of 14 days/breeding season. Additional resampling analyses with statistically generated detection data indicated that the temporal variability in detection data had a great effect on the reliability of the mean and SD estimates calculated from the twelve survey strategies, while the value of the mean had little effect. Effectiveness at estimating multi-year trends in detection data was similarly poor, indicating that audio-visual surveys might be reliably used to estimate annual declines in murrelet detections of the order of 50% per year.
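
    The resampling logic can be illustrated with a small sketch: repeatedly draw n survey days from a season of daily counts and ask how often the subsample mean falls within ±20% of the full-season mean. The daily counts below are simulated; only the general procedure follows the record.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical daily murrelet detection counts for one station, one season
        daily_detections = rng.poisson(lam=6.0, size=90)
        true_mean = daily_detections.mean()

        def within_tolerance(n_surveys, tol=0.20, n_boot=5000):
            """Fraction of resampled survey schedules whose mean detection rate
            falls within +/- tol of the full-season ('known') mean."""
            hits = 0
            for _ in range(n_boot):
                sample = rng.choice(daily_detections, size=n_surveys, replace=False)
                if abs(sample.mean() - true_mean) <= tol * true_mean:
                    hits += 1
            return hits / n_boot

        for n in (4, 8, 14):
            print(n, "surveys/season ->", within_tolerance(n))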

  20. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    PubMed

    Matin, L; Li, W

    1995-01-01

    The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while viewing a fully illuminated complexly structured visual field and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees top-backward or 20 degrees top-forward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. The linear model previously developed to account for VPEL, which was based on the interaction of influences from the pitched visual field and extraretinal influences from the body-referenced mechanism, was employed to incorporate the effects of adaptation. Connections between VPEL adaptation and other cases of perceptual adaptation of visual direction are described.
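
    The decay of the VPEL deviation in darkness can be written as a simple exponential with the roughly 4-min time constant quoted above; the sketch below only restates that time course (the initial deviation is an illustrative value).

        import math

        def vpel_deviation(t_min, initial_dev_deg, tau_min=4.0):
            """VPEL deviation from the dark baseline t_min minutes after the pitched
            field is extinguished, assuming exponential decay with time constant tau_min."""
            return initial_dev_deg * math.exp(-t_min / tau_min)

        # Illustrative: a 12-degree deviation falls to ~1.6 degrees after 8 min in darkness
        print(vpel_deviation(8.0, initial_dev_deg=12.0))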

  1. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    PubMed

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  2. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Visual Dysfunction in Posterior Cortical Atrophy

    PubMed Central

    Maia da Silva, Mari N.; Millington, Rebecca S.; Bridge, Holly; James-Galton, Merle; Plant, Gordon T.

    2017-01-01

    Posterior cortical atrophy (PCA) is a syndromic diagnosis. It is characterized by progressive impairment of higher (cortical) visual function with imaging evidence of degeneration affecting the occipital, parietal, and posterior temporal lobes bilaterally. Most cases will prove to have Alzheimer pathology. The aim of this review is to summarize the development of the concept of this disorder since it was first introduced. A critical discussion of the evolving diagnostic criteria is presented and the differential diagnosis with regard to the underlying pathology is reviewed. Emphasis is given to the visual dysfunction that defines the disorder, and the classical deficits, such as simultanagnosia and visual agnosia, as well as the more recently recognized visual field defects, are reviewed, along with the evidence on their neural correlates. The latest developments on the imaging of PCA are summarized, with special attention to its role on the differential diagnosis with related conditions. PMID:28861031

  4. Serial dependence promotes object stability during occlusion

    PubMed Central

    Liberman, Alina; Zhang, Kathy; Whitney, David

    2016-01-01

    Object identities somehow appear stable and continuous over time despite eye movements, disruptions in visibility, and constantly changing visual input. Recent results have demonstrated that the perception of orientation, numerosity, and facial identity is systematically biased (i.e., pulled) toward visual input from the recent past. The spatial region over which current orientations or face identities are pulled by previous orientations or identities, respectively, is known as the continuity field, which is temporally tuned over the past several seconds (Fischer & Whitney, 2014). This perceptual pull could contribute to the visual stability of objects over short time periods, but does it also address how perceptual stability occurs during visual discontinuities? Here, we tested whether the continuity field helps maintain perceived object identity during occlusion. Specifically, we found that the perception of an oriented Gabor that emerged from behind an occluder was significantly pulled toward the random (and unrelated) orientation of the Gabor that was seen entering the occluder. Importantly, this serial dependence was stronger for predictable, continuously moving trajectories, compared to unpredictable ones or static displacements. This result suggests that our visual system takes advantage of expectations about a stable world, helping to maintain perceived object continuity despite interrupted visibility. PMID:28006066

  5. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    NASA Astrophysics Data System (ADS)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking (crowdsourcing) has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real-time with and was guided by researchers in the remote visual analytics laboratory, who swiftly sifted through incoming crowdsourced data to identify target locations considered viable archaeological sites.

  6. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887
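
    One common way to impose the "temporal constraint" described above is to give higher-level units slower dynamics than lower-level units (a multiple-timescale leaky-integrator update), so that a temporal hierarchy can self-organize. The sketch below shows that mechanism in isolation, with made-up sizes and random weights; it is not the authors' full network.

        import numpy as np

        def leaky_update(u, x_in, W_in, W_rec, tau):
            """One step of a leaky-integrator (continuous-time RNN style) layer.
            A large tau makes the layer change slowly, which is the temporal constraint
            used to build a temporal hierarchy; spatial constraints would additionally
            restrict W_in/W_rec to local connections."""
            drive = W_in @ x_in + W_rec @ np.tanh(u)
            return u + (1.0 / tau) * (-u + drive)

        rng = np.random.default_rng(1)
        n_fast, n_slow, n_in = 20, 10, 8
        u_fast, u_slow = np.zeros(n_fast), np.zeros(n_slow)
        Wf_in, Wf_rec = rng.normal(size=(n_fast, n_in)), rng.normal(size=(n_fast, n_fast)) * 0.1
        Ws_in, Ws_rec = rng.normal(size=(n_slow, n_fast)), rng.normal(size=(n_slow, n_slow)) * 0.1

        for frame in rng.normal(size=(100, n_in)):        # a dummy image-feature sequence
            u_fast = leaky_update(u_fast, frame, Wf_in, Wf_rec, tau=2.0)                    # fast, low level
            u_slow = leaky_update(u_slow, np.tanh(u_fast), Ws_in, Ws_rec, tau=20.0)         # slow, high level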

  8. Talkin' 'Bout Meta-Generation: ACT UP History and Queer Futurity

    ERIC Educational Resources Information Center

    Emmer, Pascal

    2012-01-01

    The transmission of ACT UP's movement histories is indispensable to the potential for what Jose Esteban Munoz calls "queer futurity," or "a temporal arrangement in which the past is a field of possibility in which subjects can act in the present in the service of a new futurity." Roger Hallas argues that ACT UP's material and visual archive alone…

  9. The Complex Structure of Receptive Fields in the Middle Temporal Area

    PubMed Central

    Richert, Micah; Albright, Thomas D.; Krekelberg, Bart

    2012-01-01

    Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity. PMID:23508640
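
    The reverse-correlation logic can be sketched as follows: present a stream of random (location, direction) motion pulses and, for each receptive-field location, histogram the directions that accompanied spikes. The simulated cell below stands in for the recorded neurons; this is not the authors' analysis code.

        import numpy as np

        rng = np.random.default_rng(2)
        n_loc, n_dir, n_frames = 9, 8, 20000           # 3x3 grid of probe locations, 8 directions

        # Random sparse motion stimulus: one (location, direction) pulse per frame
        locs = rng.integers(n_loc, size=n_frames)
        dirs = rng.integers(n_dir, size=n_frames)

        # Simulated MT-like cell whose preferred direction differs across locations
        pref = rng.integers(n_dir, size=n_loc)
        spikes = rng.random(n_frames) < (0.05 + 0.4 * (dirs == pref[locs]))

        # Reverse correlation: spike-triggered (location, direction) histogram
        sta = np.zeros((n_loc, n_dir))
        np.add.at(sta, (locs[spikes], dirs[spikes]), 1.0)
        sta /= sta.sum(axis=1, keepdims=True)          # direction tuning at each location
        print(np.argmax(sta, axis=1))                  # recovered preferred direction per location
        print(pref)                                    # ground truth for the simulated cell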

  10. Understanding human visual systems and its impact on our intelligent instruments

    NASA Astrophysics Data System (ADS)

    Strojnik Scholl, Marija; Páez, Gonzalo; Scholl, Michelle K.

    2013-09-01

    We review the evolution of machine vision and comment on the cross-fertilization from the neural sciences onto flourishing fields of neural processing, parallel processing, and associative memory in optical sciences and computing. Then we examine how the intensive efforts in mapping the human brain have been influenced by concepts in computer sciences, control theory, and electronic circuits. We discuss two neural pathways that use visual input for navigation and object recognition: the ventral temporal pathway for object recognition (what?) and the dorsal parietal pathway for navigation (where?), respectively. We describe the reflexive and conscious decision centers in cerebral cortex involved with visual attention and gaze control. Interestingly, these require a return path through the midbrain for ocular muscle control. We find that cognitive psychologists currently study the human brain using low-spatial-resolution fMRI with a temporal response on the order of a second. In recent years, life scientists have concentrated on insect brains to study neural processes. We discuss how reflexive and conscious gaze-control decisions are made in the frontal eye field and inferior parietal lobe, constituting the fronto-parietal attention network. We note that ethical and experiential learning impacts our conscious decisions.

  11. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    PubMed

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  12. Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View

    NASA Astrophysics Data System (ADS)

    Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.

    2017-09-01

    Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations within time and space, as well as intuitively interpret what these relations mean, also presents challenges. Since analysts do not know where or when these relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualize the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented, and the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts to derive and explain the spatio-temporal relations of movement events from taxi trajectory data.

  13. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense.

    PubMed

    Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor

    2017-05-24

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
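
    The time-resolved topographical comparison can be sketched schematically: at each time point, correlate the AV and VA scalp maps across electrodes and compare the resulting similarity profile against the two model predictions. The snippet below uses random placeholder data and a plain Pearson correlation rather than the full cross-correlation-matrix construction reported above.

        import numpy as np

        rng = np.random.default_rng(3)
        n_elec, n_time = 64, 250                       # 64 channels, 500 ms at 500 Hz (illustrative)

        # Placeholder multisensory ERP topographies (channels x time) for AV- and VA-leading pairs
        av_maps = rng.normal(size=(n_elec, n_time))
        va_maps = rng.normal(size=(n_elec, n_time))

        def map_similarity(a, b):
            """Spatial (across-electrode) Pearson correlation between two scalp maps."""
            a, b = a - a.mean(), b - b.mean()
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Time-resolved similarity between the AV and VA topographies
        similarity = np.array([map_similarity(av_maps[:, t], va_maps[:, t]) for t in range(n_time)])

        # 'AV = VA' predicts similarity near 1 at every time point; 'AV != VA' predicts lower values.
        # With real data one would test the observed profile against both models statistically.
        print(similarity.mean(), similarity.min(), similarity.max())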

  14. Pacing Visual Attention: Temporal Structure Effects

    DTIC Science & Technology

    1993-06-01

    Fragmentary report record (dissertation, Jun 1989 - Jun 1993); only portions of the text were extracted. The recoverable excerpt indicates that persisting temporal relationships may be an important factor in the external (exogenous) control of visual attention.

  15. A processing work-flow for measuring erythrocytes velocity in extended vascular networks from wide field high-resolution optical imaging data.

    PubMed

    Deneux, Thomas; Takerkart, Sylvain; Grinvald, Amiram; Masson, Guillaume S; Vanzetta, Ivo

    2012-02-01

    Comprehensive information on the spatio-temporal dynamics of the vascular response is needed to underpin the signals used in hemodynamics-based functional imaging. It has recently been shown that red blood cell (RBC) velocity and its changes can be extracted from wide-field optical imaging recordings of intrinsic absorption changes in cortex. Here, we describe a complete processing work-flow for reliable RBC velocity estimation in cortical networks. Several pre-processing steps are implemented: image co-registration, necessary to correct for small movements of the vasculature; semi-automatic image segmentation for fast and reproducible vessel selection; reconstruction of RBC trajectory patterns for each micro-vessel; and spatio-temporal filtering to enhance the desired data characteristics. The main analysis step is composed of two robust algorithms for estimating the RBC velocity field. Vessel diameter and its changes are also estimated, as well as local changes in backscattered light intensity. This full processing chain is implemented with a software suite that is freely distributed. The software uses efficient data management for handling the very large data sets obtained with in vivo optical imaging. It offers a complete and user-friendly graphical user interface with visualization tools for displaying and exploring data and results. A full data simulation framework is also provided in order to optimize the performance of the algorithm with respect to several characteristics of the data. We illustrate the performance of our method in three different cases of in vivo data. We first document the massive RBC speed response evoked by a spreading depression in anesthetized rat somato-sensory cortex. Second, we show the velocity response elicited by a visual stimulation in anesthetized cat visual cortex. Finally, we report, for the first time, visually-evoked RBC speed responses in an extended vascular network in awake monkey extrastriate cortex. Copyright © 2011 Elsevier Inc. All rights reserved.
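
    The core velocity-estimation step can be illustrated with a toy version of one standard approach: track the displacement of the RBC-induced intensity pattern along a vessel centreline between two frames by cross-correlation and convert the pixel shift into a speed. This is a generic sketch with invented pixel size and frame interval, not either of the two algorithms in the published work-flow.

        import numpy as np

        def rbc_speed(profile_t0, profile_t1, dt_s, pixel_um, max_shift=30):
            """Estimate RBC speed from two intensity profiles sampled along the same
            vessel centreline in consecutive frames, via the peak of their
            cross-correlation (sub-pixel accuracy is ignored in this sketch)."""
            a = profile_t0 - profile_t0.mean()
            b = profile_t1 - profile_t1.mean()
            shifts = np.arange(-max_shift, max_shift + 1)
            corr = [np.dot(a[max(0, -s):len(a) - max(0, s)],
                           b[max(0, s):len(b) - max(0, -s)]) for s in shifts]
            best_shift = shifts[int(np.argmax(corr))]          # pixels moved between frames
            return best_shift * pixel_um * 1e-3 / dt_s         # mm/s

        # Synthetic demo: a pattern that moves 7 pixels between frames
        rng = np.random.default_rng(4)
        base = rng.normal(size=400)
        print(rbc_speed(base, np.roll(base, 7), dt_s=0.01, pixel_um=2.0))   # ~1.4 mm/s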

  16. An exploratory study of temporal integration in the peripheral retina of myopes

    NASA Astrophysics Data System (ADS)

    Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.

    2017-08-01

    The visual system takes time to respond to visual stimuli: neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.

  17. Postnatal Development of Intrinsic Horizontal Axons in Macaque Inferior Temporal and Primary Visual Cortices.

    PubMed

    Wang, Quanxin; Tanigawa, Hisashi; Fujita, Ichiro

    2017-04-01

    Two distinct areas along the ventral visual stream of monkeys, the primary visual (V1) and inferior temporal (TE) cortices, exhibit different projection patterns of intrinsic horizontal axons with patchy terminal fields in adult animals. The differences between the patches in these 2 areas may reflect differences in cortical representation and processing of visual information. We studied the postnatal development of patches by injecting an anterograde tracer into TE and V1 in monkeys of various ages. At 1 week of age, labeled patches with distribution patterns reminiscent of those in adults were already present in both areas. The labeling intensity of patches decayed exponentially with projection distance in monkeys of all ages in both areas, but this trend was far less evident in TE. The number and extent of patches gradually decreased with age in V1, but not in TE. In V1, axonal and bouton densities increased postnatally only in patches with short projection distances, whereas in TE this density change occurred in patches with various projection distances. Thus, patches with area-specific distribution patterns are formed early in life, and area-specific postnatal developmental processes shape the connectivity of patches into adulthood. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. The effects of luminance contribution from large fields to chromatic visual evoked potentials.

    PubMed

    Skiba, Rafal M; Duncan, Chad S; Crognale, Michael A

    2014-02-01

    Though useful from a clinical and practical standpoint, uniform large-field chromatic stimuli are likely to contain luminance contributions from retinal inhomogeneities. Such contributions can significantly influence psychophysical thresholds. However, the degree to which small luminance artifacts influence the chromatic VEP has been debated. In particular, claims have been made that band-pass tuning observed in chromatic VEPs results from luminance intrusion. However, there has been no direct evidence presented to support these claims. Recently, large-field isoluminant stimuli have been developed to control for intrusion from retinal inhomogeneities with particular regard to the influence of macular pigment. We report here the application of an improved version of these full-field stimuli to directly test the influence of luminance intrusion on the temporal tuning of the chromatic VEP. Our results show that band-pass tuning persists even when isoluminance is achieved throughout the extent of the stimulus. In addition, small amounts of luminance intrusion affect neither the shape of the temporal tuning function nor the major components of the VEP. These results support the conclusion that the chromatic VEP can depart substantially from threshold psychophysics with regard to temporal tuning and that obtaining a low-pass function is not requisite evidence of selective chromatic activation in the VEP. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Basic quantitative assessment of visual performance in patients with very low vision.

    PubMed

    Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert

    2010-02-01

    A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
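
    As a schematic of how one forced-choice module (here, localization) might be administered and scored, a minimal four-alternative trial loop follows; stimulus presentation and keypad input are stubbed out, and nothing here reproduces the actual BaLM program.

        import random

        DIRECTIONS = ["up", "down", "left", "right"]   # wedge directions in the localization module

        def run_localization_block(n_trials=20, get_response=None):
            """Self-paced 4-alternative forced-choice block. 'get_response' stands in
            for the stimulus presentation and keypad input, which are stubbed here."""
            correct = 0
            for _ in range(n_trials):
                target = random.choice(DIRECTIONS)     # direction in which the wedge is shown
                response = get_response(target)        # in the real test, independent of 'target'
                correct += (response == target)
            # Chance performance is 25%; a binomial test against chance would decide
            # whether the patient scores above the light-perception level on this module.
            return correct / n_trials

        # Demo with a simulated observer who answers correctly on about 60% of trials
        sim = lambda t: t if random.random() < 0.6 else random.choice(DIRECTIONS)
        print(run_localization_block(get_response=sim))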

  20. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. RICA: a reliable and image configurable arena for cyborg bumblebee based on CAN bus.

    PubMed

    Gong, Fan; Zheng, Nenggan; Xue, Lei; Xu, Kedi; Zheng, Xiaoxiang

    2014-01-01

    In this paper, we designed a reliable and image-configurable flight arena, RICA, for developing cyborg bumblebees. To meet the spatial and temporal requirements of bumblebees, the Controller Area Network (CAN) bus is adopted to interconnect the LED display modules and ensure the reliability and real-time performance of the arena system. Easily configurable interfaces, implemented as Python scripts on a desktop computer, are provided to transmit visual patterns to the LED distributor online and to configure RICA dynamically. The new arena system will be a powerful tool for investigating the quantitative relationship between visual inputs and induced flight behaviors and will also be helpful for visual-motor research in other related fields.
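
    As an illustration of how a desktop Python script might push pattern data to CAN-connected LED modules (using the python-can package; the arbitration-ID scheme, payload layout, and channel name are hypothetical, since the record does not specify RICA's protocol):

        import can   # python-can; assumes a SocketCAN interface named 'can0' is available

        def send_pattern_row(bus, module_id, row_pixels):
            """Send one 8-pixel row of a visual pattern to one LED display module.
            The arbitration-ID scheme and payload layout are purely illustrative."""
            msg = can.Message(arbitration_id=0x100 + module_id,
                              data=bytes(row_pixels[:8]),   # up to 8 data bytes per classic CAN frame
                              is_extended_id=False)
            bus.send(msg)

        if __name__ == "__main__":
            bus = can.interface.Bus(channel="can0", bustype="socketcan")
            # Hypothetical vertical-stripe pattern sent to module 3
            send_pattern_row(bus, module_id=3, row_pixels=[255, 0, 255, 0, 255, 0, 255, 0])
            bus.shutdown()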

  2. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  3. The projective field of a retinal amacrine cell

    PubMed Central

    de Vries, Saskia E. J.; Baccus, Stephen A.; Meister, Markus

    2011-01-01

    In sensory systems, neurons are generally characterized by their receptive field, namely the sensitivity to activity patterns at the circuit's input. To assess the neuron's role in the system, one must also know its projective field, namely the spatio-temporal effects the neuron exerts on all the circuit's outputs. We studied both the receptive and projective fields of an amacrine interneuron in the salamander retina. This amacrine type has a sustained OFF response with a small receptive field, but its output projects over a much larger region. Unlike other amacrines, this type is remarkably promiscuous and affects nearly every ganglion cell within reach of its dendrites. Its activity modulates the sensitivity of visual responses in ganglion cells, while leaving their kinetics unchanged. The projective field displays a center-surround structure: Depolarizing a single amacrine suppresses the visual sensitivity of ganglion cells nearby, and enhances it at greater distances. This change in sign is seen even within the receptive field of one ganglion cell; thus the modulation occurs presynaptically on bipolar cell terminals, most likely via GABAB receptors. Such an antagonistic projective field could contribute to the retina's mechanisms for predictive coding. PMID:21653863

  4. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  5. A neural model of motion processing and visual navigation by cortical area MST.

    PubMed

    Grossberg, S; Mingolla, E; Pack, C

    1999-12-01

    Cells in the dorsal medial superior temporal cortex (MSTd) process optic flow generated by self-motion during visually guided navigation. A neural model shows how interactions between well-known neural mechanisms (log polar cortical magnification, Gaussian motion-sensitive receptive fields, spatial pooling of motion-sensitive signals and subtractive extraretinal eye movement signals) lead to emergent properties that quantitatively simulate neurophysiological data about MSTd cell properties and psychophysical data about human navigation. Model cells match MSTd neuron responses to optic flow stimuli placed in different parts of the visual field, including position invariance, tuning curves, preferred spiral directions, direction reversals, average response curves and preferred locations for stimulus motion centers. The model shows how the preferred motion direction of the most active MSTd cells can explain human judgments of self-motion direction (heading), without using complex heading templates. The model explains when extraretinal eye movement signals are needed for accurate heading perception, and when retinal input is sufficient, and how heading judgments depend on scene layouts and rotation rates.
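
    For comparison, heading can also be recovered from optic flow by a purely geometric method: remove the extraretinally signalled rotational component, then locate the focus of expansion by least squares. The sketch below shows that generic alternative, not the MSTd model's template-free population read-out; all values are illustrative.

        import numpy as np

        # Coarse sample grid over the visual field (degrees)
        xs, ys = np.meshgrid(np.linspace(-40, 40, 21), np.linspace(-40, 40, 21))

        def expansion_flow(foe_x, foe_y):
            """Toy translational optic flow radiating from a focus of expansion."""
            return 0.05 * (xs - foe_x), 0.05 * (ys - foe_y)

        def estimate_heading(u, v, eye_u=0.0, eye_v=0.0):
            """Subtract the (extraretinally signalled) eye-movement flow, then find the
            focus of expansion by least squares, using the fact that every remaining
            flow vector must point directly away from it."""
            u, v = (u - eye_u).ravel(), (v - eye_v).ravel()
            A = np.column_stack([v, -u])
            b = v * xs.ravel() - u * ys.ravel()
            foe, *_ = np.linalg.lstsq(A, b, rcond=None)
            return foe                                     # (x, y) of the heading direction

        # Demo: heading toward (10, -5) with a small uniform flow added by an eye movement
        u, v = expansion_flow(10.0, -5.0)
        print(estimate_heading(u + 0.2, v, eye_u=0.2))     # ~[10, -5]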

  6. Differences in visual vs. verbal memory impairments as a result of focal temporal lobe damage in patients with traumatic brain injury.

    PubMed

    Ariza, Mar; Pueyo, Roser; Junqué, Carme; Mataró, María; Poca, María Antonia; Mena, Maria Pau; Sahuquillo, Juan

    2006-09-01

    The aim of the present study was to determine whether the type of lesion in a sample of patients with moderate and severe traumatic brain injury (TBI) was related to material-specific memory impairment. Fifty-nine patients with TBI were classified into three groups according to whether the site of the lesion was right temporal, left temporal or diffuse. Six months post-injury, visual (Warrington's Facial Recognition Memory Test and Rey's Complex Figure Test) and verbal (Rey's Auditory Verbal Learning Test) memories were assessed. Visual memory deficits assessed by facial memory were associated with right temporal lobe lesions, whereas verbal memory performance assessed with a list of words was related to left temporal lobe lesions. The group with diffuse injury showed both verbal and visual memory impairment. These results suggest a material-specific memory impairment in moderate and severe TBI after focal temporal lesions and a non-specific memory impairment after diffuse damage.

  7. Visual motion integration by neurons in the middle temporal area of a New World monkey, the marmoset

    PubMed Central

    Solomon, Selina S; Tailby, Chris; Gharaei, Saba; Camp, Aaron J; Bourne, James A; Solomon, Samuel G

    2011-01-01

    The middle temporal area (MT/V5) is an anatomically distinct region of primate visual cortex that is specialized for the processing of image motion. It is generally thought that some neurons in area MT are capable of signalling the motion of complex patterns, but this has only been established in the macaque monkey. We made extracellular recordings from single units in area MT of anaesthetized marmosets, a New World monkey. We show through quantitative analyses that some neurons (35 of 185; 19%) are capable of signalling pattern motion (‘pattern cells’). Across several dimensions, the visual response of pattern cells in marmosets is indistinguishable from that of pattern cells in macaques. Other neurons respond to the motion of oriented contours in a pattern (‘component cells’) or show intermediate properties. In addition, we encountered a subset of neurons (22 of 185; 12%) insensitive to sinusoidal gratings but very responsive to plaids and other two-dimensional patterns and otherwise indistinguishable from pattern cells. We compared the response of each cell class to drifting gratings and dot fields. In pattern cells, directional selectivity was similar for gratings and dot fields; in component cells, directional selectivity was weaker for dot fields than gratings. Pattern cells were more likely to have stronger suppressive surrounds, prefer lower spatial frequencies and prefer higher speeds than component cells. We conclude that pattern motion sensitivity is a feature of some neurons in area MT of both New and Old World monkeys, suggesting that this functional property is an important stage in motion analysis and is likely to be conserved in humans. PMID:21946851

  8. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm are demonstrated on synthetic images. A population of 17 temporal and spatial image sequences is used to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method significantly outperforms linear and shape-based interpolation. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
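
    The general idea (estimate a dense flow field between two slices, then warp along it to synthesize an intermediate slice) can be sketched with off-the-shelf tools. The snippet below substitutes OpenCV's Farnebäck flow for the iterative optical-flow registration described above and uses a simple forward/backward warp and blend, so it is only a rough analogue of the published method; the file names in the usage comment are hypothetical.

        import cv2
        import numpy as np

        def interpolate_midslice(img0, img1, alpha=0.5):
            """Synthesize an intermediate slice between img0 and img1 (8-bit grayscale).
            Farneback dense optical flow stands in for the registration step; the warp
            below ignores occlusions and evaluates the flow on the source grid, so it
            is only an approximation of true flow-based interpolation."""
            flow = cv2.calcOpticalFlowFarneback(img0, img1, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            h, w = img0.shape
            gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
            # Backward-warp both endpoint images part of the way along the flow field
            warp0 = cv2.remap(img0, gx - alpha * flow[..., 0], gy - alpha * flow[..., 1],
                              cv2.INTER_LINEAR)
            warp1 = cv2.remap(img1, gx + (1 - alpha) * flow[..., 0], gy + (1 - alpha) * flow[..., 1],
                              cv2.INTER_LINEAR)
            return ((1 - alpha) * warp0 + alpha * warp1).astype(np.uint8)

        # Usage (hypothetical file names):
        # mid = interpolate_midslice(cv2.imread("slice_t0.png", 0), cv2.imread("slice_t1.png", 0))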

  9. Change in peripheral refraction and curvature of field of the human eye with accommodation

    NASA Astrophysics Data System (ADS)

    Ho, Arthur; Zimmermann, Frederik; Whatham, Andrew; Martinez, Aldo; Delgado, Stephanie; Lazon de la Jara, Percy; Sankaridurg, Padmaja

    2009-02-01

    Recent research showed that the peripheral refractive state is a sufficient stimulus for myopia progression. This finding led to the suggestion that devices that control peripheral refraction may be efficacious in controlling myopia progression. This study aims to understand whether the optical effect of such devices may be affected by near focus. In particular, we seek to understand the influence of accommodation on peripheral refraction and curvature of field of the eye. Refraction was measured in twenty young subjects using an autorefractor at 0° (i.e. along visual axis), and 20°, 30° and 40° field angles both nasal and temporal to the visual axis. All measurements were conducted at 2.5 m, 40 cm and 30 cm viewing distances. Refractive errors were corrected using a soft contact lens during all measurements. As field angle increased, refraction became less hyperopic. Peripheral refraction also became less hyperopic at nearer viewing distances (i.e. with increasing accommodation). Astigmatism (J180) increased with field angle as well as with accommodation. Adopting a third-order aberration theory approach, the position of the Petzval surface relative to the retinal surface was estimated by considering the relative peripheral refractive error (RPRE) and J180 terms of peripheral refraction. Results for the estimated dioptric position of the Petzval surface relative to the retina showed substantial asymmetry. While temporal field tended to agree with theoretical predictions, nasal response departed dramatically from the model eye predictions. With increasing accommodation, peripheral refraction becomes less hyperopic while the Petzval surface showed asymmetry in its change in position. The change in the optical components (i.e. cornea and/or lens as opposed to retinal shape or position) is implicated as at least one of the contributors of this shift in peripheral refraction during accommodation.

  10. Negative dysphotopsia: Causes and rationale for prevention and treatment.

    PubMed

    Holladay, Jack T; Simpson, Michael J

    2017-02-01

    To determine the cause of negative dysphotopsia using standard ray-tracing techniques and identify the primary and secondary causative factors. Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Experimental study. Zemax ray-tracing software was used to evaluate pseudophakic and phakic eye models to show the location of retinal field images from various visual field objects. Phakic retinal field angles (RFAs) were used as a reference for the perceived field locations for retinal images in pseudophakic eyes. In a nominal acrylic pseudophakic eye model with a 2.5 mm diameter pupil, the maximum RFA from rays refracted by the intraocular lens (IOL) was 85.7 degrees and the minimum RFA for rays missing the optic of the IOL was 88.3 degrees, leaving a dark gap (shadow) of 2.6 degrees in the extreme temporal field. The width of the shadow was more prominent for a smaller pupil, a larger angle kappa, an equi-biconvex or plano-convex IOL shape, and a smaller axial distance from iris to IOL and with the anterior capsule overlying the nasal IOL. Secondary factors included IOL edge design, material, diameter, decentration, tilt, and aspheric surfaces. Standard ray-tracing techniques showed that a shadow is present when there is a gap between the retinal images formed by rays missing the optic of the IOL and rays refracted by the IOL. Primary and secondary factors independently affected the width and location of the gap (or overlap). The ray tracing also showed a constriction and double retinal imaging in the extreme temporal visual field. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  11. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper for automatic action recognition. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In this model, a family of three-dimensional spatial-temporal Gabor filters is used to model the dynamic properties of the classical receptive fields of V1 simple cells tuned to different speeds and orientations, in order to detect spatiotemporal information in video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, a surround suppression operator is proposed to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into the model to filter and group different regions. Moreover, in order to represent human action, a characteristic of the neural code is considered: a mean motion map based on analysis of the spike trains generated by spiking neurons. Experimental evaluation on publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
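
    For orientation, the snippet below constructs one three-dimensional (x, y, t) Gabor kernel of the kind such models use: a drifting sinusoidal carrier under spatial and temporal Gaussian envelopes. The parameter values and function name are illustrative assumptions rather than the published model's settings.

        # Illustrative sketch (assumed parameters, not the published model): one 3-D
        # spatio-temporal Gabor kernel tuned to a preferred orientation and speed.
        import numpy as np

        def spatiotemporal_gabor(size=15, frames=7, wavelength=6.0,
                                 orientation=0.0, speed=1.0, sigma=3.0, sigma_t=2.0):
            """Return a (frames, size, size) Gabor kernel drifting at `speed` px/frame."""
            half = size // 2
            t = np.arange(frames) - frames // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]

            # Rotate spatial coordinates to the preferred orientation.
            x_theta = x * np.cos(orientation) + y * np.sin(orientation)

            kernel = np.empty((frames, size, size))
            for i, ti in enumerate(t):
                # The carrier grating drifts along x_theta over time, giving the
                # filter its speed (and hence direction) selectivity.
                carrier = np.cos(2 * np.pi * (x_theta - speed * ti) / wavelength)
                envelope = (np.exp(-(x**2 + y**2) / (2 * sigma**2))
                            * np.exp(-ti**2 / (2 * sigma_t**2)))
                kernel[i] = carrier * envelope
            return kernel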

  12. More is still not better: testing the perturbation model of temporal reference memory across different modalities and tasks.

    PubMed

    Ogden, Ruth S; Jones, Luke A

    2009-05-01

    The ability of the perturbation model (Jones & Wearden, 2003) to account for reference memory function in a visual temporal generalization task and auditory and visual reproduction tasks was examined. In all tasks the number of presentations of the standard was manipulated (1, 3, or 5), and its effect on performance was compared. In visual temporal generalization, the number of presentations of the standard did not affect the number of times the standard was correctly identified, nor did it affect the overall temporal generalization gradient. In auditory reproduction, there was no effect of the number of times the standard was presented on mean reproductions. In visual reproduction, mean reproductions were shorter when the standard was only presented once; however, this effect was reduced when a visual cue was provided before the first presentation of the standard. Whilst the results of all experiments are best accounted for by the perturbation model, there appears to be some attentional benefit to multiple presentations of the standard in visual reproduction.

  13. Assessing the effect of physical differences in the articulation of consonants and vowels on audiovisual temporal perception

    PubMed Central

    Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles

    2012-01-01

    We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech-stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual-speech signal is more salient, temporal perception of speech would be modulated by the visual-speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly-salient speech-signals with the visual-signals requiring smaller visual-leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual-speech signal may lead to higher probabilities regarding the identity of the auditory-signal that modulate the temporal window of multisensory integration of the speech-stimulus. PMID:23060756

  14. Gender effect in human brain responses to bottom-up and top-down attention using the EEG 3D-Vector Field Tomography.

    PubMed

    Kosmidou, Vasiliki E; Adam, Aikaterini; Papadaniil, Chrysa D; Tsolaki, Magda; Hadjileontiadis, Leontios J; Kompatsiaris, Ioannis

    2015-01-01

    The effect of gender in rapidly allocating attention to objects, features or locations, as reflected in brain activity, is examined in this study. A visual-attention task, consisting of bottom-up (visual pop-out) and top-down (visual search) conditions during stimuli of four triangles, i.e., a target and three distractors, was engaged. In the pop-out condition, both the color and orientation of the distractors differed from the target, while in the search condition they differed only in orientation. During the task, high-density EEG (256 channels) data were recorded and analyzed by means of behavioral analysis, event-related potentials, i.e., the P300 component, and brain source localization using 3D-Vector Field Tomography (3D-VFT). Twenty subjects (half female; 32±4.7 years old) participated in the experiments, performing 60 trials for each condition. Behavioral analysis revealed that both females and males performed better in the pop-out condition than in the search condition, with respect to accuracy and reaction time, whereas no gender-related statistically significant differences were found. Nevertheless, in the search condition, higher P300 amplitudes were detected for females compared to males (p < 7 × 10⁻³). Moreover, the findings suggested that the maximum activation in females was located mainly in the left inferior frontal and superior temporal gyri, whereas in males it was found in the right inferior frontal and superior temporal gyri. Overall, the experimental results show that visual attention depends on contributions from different brain lateralization linked to gender, posing important implications in studying developmental disorders characterized by gender differences.

  15. Visual development in primates: Neural mechanisms and critical periods

    PubMed Central

    Kiorpes, Lynne

    2015-01-01

    Despite many decades of research into the development of visual cortex, it remains unclear what neural processes set limitations on the development of visual function and define its vulnerability to abnormal visual experience. This selected review examines the development of visual function and its neural correlates, and highlights the fact that in most cases receptive field properties of infant neurons are substantially more mature than infant visual function. One exception is temporal resolution, which can be accounted for by resolution of neurons at the level of the LGN. In terms of spatial vision, properties of single neurons alone are not sufficient to account for visual development. Different visual functions develop over different time courses. Their onset may be limited by the existence of neural response properties that support a given perceptual ability, but the subsequent time course of maturation to adult levels remains unexplained. Several examples are offered suggesting that taking account of weak signaling by infant neurons, correlated firing, and pooled responses of populations of neurons brings us closer to an understanding of the relationship between neural and behavioral development. PMID:25649764

  16. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

    Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the bimodal response with the sum of the unisensory responses to define multisensory responses. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279

  17. Temporal Statistics of Natural Image Sequences Generated by Movements with Insect Flight Characteristics

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens Peter; Egelhaaf, Martin

    2014-01-01

    Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational efforts to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study analyzes systematically the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency of up to two orders of magnitude larger than translational movements at velocities that are characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments. PMID:25340761
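
    A minimal sketch of how the three image parameters named above could be computed for a single image patch is given below; the specific definitions (RMS contrast and an amplitude-weighted mean spatial frequency) are reasonable illustrative choices, not the authors' analysis code.

        # Hedged sketch of per-frame patch statistics: mean brightness, RMS contrast,
        # and a mean-spatial-frequency summary of the patch's power spectrum.
        import numpy as np

        def patch_statistics(patch):
            """patch: 2-D array of pixel intensities for one frame."""
            brightness = patch.mean()
            contrast = patch.std() / (brightness + 1e-12)      # RMS contrast

            # Amplitude-weighted mean spatial frequency from the 2-D spectrum.
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch - brightness)))
            fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(patch.shape[0])),
                                 np.fft.fftshift(np.fft.fftfreq(patch.shape[1])),
                                 indexing="ij")
            radial_freq = np.hypot(fx, fy)
            mean_freq = (radial_freq * spectrum).sum() / (spectrum.sum() + 1e-12)
            return brightness, contrast, mean_freq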

  18. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  20. Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.

    PubMed

    Wiemers, Michael; Fischer, Martin H

    2016-01-01

    Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.

  1. Spatial and Temporal Signatures of Flux Transfer Events in Global Simulations of Magnetopause Dynamics

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Sibeck, David Gary; Hesse, Michael; Berrios, David; Rastaetter, Lutz; Toth, Gabor; Gombosi, Tamas I.

    2011-01-01

    Flux transfer events (FTEs) were originally identified by transient bipolar variations of the magnetic field component normal to the nominal magnetopause, centered on enhancements in the total magnetic field strength. Recent Cluster and THEMIS multi-point measurements provided a wide range of signatures that are interpreted as evidence for FTE passage (e.g., crater FTEs, traveling magnetic erosion regions). We use the global magnetohydrodynamic (MHD) code BATS-R-US, developed at the University of Michigan, to model the global three-dimensional structure and temporal evolution of FTEs during multi-spacecraft magnetopause crossing events. A comparison of observed and simulated signatures and a sensitivity analysis of the results to the probe location will be presented. We will demonstrate a variety of observable signatures in the magnetic field profile that depend on space probe location with respect to the FTE passage. The global structure of FTEs will be illustrated using advanced visualization tools developed at the Community Coordinated Modeling Center.

  2. Temporal and Spatial Predictability of an Irrelevant Event Differently Affect Detection and Memory of Items in a Visual Sequence

    PubMed Central

    Ohyama, Junji; Watanabe, Katsumi

    2016-01-01

    We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in 2 × 2 conditions. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable conditions, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time (RT) task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection RTs were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images. PMID:26869966

  4. In Vivo Dark-Field Radiography for Early Diagnosis and Staging of Pulmonary Emphysema.

    PubMed

    Hellbach, Katharina; Yaroshenko, Andre; Meinel, Felix G; Yildirim, Ali Ö; Conlon, Thomas M; Bech, Martin; Mueller, Mark; Velroyen, Astrid; Notohamiprodjo, Mike; Bamberg, Fabian; Auweter, Sigrid; Reiser, Maximilian; Eickelberg, Oliver; Pfeiffer, Franz

    2015-07-01

    The aim of this study was to evaluate the suitability of in vivo x-ray dark-field radiography for early-stage diagnosis of pulmonary emphysema in mice. Furthermore, we aimed to analyze how the dark-field signal correlates with morphological changes of lung architecture at distinct stages of emphysema. Female 8- to 10-week-old C57Bl/6N mice were used throughout all experiments. Pulmonary emphysema was induced by orotracheal injection of porcine pancreatic elastase (80 U/kg body weight) (n = 30). Control mice (n = 11) received orotracheal injection of phosphate-buffered saline. To monitor the temporal pattern of emphysema development, the mice were imaged 7, 14, or 21 days after the application of elastase or phosphate-buffered saline. X-ray transmission and dark-field images were acquired with a prototype grating-based small-animal scanner. In vivo pulmonary function tests were performed before killing the animals. In addition, lungs were obtained for detailed histopathological analysis, including mean cord length (MCL) quantification as a parameter for the assessment of emphysema. Three blinded readers, all of them experienced radiologists and familiar with dark-field imaging, were asked to grade the severity of emphysema for both dark-field and transmission images. Histopathology and MCL quantification confirmed the induction of different stages of emphysema, which could be clearly visualized and differentiated on the dark-field radiograms, whereas early stages were not detected on transmission images. The correlation between MCL and dark-field signal intensities (r = 0.85) was significantly higher than the correlation between MCL and transmission signal intensities (r = 0.37). The readers' visual ratings for dark-field images correlated significantly better with MCL (r = 0.85) than visual ratings for transmission images (r = 0.36). Interreader agreement and the diagnostic accuracy of both quantitative and visual assessment were significantly higher for dark-field imaging than those for conventional transmission images. X-ray dark-field radiography can reliably visualize different stages of emphysema in vivo and demonstrates significantly higher diagnostic accuracy for early stages of emphysema than conventional attenuation-based radiography.

  5. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  6. Masking with faces in central visual field under a variety of temporal schedules.

    PubMed

    Daar, Marwan; Wilson, Hugh R

    2015-11-01

    With a few exceptions, previous studies have explored masking using either a backward mask or a common onset trailing mask, but not both. In a series of experiments, we demonstrate the use of faces in central visual field as a viable method to study the relationship between these two types of mask schedule. We tested observers in a two alternative forced choice face identification task, where both target and mask comprised synthetic faces, and show that a simple model can successfully predict masking across a variety of masking schedules ranging from a backward mask to a common onset trailing mask and a number of intermediate variations. Our data are well accounted for by a window of sensitivity to mask interference that is centered at around 100 ms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Effects of local myopic defocus on refractive development in monkeys.

    PubMed

    Smith, Earl L; Hung, Li-Fang; Huang, Juan; Arumugam, Baskar

    2013-11-01

    Visual signals that produce myopia are mediated by local, regionally selective mechanisms. However, little is known about spatial integration for signals that slow eye growth. The purpose of this study was to determine whether the effects of myopic defocus are integrated in a local manner in primates. Beginning at 24 ± 2 days of age, seven rhesus monkeys were reared with monocular spectacles that produced 3 diopters (D) of relative myopic defocus in the nasal visual field of the treated eye but allowed unrestricted vision in the temporal field (NF monkeys). Seven monkeys were reared with monocular +3 D lenses that produced relative myopic defocus across the entire field of view (FF monkeys). Comparison data from previous studies were available for 11 control monkeys, 8 monkeys that experienced 3 D of hyperopic defocus in the nasal field, and 6 monkeys exposed to 3 D of hyperopic defocus across the entire field. Refractive development, corneal power, and axial dimensions were assessed at 2- to 4-week intervals using retinoscopy, keratometry, and ultrasonography, respectively. Eye shape was assessed using magnetic resonance imaging. In response to full-field myopic defocus, the FF monkeys developed compensating hyperopic anisometropia, the degree of which was relatively constant across the horizontal meridian. In contrast, the NF monkeys exhibited compensating hyperopic changes in refractive error that were greatest in the nasal visual field. The changes in the pattern of peripheral refractions in the NF monkeys reflected interocular differences in vitreous chamber shape. As with form deprivation and hyperopic defocus, the effects of myopic defocus are mediated by mechanisms that integrate visual signals in a local, regionally selective manner in primates. These results are in agreement with the hypothesis that peripheral vision can influence eye shape and potentially central refractive error in a manner that is independent of central visual experience.

  8. Not one extrastriate body area: Using anatomical landmarks, hMT+, and visual field maps to parcellate limb-selective activations in human lateral occipitotemporal cortex

    PubMed Central

    Weiner, Kevin S.; Grill-Spector, Kalanit

    2011-01-01

    The prevailing view of human lateral occipitotemporal cortex (LOTC) organization suggests a single area selective for images of the human body (extrastriate body area, EBA) that highly overlaps with the human motion-selective complex (hMT+). Using functional magnetic resonance imaging with higher resolution (1.5mm voxels) than past studies (3–4mm voxels), we examined the fine-scale spatial organization of these activations relative to each other, as well as to visual field maps in LOTC. Rather than one contiguous EBA highly overlapping hMT+, results indicate three limb-selective activations organized in a crescent surrounding hMT+: (1) an activation posterior to hMT+ on the lateral occipital sulcus/middle occipital gyrus (LOS/MOG) overlapping the lower vertical meridian shared between visual field maps LO-2 and TO-1, (2) an activation anterior to hMT+ on the middle temporal gyrus (MTG) consistently overlapping the lower vertical meridian of TO-2 and extending outside presently defined visual field maps, and (3) an activation inferior to hMT+ on the inferotemporal gyrus (ITG) overlapping the parafoveal representation of the TO cluster. This crescent organization of limb-selective activations surrounding hMT+ is reproducible over a span of three years and is consistent across different image types used for localization. Further, these regions exhibit differential position properties: preference for contralateral image presentation decreases and preference for foveal presentation increases from the limb-selective LOS to the MTG. Finally, the relationship between limb-selective activations and visual field maps extends to the dorsal stream where a posterior IPS activation overlaps V7. Overall, our measurements demonstrate a series of LOTC limb-selective activations that 1) have separate anatomical and functional boundaries, 2) overlap distinct visual field maps, and 3) illustrate differential position properties. These findings indicate that category selectivity alone is an insufficient organization principle for defining brain areas. Instead, multiple properties are necessary in order to parcellate and understand the functional organization of high-level visual cortex. PMID:21439386

  9. Being First Matters: Topographical Representational Similarity Analysis of ERP Signals Reveals Separate Networks for Audiovisual Temporal Binding Depending on the Leading Sense

    PubMed Central

    2017-01-01

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537

  10. Hemispheric asymmetries of a motor memory in a recognition test after learning a movement sequence.

    PubMed

    Leinen, Peter; Panzer, Stefan; Shea, Charles H

    2016-11-01

    Two experiments utilizing a spatial-temporal movement sequence were designed to determine if the memory of the sequence is lateralized in the left or right hemisphere. In Experiment 1, dominant right-handers were randomly assigned to one of two acquisition groups: a left-hand starter and a right-hand starter group. After an acquisition phase, reaction time (RT) was measured in a recognition test by providing the learned sequential pattern in the left or right visual half-field for 150ms. In a retention test and two transfer tests the dominant coordinate system for sequence production was evaluated. In Experiment 2 dominant left-handers and dominant right-handers had to acquire the sequence with their dominant limb. The results of Experiment 1 indicated that RT was significantly shorter when the acquired sequence was provided in the right visual field during the recognition test. The same results occurred in Experiment 2 for dominant right-handers and left-handers. These results indicated a right visual field left hemisphere advantage in the recognition test for the practiced stimulus for dominant left and right-handers, when the task was practiced with the dominant limb. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Electrophysiological indices of surround suppression in humans

    PubMed Central

    Vanegas, M. Isabel; Blangero, Annabelle

    2014-01-01

    Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464
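
    As an aside, a minimal sketch of how an SSVEP amplitude at the foreground flicker frequency can be read out of a single epoch is shown below; the sampling rate, flicker frequency, and windowing choice are illustrative assumptions, not the parameters used in the study.

        # Hedged sketch: spectral amplitude at an assumed flicker (tag) frequency.
        import numpy as np

        def ssvep_amplitude(epoch, fs, flicker_hz):
            """epoch: 1-D EEG time series; fs: sampling rate in Hz."""
            n = len(epoch)
            spectrum = np.fft.rfft(epoch * np.hanning(n))
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            # Amplitude of the bin closest to the flicker frequency.
            idx = np.argmin(np.abs(freqs - flicker_hz))
            return 2.0 * np.abs(spectrum[idx]) / n

        # Example: a 2-s epoch sampled at 500 Hz with a hypothetical 25 Hz flicker.
        fs, flicker_hz = 500, 25
        t = np.arange(0, 2, 1.0 / fs)
        epoch = 1e-6 * np.sin(2 * np.pi * flicker_hz * t) + 1e-7 * np.random.randn(len(t))
        print(ssvep_amplitude(epoch, fs, flicker_hz))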

  12. Differential priming effects of color-opponent subliminal stimulation on visual magnetic responses.

    PubMed

    Hoshiyama, Minoru; Kakigi, Ryusuke; Takeshima, Yasuyuki; Miki, Kensaku; Watanabe, Shoko

    2006-10-01

    We investigated the effects of subliminal stimulation on visible stimulation to demonstrate the priority of facial discrimination processing, using a unique, indiscernible, color-opponent subliminal (COS) stimulation. We recorded event-related magnetic cortical fields (ERF) by magnetoencephalography (MEG) after the presentation of a face or flower stimulus with COS conditioning using a face, flower, random pattern, and blank. The COS stimulation enhanced the response to visible stimulation when the figure in the COS stimulation was identical to the target visible stimulus, but more so for the face than for the flower stimulus. The ERF component modulated by the COS stimulation was estimated to be located in the ventral temporal cortex. We speculated that the enhancement was caused by an interaction of the responses after subthreshold stimulation by the COS stimulation and the suprathreshold stimulation after target stimulation, such as in the processing for categorization or discrimination. We also speculated that the face was processed with priority at the level of the ventral temporal cortex during visual processing outside of consciousness.

  13. Ambiguous Figures – What Happens in the Brain When Perception Changes But Not the Stimulus

    PubMed Central

    Kornmeier, Jürgen; Bach, Michael

    2011-01-01

    During observation of ambiguous figures our perception reverses spontaneously although the visual information stays unchanged. Research on this phenomenon so far suffered from the difficulty to determine the instant of the endogenous reversals with sufficient temporal precision. A novel experimental paradigm with discontinuous stimulus presentation improved on previous temporal estimates of the reversal event by a factor of three. It revealed that disambiguation of ambiguous visual information takes roughly 50 ms or two loops of recurrent neural activity. Further, the decision about the perceptual outcome has taken place at least 340 ms before the observer is able to indicate the consciously perceived reversal manually. We provide a short review about physiological studies on multistable perception with a focus on electrophysiological data. We further present a new perspective on multistable perception that can easily integrate previous apparently contradicting explanatory approaches. Finally we propose possible extensions toward other research fields where ambiguous figure perception may be useful as an investigative tool. PMID:22461773

  14. Early, but not late visual distractors affect movement synchronization to a temporal-spatial visual cue.

    PubMed

    Booth, Ashley J; Elliott, Mark T

    2015-01-01

    The ease of synchronizing movements to a rhythmic cue is dependent on the modality of the cue presentation: timing accuracy is much higher when synchronizing with discrete auditory rhythms than an equivalent visual stimulus presented through flashes. However, timing accuracy is improved if the visual cue presents spatial as well as temporal information (e.g., a dot following an oscillatory trajectory). Similarly, when synchronizing with an auditory target metronome in the presence of a second visual distracting metronome, the distraction is stronger when the visual cue contains spatial-temporal information rather than temporal only. The present study investigates individuals' ability to synchronize movements to a temporal-spatial visual cue in the presence of same-modality temporal-spatial distractors. Moreover, we investigated how increasing the number of distractor stimuli impacted on maintaining synchrony with the target cue. Participants made oscillatory vertical arm movements in time with a vertically oscillating white target dot centered on a large projection screen. The target dot was surrounded by 2, 8, or 14 distractor dots, which had an identical trajectory to the target but at a phase lead or lag of 0, 100, or 200 ms. We found participants' timing performance was only affected in the phase-lead conditions and when there were large numbers of distractors present (8 and 14). This asymmetry suggests participants still rely on salient events in the stimulus trajectory to synchronize movements. Subsequently, distractions occurring in the window of attention surrounding those events have the maximum impact on timing performance.

  15. Precortical dysfunction of spatial and temporal visual processing in migraine.

    PubMed Central

    Coleston, D M; Chronicle, E; Ruddock, K H; Kennard, C

    1994-01-01

    This paper examines spatial and temporal processing in migraineurs (diagnosed according to International Headache Society criteria, 1988), using psychophysical tests that measure spatial and temporal responses. These tests are considered to specifically assess precortical mechanisms. Results suggest precortical dysfunction for processing of spatial and temporal visual stimuli in 11 migraineurs with visual aura and 13 migraineurs without aura; the two groups could not be distinguished. As precortical dysfunction seems to be common to both groups of patients, it is suggested that symptoms that are experienced by both groups, such as blurring of vision and photophobia, may have their basis at a precortical level. PMID:7931382

  16. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT). Copyright © 2010 Elsevier Ltd. All rights reserved.
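
    As a worked example of the bandwidth figure quoted above, the snippet below converts the 26-degree full width at half height into a Gaussian standard deviation and evaluates the corresponding tuning weight; the Gaussian form itself is an illustrative assumption, not the authors' fitted model.

        # Worked example (illustrative only): direction-tuning weight derived from
        # the reported 26-deg full width at half height.
        import numpy as np

        fwhh_deg = 26.0
        sigma_deg = fwhh_deg / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # ~11 deg

        def direction_weight(delta_angle_deg):
            """Gaussian weight for a perturbation `delta_angle_deg` away from
            the main motion direction."""
            return np.exp(-0.5 * (delta_angle_deg / sigma_deg) ** 2)

        print(round(sigma_deg, 1))        # ~11.0
        print(direction_weight(13.0))     # ~0.5, i.e. half height at +/- 13 deg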

  17. Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.

    PubMed

    Stöckl, A L; O'Carroll, D; Warrant, E J

    2017-06-28

    To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies. © 2017 The Author(s).

  18. Interactive and coordinated visualization approaches for biological data analysis.

    PubMed

    Cruz, António; Arrais, Joel P; Machado, Penousal

    2018-03-26

    The field of computational biology has become largely dependent on data visualization tools to analyze the increasing quantities of data gathered through the use of new and growing technologies. Aside from the volume, which often results in large amounts of noise and complex relationships with no clear structure, the visualization of biological data sets is hindered by their heterogeneity, as data are obtained from different sources and contain a wide variety of attributes, including spatial and temporal information. This requires visualization approaches that are able to not only represent various data structures simultaneously but also provide exploratory methods that allow the identification of meaningful relationships that would not be perceptible through data analysis algorithms alone. In this article, we present a survey of visualization approaches applied to the analysis of biological data. We focus on graph-based visualizations and tools that use coordinated multiple views to represent high-dimensional multivariate data, in particular time series gene expression, protein-protein interaction networks and biological pathways. We then discuss how these methods can be used to help solve the current challenges surrounding the visualization of complex biological data sets.

  19. VAUD: A Visual Analysis Approach for Exploring Spatio-Temporal Urban Data.

    PubMed

    Chen, Wei; Huang, Zhaosong; Wu, Feiran; Zhu, Minfeng; Guan, Huihua; Maciejewski, Ross

    2017-10-02

    Urban data is massive, heterogeneous, and spatio-temporal, posing a substantial challenge for visualization and analysis. In this paper, we design and implement a novel visual analytics approach, Visual Analyzer for Urban Data (VAUD), that supports the visualization, querying, and exploration of urban data. Our approach allows for cross-domain correlation from multiple data sources by leveraging spatial-temporal and social inter-connectedness features. Through our approach, the analyst is able to select, filter, and aggregate across multiple data sources and extract information that would be hidden to a single data subset. To illustrate the effectiveness of our approach, we provide case studies on a real urban dataset that contains the cyber-, physical-, and social information of 14 million citizens over 22 days.

  20. Implementation of visual data mining for unsteady blood flow field in an aortic aneurysm.

    PubMed

    Morizawa, Seiichiro; Shimoyama, Koji; Obayashi, Shigeru; Funamoto, Kenichi; Hayase, Toshiyuki

    2011-12-01

    This study was performed to determine the relations between the features of wall shear stress and aneurysm rupture. For this purpose, visual data mining was performed in unsteady blood flow simulation data for an aortic aneurysm. The time-series data of wall shear stress given at each grid point were converted to spatial and temporal indices, and the grid points were sorted using a self-organizing map based on the similarity of these indices. Next, the results of cluster analysis were mapped onto the real space of the aortic aneurysm to specify the regions that may lead to aneurysm rupture. With reference to previous reports regarding aneurysm rupture, the visual data mining suggested specific hemodynamic features that cause aneurysm rupture.
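
    The sketch below is a hypothetical, compact version of the sorting step described above: grid points, each described by a small vector of spatial and temporal wall-shear-stress indices, are organized with a self-organizing map so that every point receives a map unit (cluster) that can later be painted back onto the aneurysm geometry. Function names, map size, and learning schedule are illustrative assumptions, not the authors' implementation.

        # Hypothetical minimal self-organizing map over per-point shear-stress indices.
        import numpy as np

        def train_som(features, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
            """features: (n_points, n_indices) array with n_points >= grid[0]*grid[1];
            returns SOM weights and the best-matching-unit index for every point."""
            rng = np.random.default_rng(seed)
            n_units = grid[0] * grid[1]
            units_xy = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
            weights = features[rng.choice(len(features), n_units, replace=False)].astype(float)

            for epoch in range(epochs):
                lr = lr0 * np.exp(-epoch / epochs)
                sigma = sigma0 * np.exp(-epoch / epochs)
                for x in features[rng.permutation(len(features))]:
                    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
                    # Gaussian neighbourhood on the 2-D map lattice.
                    dist2 = ((units_xy - units_xy[bmu]) ** 2).sum(axis=1)
                    h = np.exp(-dist2 / (2.0 * sigma ** 2))
                    weights += lr * h[:, None] * (x - weights)

            bmus = np.array([np.argmin(((weights - x) ** 2).sum(axis=1)) for x in features])
            return weights, bmus

    Mapping each point's unit label back onto the vessel surface then highlights candidate regions of the kind the abstract associates with rupture risk.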

  1. Introducing the VISAGE project - Visualization for Integrated Satellite, Airborne, and Ground-based data Exploration

    NASA Astrophysics Data System (ADS)

    Gatlin, P. N.; Conover, H.; Berendes, T.; Maskey, M.; Naeger, A. R.; Wingo, S. M.

    2017-12-01

    A key component of NASA's Earth observation system is its field experiments, for intensive observation of particular weather phenomena, or for ground validation of satellite observations. These experiments collect data from a wide variety of airborne and ground-based instruments, on different spatial and temporal scales, often in unique formats. The field data are often used with high volume satellite observations that have very different spatial and temporal coverage. The challenges inherent in working with such diverse datasets make it difficult for scientists to rapidly collect and analyze the data for physical process studies and validation of satellite algorithms. The newly-funded VISAGE project will address these issues by combining and extending nascent efforts to provide on-line data fusion, exploration, analysis and delivery capabilities. A key building block is the Field Campaign Explorer (FCX), which allows users to examine data collected during field campaigns and simplifies data acquisition for event-based research. VISAGE will extend FCX's capabilities beyond interactive visualization and exploration of coincident datasets, to provide interrogation of data values and basic analyses such as ratios and differences between data fields. The project will also incorporate new, higher level fused and aggregated analysis products from the System for Integrating Multi-platform data to Build the Atmospheric column (SIMBA), which combines satellite and ground-based observations into a common gridded atmospheric column data product; and the Validation Network (VN), which compiles a nationwide database of coincident ground- and satellite-based radar measurements of precipitation for larger scale scientific analysis. The VISAGE proof-of-concept will target "golden cases" from Global Precipitation Measurement Ground Validation campaigns. This presentation will introduce the VISAGE project, initial accomplishments and near term plans.

  2. Learning of goal-relevant and -irrelevant complex visual sequences in human V1.

    PubMed

    Rosenthal, Clive R; Mallik, Indira; Caballero-Gaudes, Cesar; Sereno, Martin I; Soto, David

    2018-06-12

    Learning and memory are supported by a network involving the medial temporal lobe and linked neocortical regions. Emerging evidence indicates that primary visual cortex (i.e., V1) may contribute to recognition memory, but this has been tested only with a single visuospatial sequence as the target memorandum. The present study used functional magnetic resonance imaging to investigate whether human V1 can support the learning of multiple, concurrent complex visual sequences involving discontinuous (second-order) associations. Two peripheral, goal-irrelevant but structured sequences of orientated gratings appeared simultaneously in fixed locations of the right and left visual fields alongside a central, goal-relevant sequence that was in the focus of spatial attention. Pseudorandom sequences were introduced at multiple intervals during the presentation of the three structured visual sequences to provide an online measure of sequence-specific knowledge at each retinotopic location. We found that a network involving the precuneus and V1 was involved in learning the structured sequence presented at central fixation, whereas right V1 was modulated by repeated exposure to the concurrent structured sequence presented in the left visual field. The same result was not found in left V1. These results indicate for the first time that human V1 can support the learning of multiple concurrent sequences involving complex discontinuous inter-item associations, even peripheral sequences that are goal-irrelevant. Copyright © 2018. Published by Elsevier Inc.

  3. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existing spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
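
    A much-simplified sketch of the grouping idea behind a storyline view follows: each observer's fixations are discretized into screen-region labels per time bin, and observers who share a region at a given bin are drawn as one bundle of lines. The region labels and per-bin grouping rule are illustrative assumptions, not the published parsing/alignment/clustering pipeline.

        # Toy grouping step for a storyline visualization of eye-tracking data.
        from collections import defaultdict

        # observer -> sequence of fixated screen regions, one entry per time bin
        fixations = {
            "obs1": ["face", "face", "text", "text"],
            "obs2": ["face", "text", "text", "text"],
            "obs3": ["background", "background", "face", "text"],
        }

        n_bins = len(next(iter(fixations.values())))
        for t in range(n_bins):
            groups = defaultdict(list)
            for observer, sequence in fixations.items():
                groups[sequence[t]].append(observer)
            # Each group becomes one bundle of storylines at time bin t; observers that
            # change groups between bins produce the line crossings seen in the plot.
            print(t, dict(groups))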

  4. Maturation of Visual and Auditory Temporal Processing in School-Aged Children

    ERIC Educational Resources Information Center

    Dawes, Piers; Bishop, Dorothy V. M.

    2008-01-01

    Purpose: To examine the development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years. Auditory processes…

  5. a Walk Through Earth's Time

    NASA Astrophysics Data System (ADS)

    Turrin, B. D.; Turrin, M.

    2012-12-01

    After "What is this rock?" the most common questions that is asked of Geologists is "How old is this rock/fossil?" For geologists considering ages back to millions of years is routine. Sorting and cataloguing events into temporal sequences is a natural tendency for all humans. In fact, it is an everyday activity for humans, i.e., keeping track of birthdays, anniversaries, appointments, meetings, AGU abstract deadlines etc… However, the time frames that are most familiar to the non scientist (seconds, minutes, hours, days, years) generally extend to only a few decades or at most centuries. Yet the vast length of time covered by Earth's history, 4.56 billion years, greatly exceeds these timeframes and thus is commonly referred to as "Deep Time". This is a challenging concept for most students to comprehend as it involves temporal and abstract thinking, yet it is key to their successful understanding of numerous geologic principles. We have developed an outdoor learning activity for general Introductory Earth Science courses that incorporates several scientific and geologic concepts such as: linear distance or stratigraphic thickness representing time, learning about major events in Earth's history and locating them in a scaled temporal framework, field mapping, abstract thinking, scaling and dimensional analysis, and the principles of radio isotopic dating. The only supplies needed are readily available in local hardware stores i.e. a 300 ft. surveyor's tape marked in feet, and tenths and hundredths of a foot, and the student's own introductory geology textbook. The exercise employs a variety of pedagogical learning modalities, including traditional lecture-based, the use of Art/Drawing, use of Visualization, Collaborative learning, and Kinesthetic and Experiential learning. Initially the students are exposed to the concept of "Deep Time" in a short conventional introductory lecture; this is followed by a 'field day'. Prior to the field exercise, students work with their textbook selecting events is Earth History that they find interesting. Using the textbook and online resources they then draw figures that represent these events. The drawing exercise reinforces the learning by having students visualize (imprinting an image) of these geologic events. Once the students have produced their drawings, the outdoor field exercise follows. Working collaboratively, the students measure and lay out a scaled linear model representing 4.56 billion years of geologic time. They then organize and place their drawings in the proper sequence on the temporal model that they have created. Once all the drawings are in place they are able to visualize the expanse of time in Earth's history. Through comparing results from a pre-test to those from a post-test we can show the gains in student understanding of Deep Time, a concept that is central to many of our geologic understandings.

  6. Neural field theory of perceptual echo and implications for estimating brain connectivity

    NASA Astrophysics Data System (ADS)

    Robinson, P. A.; Pagès, J. C.; Gabay, N. C.; Babaie, T.; Mukta, K. N.

    2018-04-01

    Neural field theory is used to predict and analyze the phenomenon of perceptual echo in which random input stimuli at one location are correlated with electroencephalographic responses at other locations. It is shown that this echo correlation (EC) yields an estimate of the transfer function from the stimulated point to other locations. Modal analysis then explains the observed spatiotemporal structure of visually driven EC and the dominance of the alpha frequency; two eigenmodes of similar amplitude dominate the response, leading to temporal beating and a line of low correlation that runs from the crown of the head toward the ears. These effects result from mode splitting and symmetry breaking caused by interhemispheric coupling and cortical folding. It is shown how eigenmodes obtained from functional magnetic resonance imaging experiments can be combined with temporal dynamics from EC or other evoked responses to estimate the spatiotemporal transfer function between any two points and hence their effective connectivity.
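
    The core idea that correlating a random input with the measured response yields a transfer-function estimate can be illustrated with textbook system identification (H = S_xy / S_xx); the sketch below uses a synthetic alpha-band-like filter in place of a real EEG response and is not the neural-field derivation in the paper.

        # Estimate a transfer function from a random stimulus and a noisy response.
        import numpy as np
        from scipy import signal

        rng = np.random.default_rng(1)
        fs = 250.0
        x = rng.standard_normal(int(60 * fs))                  # random luminance stimulus
        b, a = signal.butter(2, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
        y = signal.lfilter(b, a, x) + 0.5 * rng.standard_normal(x.size)   # alpha-band-like response

        f, s_xy = signal.csd(x, y, fs=fs, nperseg=1024)        # cross-spectrum of stimulus and response
        _, s_xx = signal.welch(x, fs=fs, nperseg=1024)         # stimulus power spectrum
        h_est = s_xy / s_xx                                    # empirical transfer function
        print(f[np.argmax(np.abs(h_est))], "Hz has the largest gain (expect roughly 10 Hz)")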

  7. Cross-Modal Matching of Audio-Visual German and French Fluent Speech in Infancy

    PubMed Central

    Kubicek, Claudia; Hillairet de Boisferon, Anne; Dupierrix, Eve; Pascalis, Olivier; Lœvenbruck, Hélène; Gervain, Judit; Schwarzer, Gudrun

    2014-01-01

    The present study examined when and how the ability to cross-modally match audio-visual fluent speech develops in 4.5-, 6- and 12-month-old German-learning infants. In Experiment 1, 4.5- and 6-month-old infants’ audio-visual matching ability for native (German) and non-native (French) fluent speech was assessed by presenting auditory and visual speech information sequentially, that is, in the absence of temporal synchrony cues. The results showed that 4.5-month-old infants were capable of matching native as well as non-native audio and visual speech stimuli, whereas 6-month-olds perceived the audio-visual correspondence of native language stimuli only. This suggests that intersensory matching narrows for fluent speech between 4.5 and 6 months of age. In Experiment 2, auditory and visual speech information was presented simultaneously, therefore providing temporal synchrony cues. Here, 6-month-olds were found to match native as well as non-native speech, indicating that temporal synchrony cues facilitate the intersensory perception of non-native fluent speech. Intriguingly, despite the fact that audio and visual stimuli cohered temporally, 12-month-olds matched the non-native language only. Results were discussed with regard to multisensory perceptual narrowing during the first year of life. PMID:24586651

  8. The effects of anterior arcuate and dorsomedial frontal cortex lesions on visually guided eye movements: 2. Paired and multiple targets.

    PubMed

    Schiller, P H; Chou, I

    2000-01-01

    This study examined the effects of anterior arcuate and dorsomedial frontal cortex lesions on the execution of saccadic eye movements made to paired and multiple targets in rhesus monkeys. Identical paired targets were presented with various temporal asynchronies to determine the temporal offset required to yield equal probability choices to either target. In the intact animal, equal probability choices were typically obtained when the targets appeared simultaneously. After unilateral anterior arcuate lesions, a major shift arose in the temporal offset required to obtain equal probability choices for paired targets, which necessitated presenting the target in the hemifield contralateral to the lesion more than 100 ms prior to the target in the ipsilateral hemifield. This deficit was still pronounced 1 year after the lesion. Dorsomedial frontal cortex lesions produced much smaller but significant shifts in target selection that recovered more rapidly. Paired lesions produced deficits similar to those observed with anterior arcuate lesions alone. Major deficits were also observed on a multiple target temporal discrimination task after anterior arcuate but not after dorsomedial frontal cortex lesions. These results suggest that the frontal eye fields, which reside in the anterior bank of the arcuate sulcus, play an important role in temporal processing and in target selection. Dorsomedial frontal cortex, which contains the medial eye fields, plays a much less important role in the execution of these tasks.

  9. Optic nerve compression as a late complication of a hydrogel explant with silicone encircling band.

    PubMed

    Crama, Niels; Kluijtmans, Leo; Klevering, B Jeroen

    2018-06-01

    To present a complication of compressive optic neuropathy caused by a swollen hydrogel explant and a posteriorly displaced silicone encircling band. A 72-year-old female patient presented with progressive visual loss and a tilted optic disc. Her medical history included a retinal detachment in 1993 that was treated with a hydrogel explant under a solid silicone encircling band. Visual acuity had decreased from 6/10 to 6/20, and perimetry showed a scotoma in the superior temporal quadrant. Magnetic resonance imaging (MRI) showed compression of the optic nerve inferonasally by the displaced silicone encircling band in combination with the swollen episcleral hydrogel explant. Surgical removal of the hydrogel explant and silicone encircling band was uneventful and resulted in improvement of visual acuity and of the visual field defect. This is the first report of compressive optic neuropathy caused by swelling of a hydrogel explant resulting in a dislocated silicone encircling band. The loss of visual function resolved upon removal of the explant and encircling band.

  10. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. The aim of this study was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing, while the other part of this region processed both lexical and nonlexical symbols.

  11. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    PubMed

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though they are still present. It is suggested that the reduced receptive-field size in natural scenes and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when they take decisions (or 'test hypotheses') in perceptual systems, they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  13. Role of temporal processing stages by inferior temporal neurons in facial recognition.

    PubMed

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition.

  14. Role of Temporal Processing Stages by Inferior Temporal Neurons in Facial Recognition

    PubMed Central

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Kawano, Kenji

    2011-01-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of face recognition. PMID:21734904

  15. The Audiovisual Temporal Binding Window Narrows in Early Childhood

    ERIC Educational Resources Information Center

    Lewkowicz, David J.; Flom, Ross

    2014-01-01

    Binding is key in multisensory perception. This study investigated the audio-visual (A-V) temporal binding window in 4-, 5-, and 6-year-old children (total N = 120). Children watched a person uttering a syllable whose auditory and visual components were either temporally synchronized or desynchronized by 366, 500, or 666 ms. They were asked…

  16. Improving exposure assessment in environmental epidemiology: Application of spatio-temporal visualization tools

    NASA Astrophysics Data System (ADS)

    Meliker, Jaymie R.; Slotnick, Melissa J.; Avruskin, Gillian A.; Kaufmann, Andrew; Jacquez, Geoffrey M.; Nriagu, Jerome O.

    2005-05-01

    A thorough assessment of human exposure to environmental agents should incorporate mobility patterns and temporal changes in human behaviors and concentrations of contaminants; yet the temporal dimension is often under-emphasized in exposure assessment endeavors, due in part to insufficient tools for visualizing and examining temporal datasets. Spatio-temporal visualization tools are valuable for integrating a temporal component, thus allowing for examination of continuous exposure histories in environmental epidemiologic investigations. An application of these tools to a bladder cancer case-control study in Michigan illustrates continuous exposure life-lines and maps that display smooth, continuous changes over time. Preliminary results suggest increased risk of bladder cancer from combined exposure to arsenic in drinking water (>25 μg/day) and heavy smoking (>30 cigarettes/day) in the 1970s and 1980s, and a possible cancer cluster around automotive, paint, and organic chemical industries in the early 1970s. These tools have broad application for examining spatially- and temporally-specific relationships between exposures to environmental risk factors and disease.

  17. Timing in audiovisual speech perception: A mini review and new psychophysical data.

    PubMed

    Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory

    2016-02-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35 % identification of /apa/ compared to ~5 % in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content.
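
    The logic of the classification procedure (correlating the random frame-by-frame visibility of the mouth region with the trial-by-trial response) can be sketched in one temporal dimension as below; the data are synthetic and the "critical frame" is an assumption for illustration, whereas the real analysis operates on full spatio-temporal masks.

        # Toy temporal classification image from randomly masked McGurk trials.
        import numpy as np

        rng = np.random.default_rng(2)
        n_trials, n_frames = 2000, 30
        visibility = rng.random((n_trials, n_frames))     # 0 = fully masked, 1 = fully visible

        critical = 12                                     # assumed perceptually relevant frame
        # Seeing the visual /aka/ articulation at the critical frame reduces /apa/ reports.
        p_apa = 0.35 - 0.30 * visibility[:, critical]
        responses = rng.random(n_trials) < p_apa          # True = reported /apa/

        # Classification image: mean visibility on /apa/ trials minus mean on other trials.
        c_image = visibility[responses].mean(axis=0) - visibility[~responses].mean(axis=0)
        print("most influential frame:", int(np.argmax(np.abs(c_image))))   # recovers frame 12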

  18. Timing in Audiovisual Speech Perception: A Mini Review and New Psychophysical Data

    PubMed Central

    Venezia, Jonathan H.; Thurman, Steven M.; Matchin, William; George, Sahara E.; Hickok, Gregory

    2015-01-01

    Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually-relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (∼35% identification of /apa/ compared to ∼5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually-relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (∼130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309

  19. Encoding model of temporal processing in human visual cortex.

    PubMed

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings propose a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has vast implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
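
    A stripped-down version of a two-temporal-channel encoding model is sketched below: a sustained neural channel follows the stimulus time course, a transient channel responds at onsets and offsets, each is convolved with a hemodynamic response function, and the two channel weights are then fit by least squares. The HRF shape and channel definitions here are simplified assumptions, not the exact model used in the paper.

        import numpy as np

        def hrf(t):
            # crude gamma-like hemodynamic impulse response peaking near 5 s
            return (t ** 5) * np.exp(-t) / 120.0

        dt = 0.1
        t = np.arange(0, 30, dt)
        stimulus = ((t > 5) & (t < 15)).astype(float)          # one 10-s block of stimulation

        sustained = stimulus                                   # sustained channel follows the stimulus
        transient = np.abs(np.diff(stimulus, prepend=0.0))     # transient channel marks onsets/offsets

        h = hrf(np.arange(0, 20, dt))
        x_sus = np.convolve(sustained, h)[: t.size] * dt       # predicted BOLD regressors
        x_trn = np.convolve(transient, h)[: t.size]

        # Synthetic voxel dominated by the transient channel, plus measurement noise.
        bold = 0.3 * x_sus + 1.2 * x_trn + 0.05 * np.random.default_rng(3).standard_normal(t.size)

        X = np.column_stack([x_sus, x_trn])
        betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
        print("sustained weight %.2f, transient weight %.2f" % tuple(betas))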

  20. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    PubMed Central

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  1. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    PubMed

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  2. Cross-Hemispheric Collaboration and Segregation Associated with Task Difficulty as Revealed by Structural and Functional Connectivity

    PubMed Central

    Cabeza, Roberto

    2015-01-01

    Although it is known that brain regions in one hemisphere may interact very closely with their corresponding contralateral regions (collaboration) or operate relatively independent of them (segregation), the specific brain regions (where) and conditions (how) associated with collaboration or segregation are largely unknown. We investigated these issues using a split field-matching task in which participants matched the meaning of words or the visual features of faces presented to the same (unilateral) or to different (bilateral) visual fields. Matching difficulty was manipulated by varying the semantic similarity of words or the visual similarity of faces. We assessed the white matter using the fractional anisotropy (FA) measure provided by diffusion tensor imaging (DTI) and cross-hemispheric communication in terms of fMRI-based connectivity between homotopic pairs of cortical regions. For both perceptual and semantic matching, bilateral trials became faster than unilateral trials as difficulty increased (bilateral processing advantage, BPA). The study yielded three novel findings. First, whereas FA in anterior corpus callosum (genu) correlated with word-matching BPA, FA in posterior corpus callosum (splenium-occipital) correlated with face-matching BPA. Second, as matching difficulty intensified, cross-hemispheric functional connectivity (CFC) increased in domain-general frontopolar cortex (for both word and face matching) but decreased in domain-specific ventral temporal lobe regions (temporal pole for word matching and fusiform gyrus for face matching). Last, a mediation analysis linking DTI and fMRI data showed that CFC mediated the effect of callosal FA on BPA. These findings clarify the mechanisms by which the hemispheres interact to perform complex cognitive tasks. PMID:26019335

  3. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  4. Long-term modifications of synaptic efficacy in the human inferior and middle temporal cortex

    NASA Technical Reports Server (NTRS)

    Chen, W. R.; Lee, S.; Kato, K.; Spencer, D. D.; Shepherd, G. M.; Williamson, A.

    1996-01-01

    The primate temporal cortex has been demonstrated to play an important role in visual memory and pattern recognition. It is of particular interest to investigate whether activity-dependent modification of synaptic efficacy, a presumptive mechanism for learning and memory, is present in this cortical region. Here we address this issue by examining the induction of synaptic plasticity in surgically resected human inferior and middle temporal cortex. The results show that synaptic strength in the human temporal cortex could undergo bidirectional modifications, depending on the pattern of conditioning stimulation. High frequency stimulation (100 or 40 Hz) in layer IV induced long-term potentiation (LTP) of both intracellular excitatory postsynaptic potentials and evoked field potentials in layers II/III. The LTP induced by 100 Hz tetanus was blocked by 50-100 microM DL-2-amino-5-phosphonovaleric acid, suggesting that N-methyl-D-aspartate receptors were responsible for its induction. Long-term depression (LTD) was elicited by prolonged low frequency stimulation (1 Hz, 15 min). It was reduced, but not completely blocked, by DL-2-amino-5-phosphonovaleric acid, implying that some other mechanisms in addition to N-methyl-D-aspartate receptors were involved in LTD induction. LTD was input-specific, i.e., low frequency stimulation of one pathway produced LTD of synaptic transmission in that pathway only. Finally, the LTP and LTD could reverse each other, suggesting that they can act cooperatively to modify the functional state of the cortical network. These results suggest that LTP and LTD are possible mechanisms for the visual memory and pattern recognition functions performed in the human temporal cortex.

  5. Novel Visualization of Large Health Related Data Sets

    DTIC Science & Technology

    2015-03-01

    Associated publications listed in the report (titles only partially recoverable from this record): "… Health Record Data: A Systematic Review"; McPeek Hinz E, Borland D, Shah H, West V, Hammond WE, "Temporal Visualization of Diabetes Mellitus via Hemoglobin …"; Borland D, McPeek Hinz E, West V, Hammond WE, "Demonstration of Temporal Visualization of Diabetes Mellitus via Hemoglobin A1C Levels"; and "… Hemoglobin A1c Levels and Multivariate Visualization of System-Wide National Health Service Data Using Radial Coordinates" (copies in the report appendix).

  6. Spatial and temporal aspects of chromatic adaptation and their functional significance for colour constancy.

    PubMed

    Werner, Annette

    2014-11-01

    Illumination in natural scenes changes at multiple temporal and spatial scales: slow changes in global illumination occur in the course of a day, and we encounter fast and localised illumination changes when visually exploring the non-uniform light field of three-dimensional scenes; in addition, very long-term chromatic variations may come from the environment, such as seasonal changes. In this context, I consider the temporal and spatial properties of chromatic adaptation and discuss their functional significance for colour constancy in three-dimensional scenes. A process of fast spatial tuning in chromatic adaptation is proposed as a possible sensory mechanism for linking colour constancy to the spatial structure of a scene. The observed middle-wavelength selectivity of this process is particularly suitable for adaptation to the mean chromaticity and the compensation of interreflections in natural scenes. Two types of sensory colour constancy are distinguished, based on the functional differences of their temporal and spatial scales: a slow type, operating at a global scale for the compensation of the ambient illumination; and a fast colour constancy, which is locally restricted and well suited to compensating for region-specific variations in the light field of three-dimensional scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Creative innovation with temporal lobe epilepsy and lobectomy.

    PubMed

    Ghacibeh, Georges A; Heilman, Kenneth M

    2013-01-15

    Some patients with left temporal degeneration develop visual artistic abilities. These new artistic abilities may be due to disinhibition of the visuo-spatially dominant right hemisphere. Many famous artists have had epilepsy, and it is possible that some may have had left temporal seizures (LTS) and that this left temporal dysfunction disinhibited their right hemisphere. Alternatively, unilateral epilepsy may alter intrahemispheric connectivity, and right anterior temporal lobe seizures (RTS) may have increased these artists' right hemisphere-mediated visual artistic creativity. To test the disinhibition versus enhanced connectivity hypotheses, we studied 9 participants with RTS and 9 with left anterior temporal seizures (LTS) who underwent unilateral lobectomy for the treatment of medically refractory epilepsy. Creativity was tested using the Torrance Test of Creative Thinking (TTCT). There were no between-group differences in either the verbal or figural scores of the TTCT, suggesting that unilateral anterior temporal ablation did not enhance visual artistic ability; however, for the RTS participants, figural creativity scores were significantly higher than verbal scores. Whereas these results fail to support the left temporal lobe disinhibition postulate of enhanced figural creativity, the finding that the patients with RTS had better figural than verbal creativity suggests that their recurrent right hemispheric seizures led to changes in their right hemispheric networks that facilitated visual creativity. To obtain converging evidence, studies on RTS participants who have not undergone lobectomy will need to be performed. Published by Elsevier B.V.

  8. Cecocentral scotoma as the initial manifestation of subacute bacterial endocarditis

    PubMed Central

    Strauss, Danielle Savitsky; Baharestani, Samuel; Nemiroff, Julia; Amesur, Kiran; Howard, David

    2011-01-01

    Introduction: We report a case of a 67-year-old male who presented with a cecocentral scotoma caused by a septic embolus from subacute bacterial endocarditis (SBE). Methods: A 67-year-old man presented with sudden, painless decreased vision in the left eye. A dilated fundoscopic exam, Humphrey visual field test, transthoracic echocardiogram, abdominal computed tomography (CT), and blood cultures were all performed. Results: A dilated fundoscopic exam revealed temporal segmental optic disc pallor on the left, and Humphrey visual field testing demonstrated a dense left cecocentral scotoma. When the patient developed fever (103.9°F) and palpitations, transthoracic echocardiogram revealed valvular vegetations, and contrast CT of the abdomen revealed an abscess in the dome of the liver likely due to an infectious thrombus. Blood cultures grew viridans group streptococci in three separate peripheral collections. Conclusion: This case illustrates that a sudden cecocentral scotoma may be the initial manifestation of SBE. PMID:21468335

  9. Spatio-temporal dependencies between hospital beds, physicians and health expenditure using visual variables and data classification in statistical table

    NASA Astrophysics Data System (ADS)

    Medyńska-Gulij, Beata; Cybulski, Paweł

    2016-06-01

    This paper analyses the use of visual variables in statistical tables of hospital-bed data as a tool for revealing spatio-temporal dependencies. It is argued that some of the conclusions drawn from data on public health and public expenditure on health have a spatio-temporal reference. Unlike previous studies, this article combines cartographic pragmatics and spatial visualization with conclusions previously reported in the public health literature. While significant conclusions about health care and economic factors have been highlighted in research papers, this article is the first to apply visual analysis to a statistical table together with maps, an approach referred to as previsualisation.

  10. Memory reorganization following anterior temporal lobe resection: a longitudinal functional MRI study

    PubMed Central

    Bonelli, Silvia B.; Thompson, Pamela J.; Yogarajah, Mahinda; Powell, Robert H. W.; Samson, Rebecca S.; McEvoy, Andrew W.; Symms, Mark R.; Koepp, Matthias J.

    2013-01-01

    Anterior temporal lobe resection controls seizures in 50–60% of patients with intractable temporal lobe epilepsy but may impair memory function, typically verbal memory following left, and visual memory following right anterior temporal lobe resection. Functional reorganization can occur within the ipsilateral and contralateral hemispheres. We investigated the reorganization of memory function in patients with temporal lobe epilepsy before and after left or right anterior temporal lobe resection and the efficiency of postoperative memory networks. We studied 46 patients with unilateral medial temporal lobe epilepsy (25/26 left hippocampal sclerosis, 16/20 right hippocampal sclerosis) before and after anterior temporal lobe resection on a 3 T General Electric magnetic resonance imaging scanner. All subjects had neuropsychological testing and performed a functional magnetic resonance imaging memory encoding paradigm for words, pictures and faces, testing verbal and visual memory in a single scanning session, preoperatively and again 4 months after surgery. Event-related analysis revealed that patients with left temporal lobe epilepsy had greater activation in the left posterior medial temporal lobe when successfully encoding words postoperatively than preoperatively. Greater pre- than postoperative activation in the ipsilateral posterior medial temporal lobe for encoding words correlated with better verbal memory outcome after left anterior temporal lobe resection. In contrast, greater postoperative than preoperative activation in the ipsilateral posterior medial temporal lobe correlated with worse postoperative verbal memory performance. These postoperative effects were not observed for visual memory function after right anterior temporal lobe resection. Our findings provide evidence for effective preoperative reorganization of verbal memory function to the ipsilateral posterior medial temporal lobe due to the underlying disease, suggesting that it is the capacity of the posterior remnant of the ipsilateral hippocampus rather than the functional reserve of the contralateral hippocampus that is important for maintaining verbal memory function after anterior temporal lobe resection. Early postoperative reorganization to ipsilateral posterior or contralateral medial temporal lobe structures does not underpin better performance. Additionally our results suggest that visual memory function in right temporal lobe epilepsy is affected differently by right anterior temporal lobe resection than verbal memory in left temporal lobe epilepsy. PMID:23715092

  11. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    PubMed

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.

  12. Eye Velocity Gain Fields in MSTd During Optokinetic Stimulation

    PubMed Central

    Brostek, Lukas; Büttner, Ulrich; Mustari, Michael J.; Glasauer, Stefan

    2015-01-01

    Lesion studies argue for an involvement of the cortical dorsal medial superior temporal area (MSTd) in the control of optokinetic response (OKR) eye movements to planar visual stimulation. Neural recordings during OKR suggested that MSTd neurons directly encode stimulus velocity. On the other hand, studies using radial visual flow together with voluntary smooth pursuit eye movements showed that visual motion responses were modulated by eye movement-related signals. Here, we investigated neural responses in MSTd during continuous optokinetic stimulation using an information-theoretic approach for characterizing neural tuning with high resolution. We show that the majority of MSTd neurons exhibit gain-field-like tuning functions rather than directly encoding one variable. Neural responses showed a large diversity of tuning to combinations of retinal and extraretinal input. Eye velocity-related activity was observed prior to the actual eye movements, reflecting an efference copy. The observed tuning functions resembled those emerging in a network model trained to perform summation of two population-coded signals. Together, our findings support the hypothesis that MSTd implements the visuomotor transformation from retinal to head-centered stimulus velocity signals for the control of OKR. PMID:24557636
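
    For readers unfamiliar with the term, a gain field means that one variable scales the amplitude of a neuron's tuning to another variable rather than being encoded on its own; the toy function below illustrates this with a Gaussian tuning to retinal velocity multiplied by an eye-velocity-dependent gain. All parameters are made up for illustration.

        import numpy as np

        def gain_field_response(retinal_vel, eye_vel,
                                preferred=10.0, width=8.0, gain_slope=0.03):
            tuning = np.exp(-0.5 * ((retinal_vel - preferred) / width) ** 2)
            gain = 1.0 + gain_slope * eye_vel      # eye velocity scales the response amplitude
            return tuning * np.clip(gain, 0.0, None)

        for eye_vel in (-20.0, 0.0, 20.0):
            r = gain_field_response(10.0, eye_vel)
            print(f"eye velocity {eye_vel:+.0f} deg/s -> response {r:.2f}")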

  13. Attractive faces temporally modulate visual attention

    PubMed Central

    Nakamura, Koyo; Kawabata, Hideaki

    2014-01-01

    Facial attractiveness is an important biological and social signal in social interaction. Recent research has demonstrated that an attractive face captures greater spatial attention than an unattractive face does. Little is known, however, about the temporal characteristics of visual attention for facial attractiveness. In this study, we investigated the temporal modulation of visual attention induced by facial attractiveness by using a rapid serial visual presentation. Fourteen male faces and two female faces were presented successively for 160 ms each, and participants were asked to identify two female faces embedded among a series of multiple male distractor faces. Identification of a second female target (T2) was impaired when a first target (T1) was attractive compared to neutral or unattractive faces, at 320 ms stimulus onset asynchrony (SOA); identification was improved when T1 was attractive compared to unattractive faces at 640 ms SOA. These findings suggest that the spontaneous appraisal of facial attractiveness modulates temporal attention. PMID:24994994

  14. Learning and disrupting invariance in visual recognition with a temporal association rule

    PubMed Central

    Isik, Leyla; Leibo, Joel Z.; Poggio, Tomaso

    2012-01-01

    Learning by temporal association rules such as Foldiak's trace rule is an attractive hypothesis that explains the development of invariance in visual recognition. Consistent with these rules, several recent experiments have shown that invariance can be broken at both the psychophysical and single cell levels. We show (1) that temporal association learning provides appropriate invariance in models of object recognition inspired by the visual cortex, (2) that we can replicate the “invariance disruption” experiments using these models with a temporal association learning rule to develop and maintain invariance, and (3) that despite dramatic single cell effects, a population of cells is very robust to these disruptions. We argue that these models account for the stability of perceptual invariance despite the underlying plasticity of the system, the variability of the visual world and expected noise in the biological mechanisms. PMID:22754523
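
    A minimal version of the kind of temporal association rule discussed above keeps a slowly decaying trace of postsynaptic activity and applies a Hebbian update with that trace, so that temporally adjacent inputs (for example, successive transformed views of one object) strengthen onto the same output units. The parameters and normalization below are illustrative choices, not the specific model in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        n_inputs, n_outputs = 20, 5
        W = rng.random((n_outputs, n_inputs)) * 0.1

        eta = 0.05    # learning rate
        lam = 0.8     # trace persistence (0 = plain Hebbian learning)
        trace = np.zeros(n_outputs)

        def present(x, trace, W):
            y = W @ x                                      # feedforward activation
            trace = lam * trace + (1 - lam) * y            # decaying trace of postsynaptic activity
            W = W + eta * np.outer(trace, x)               # trace-Hebbian weight update
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
            return trace, W

        # A temporal association: several transformed views of one object shown in sequence.
        views = [rng.random(n_inputs) for _ in range(4)]
        for x in views:
            trace, W = present(x, trace, W)
        print(W.shape, trace.round(2))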

  15. Real-Time Earthquake Monitoring with Spatio-Temporal Fields

    NASA Astrophysics Data System (ADS)

    Whittier, J. C.; Nittel, S.; Subasinghe, I.

    2017-10-01

    With live streaming sensors and sensor networks, increasingly large numbers of individual sensors are deployed in physical space. Sensor data streams are a fundamentally novel mechanism to deliver observations to information systems. They enable us to represent spatio-temporal continuous phenomena such as radiation accidents, toxic plumes, or earthquakes almost as instantaneously as they happen in the real world. Sensor data streams discretely sample an earthquake, while the earthquake is continuous over space and time. Programmers attempting to integrate many streams to analyze earthquake activity and scope must write tedious application code to integrate potentially very large sets of asynchronously sampled, concurrent streams. In previous work, we proposed the field stream data model (Liang et al., 2016) for data stream engines. Abstracting the stream of an individual sensor as a temporal field, the field represents the Earth's movement at the sensor position as continuous. This simplifies analysis across many sensors significantly. In this paper, we undertake a feasibility study of using the field stream model and the open source Data Stream Engine (DSE) Apache Spark (Apache Spark, 2017) to implement real-time earthquake event detection with a subset of the 250 GPS sensor data streams of the Southern California Integrated GPS Network (SCIGN). The field-based real-time stream queries compute maximum displacement values over the latest query window of each stream and relate spatially neighboring streams to identify earthquake events and their extent. Further, we correlated the detected events with a USGS earthquake event feed. The query results are visualized in real time.
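
    A dependency-free sketch of the windowed query described above follows: each station keeps a sliding window of displacement samples, the per-window maximum is computed on every new sample, and an event is flagged when a station and at least one spatial neighbor both exceed a threshold. The window size, threshold, and neighbor rule are illustrative; the paper expresses this as field-based stream queries running on Apache Spark.

        from collections import deque

        WINDOW = 10          # samples in the sliding window
        THRESHOLD = 0.05     # displacement (in metres) treated as anomalous

        class StationWindow:
            def __init__(self):
                self.samples = deque(maxlen=WINDOW)

            def push(self, displacement):
                self.samples.append(displacement)
                return max(self.samples)

        stations = {"P001": StationWindow(), "P002": StationWindow(), "P003": StationWindow()}
        neighbors = {"P001": ["P002"], "P002": ["P001", "P003"], "P003": ["P002"]}

        def ingest(readings):
            """readings: dict mapping station id -> latest displacement sample."""
            window_max = {s: stations[s].push(v) for s, v in readings.items()}
            return [s for s, m in window_max.items()
                    if m > THRESHOLD and any(window_max.get(n, 0.0) > THRESHOLD
                                             for n in neighbors[s])]

        print(ingest({"P001": 0.08, "P002": 0.09, "P003": 0.01}))   # -> ['P001', 'P002']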

  16. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
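
    The feature construction described above (using within-trial temporal correlations between electrodes rather than per-electrode power or phase) can be sketched as follows with synthetic data; electrode counts, trial counts, and the injected correlation structure are all illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        n_trials, n_electrodes, n_samples = 80, 12, 300
        labels = rng.integers(0, 2, n_trials)                 # two object categories

        trials = rng.standard_normal((n_trials, n_electrodes, n_samples))
        shared = rng.standard_normal((n_trials, n_samples))
        # Category-1 trials get correlated activity between the first two electrodes.
        trials[labels == 1, 0, :] += shared[labels == 1]
        trials[labels == 1, 1, :] += shared[labels == 1]

        # Feature vector per trial: upper triangle of the electrode-by-electrode correlation matrix.
        iu = np.triu_indices(n_electrodes, k=1)
        features = np.array([np.corrcoef(trial)[iu] for trial in trials])

        scores = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=5)
        print("cross-validated accuracy: %.2f" % scores.mean())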

  17. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    PubMed Central

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  18. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    PubMed

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  19. Coherent Amplification of Ultrafast Molecular Dynamics in an Optical Oscillator

    NASA Astrophysics Data System (ADS)

    Aharonovich, Igal; Pe'er, Avi

    2016-02-01

    Optical oscillators present a powerful optimization mechanism. The inherent competition for the gain resources between possible modes of oscillation entails the prevalence of the most efficient single mode. We harness this "ultrafast" coherent feedback to optimize an optical field in time, and show that, when an optical oscillator based on a molecular gain medium is synchronously pumped by ultrashort pulses, a temporally coherent multimode field can develop that optimally dumps a general, dynamically evolving vibrational wave packet into a single vibrational target state. Measuring the emitted field opens a new window to visualization and control of fast molecular dynamics. The realization of such a coherent oscillator with hot alkali dimers appears within experimental reach.

  20. Quantitative Evaluation of Medial Temporal Lobe Morphology in Children with Febrile Status Epilepticus: Results of the FEBSTAT Study.

    PubMed

    McClelland, A C; Gomes, W A; Shinnar, S; Hesdorffer, D C; Bagiella, E; Lewis, D V; Bello, J A; Chan, S; MacFall, J; Chen, M; Pellock, J M; Nordli, D R; Frank, L M; Moshé, S L; Shinnar, R C; Sun, S

    2016-12-01

    The pathogenesis of febrile status epilepticus is poorly understood, but prior studies have suggested an association with temporal lobe abnormalities, including hippocampal malrotation. We used a quantitative morphometric method to assess the association between temporal lobe morphology and febrile status epilepticus. Brain MR imaging was performed in children presenting with febrile status epilepticus and control subjects as part of the Consequences of Prolonged Febrile Seizures in Childhood study. Medial temporal lobe morphologic parameters were measured manually, including the distance of the hippocampus from the midline, hippocampal height:width ratio, hippocampal angle, collateral sulcus angle, and width of the temporal horn. Temporal lobe morphologic parameters were correlated with the presence of visual hippocampal malrotation; the strongest association was with left temporal horn width (P < .001; adjusted OR, 10.59). Multiple morphologic parameters correlated with febrile status epilepticus, encompassing both the right and left sides. This association was statistically strongest in the right temporal lobe, whereas hippocampal malrotation was almost exclusively left-sided in this cohort. The association between temporal lobe measurements and febrile status epilepticus persisted when the analysis was restricted to cases with visually normal imaging findings without hippocampal malrotation or other visually apparent abnormalities. Several component morphologic features of hippocampal malrotation are independently associated with febrile status epilepticus, even when complete hippocampal malrotation is absent. Unexpectedly, this association predominantly involves the right temporal lobe. These findings suggest that a spectrum of bilateral temporal lobe anomalies are associated with febrile status epilepticus in children. Hippocampal malrotation may represent a visually apparent subset of this spectrum. © 2016 by American Journal of Neuroradiology.

  1. Development of Four Dimensional Human Model that Enables Deformation of Skin, Organs and Blood Vessel System During Body Movement - Visualizing Movements of the Musculoskeletal System.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto

    2016-01-01

    We constructed a four dimensional human model that is able to visualize the structure of a whole human body, including the inner structures, in real time to allow us to analyze human dynamic changes in the temporal, spatial and quantitative domains. To verify whether our model was generating changes according to real human body dynamics, we measured a participant's skin expansion and compared it to that of the model under the same body movement. We also made a contribution to the field of orthopedics, as we were able to devise a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.

  2. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses.

    PubMed

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold; Trebino, Rick

    2017-04-10

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. Here, we propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.

  3. Visualizing spatiotemporal pulse propagation: first-order spatiotemporal couplings in laser pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhodes, Michelle; Guang, Zhe; Pease, Jerrold

    2017-04-06

    Even though a general theory of first-order spatiotemporal couplings exists in the literature, it is often difficult to visualize how these distortions affect laser pulses. In particular, it is difficult to show the spatiotemporal phase of pulses in a meaningful way. We propose a general solution to plotting the electric fields of pulses in three-dimensional space that intuitively shows the effects of spatiotemporal phases. The temporal phase information is color-coded using spectrograms and color response functions, and the beam is propagated to show the spatial phase evolution. Using this plotting technique, we generate two- and three-dimensional images and movies that show the effects of spatiotemporal couplings.
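
    The authors colour-code temporal phase via spectrograms and colour response functions; the much simpler sketch below only illustrates the general idea of rendering amplitude and phase of a spatiotemporally coupled pulse in a single image (the pulse parameters and the direct hue-for-phase mapping are assumptions).

    ```python
    # Sketch: a pulse with pulse-front tilt, amplitude shown as brightness, carrier phase as hue.
    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import hsv_to_rgb

    t = np.linspace(-60, 60, 600)          # time (fs)
    x = np.linspace(-3, 3, 300)            # transverse position (mm)
    T, X = np.meshgrid(t, x)

    tau, w0, tilt, omega = 15.0, 1.0, 10.0, 1.2   # duration, beam size, tilt (fs/mm), carrier (rad/fs)
    field = np.exp(-((T - tilt * X) / tau) ** 2 - (X / w0) ** 2) * np.exp(1j * omega * T)

    amp = np.abs(field) / np.abs(field).max()
    hue = (np.angle(field) + np.pi) / (2 * np.pi)  # map phase in [-pi, pi] to hue in [0, 1]
    rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), amp]))

    plt.imshow(rgb, extent=[t.min(), t.max(), x.min(), x.max()], origin="lower", aspect="auto")
    plt.xlabel("time (fs)")
    plt.ylabel("x (mm)")
    plt.title("amplitude = brightness, phase = hue")
    plt.show()
    ```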

  4. Temporal dynamics of the knowledge-mediated visual disambiguation process in humans: a magnetoencephalography study.

    PubMed

    Urakawa, Tomokazu; Ogata, Katsuya; Kimura, Takahiro; Kume, Yuko; Tobimatsu, Shozo

    2015-01-01

    Disambiguation of a noisy visual scene with prior knowledge is an indispensable task of the visual system. To adequately adapt to a dynamically changing visual environment full of noisy visual scenes, the implementation of knowledge-mediated disambiguation in the brain is imperative and essential for proceeding as fast as possible under the limited capacity of visual image processing. However, the temporal profile of the disambiguation process has not yet been fully elucidated in the brain. The present study attempted to determine how quickly knowledge-mediated disambiguation began to proceed along visual areas after the onset of a two-tone ambiguous image using magnetoencephalography with high temporal resolution. Using the predictive coding framework, we focused on activity reduction for the two-tone ambiguous image as an index of the implementation of disambiguation. Source analysis revealed that a significant activity reduction was observed in the lateral occipital area at approximately 120 ms after the onset of the ambiguous image, but not in preceding activity (about 115 ms) in the cuneus when participants perceptually disambiguated the ambiguous image with prior knowledge. These results suggested that knowledge-mediated disambiguation may be implemented as early as approximately 120 ms following an ambiguous visual scene, at least in the lateral occipital area, and provided an insight into the temporal profile of the disambiguation process of a noisy visual scene with prior knowledge. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  5. Sector retinitis pigmentosa.

    PubMed

    Van Woerkom, Craig; Ferrucci, Steven

    2005-05-01

    Retinitis pigmentosa (RP) is one of the most common hereditary retinal dystrophies and causes of visual impairment affecting all age groups. The reported incidence varies, but is considered to be between 1 in 3,000 and 1 in 7,000. Sector retinitis pigmentosa is an atypical form of RP that is characterized by regionalized areas of bone spicule pigmentation, usually in the inferior quadrants of the retina. A 57-year-old Hispanic man with a history of previously diagnosed retinitis pigmentosa came to the clinic with a longstanding symptom of decreased vision at night. Bone spicule pigmentation was found in the nasal and inferior quadrants in each eye. He demonstrated superior and temporal visual-field loss corresponding to the areas of the affected retina. Clinical measurements of visual-field loss, best-corrected visual acuity, and ophthalmoscopic appearance have remained stable during the five years the patient has been followed. Sector retinitis pigmentosa is an atypical form of RP that is characterized by bilateral pigmentary retinopathy, usually isolated to the inferior quadrants. The remainder of the retina appears clinically normal, although studies have found functional abnormalities in these areas as well. Sector RP is generally considered a stationary to slowly progressive disease, with subnormal electroretinogram findings and visual-field defects corresponding to the involved retinal sectors. Management of RP is very difficult because there are no proven methods of treatment. Studies have shown that 15,000 IU of vitamin A palmitate per day may slow the progression, though this result is controversial. Low vision rehabilitation, long wavelength pass filters, and pedigree counseling remain the mainstay of management.

  6. Is vision function related to physical functional ability in older adults?

    PubMed

    West, Catherine G; Gildengorin, Ginny; Haegerstrom-Portnoy, Gunilla; Schneck, Marilyn E; Lott, Lori; Brabyn, John A

    2002-01-01

    To assess the relationship between a broad range of vision functions and measures of physical performance in older adults. Cross-sectional study. Population-based cohort of community-dwelling older adults, subset of an on-going longitudinal study. Seven hundred eighty-two adults aged 55 and older (65% of living eligible subjects) had subjective health measures and objective physical performance evaluated in 1989/91 and again in 1993/95 and a battery of vision functions tested in 1993/95. Comprehensive battery of vision tests (visual acuity, contrast sensitivity, effects of illumination level, contrast and glare on acuity, visual fields with and without attentional load, color vision, temporal sensitivity, and the impact of dimming light on walking ability) and physical function measures (self-reported mobility limitations and observed measures of walking, rising from a chair and tandem balance). The failure rate for all vision functions and physical performance measures increased exponentially with age. Standard high-contrast visual acuity and standard visual fields showed the lowest failure rates. Nonstandard vision tests showed much higher failure rates. Poor performance on many individual vision functions was significantly associated with particular individual measures of physical performance. Using constructed combination vision variables, significant associations were found between spatial vision, field integrity, binocularity and/or adaptation, and each of the functional outcomes. Vision functions other than standard visual acuity may affect day-to-day functioning of older adults. Additional studies of these other aspects of vision and how they can be treated or rehabilitated are needed to determine whether these aspects play a role in strategies for reducing disability in older adults.

  7. Characteristics of dynamic processing in the visual field of patients with age-related maculopathy

    PubMed Central

    Eisenbarth, Werner; MacKeben, Manfred; Poggel, Dorothe A.

    2007-01-01

    Purpose To investigate the characteristics of dynamic processing in the visual field of patients with age-related maculopathy (ARM) by measuring motion sensitivity, double-pulse resolution (DPR), and critical flicker fusion. Methods Fourteen subjects with ARM (18 eyes), 14 age-matched controls (19 eyes), and 7 young controls (8 eyes) served as subjects. Motion contrast thresholds were determined by a four-alternative forced-choice (4 afc) staircase procedure with a modification by Kernbach for presenting a plaid (size = 3.8°) moving within a stationary spatial and temporal Gaussian envelope in one of four directions. Measurements were performed on the horizontal meridian at 10°, 20°, 30°, 40°, and 60° eccentricity. DPR was defined as the minimal temporal gap detectable by the subject using a 9-fold interleaved adaptive procedure, with stimuli positioned on concentric rings at 5°, 10°, and 20° eccentricity on the principal and oblique meridians. Critical flicker fusion thresholds (CFF) and the Lanthony D-15 color vision test were applied foveally, and the subjects were free to use their fovea or whatever retinal area they needed to use instead, due to their retinal lesions caused by ARM. All measurements were performed under photopic conditions. Results Motion contrast sensitivity in subjects with ARM was pronouncedly reduced (0.23–0.66 log units, p < 0.01), not only in the macula but in a region up to 20° eccentricity. In the two control groups, motion contrast sensitivity systematically declined with retinal eccentricity (0.009–0.032 log units/degree) and with age (0.01 log units/year). Double-pulse thresholds in healthy subjects were approximately constant in the central visual field and increased outside a radius of 10° (1.73 ms/degree). DPR thresholds were elevated in subjects with ARM (by 23–32 ms, p < 0.01) up to 20° eccentricity, and their foveal CFFs were increased by 5.5 Hz or 14% (p < 0.01) as compared with age-matched controls. Conclusions Dynamic processing properties in subjects with ARM are severely impaired in the central visual field up to 20° eccentricity, which is clearly beyond the borders of the macula. PMID:17882447
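
    For readers unfamiliar with adaptive staircases, the toy sketch below shows a generic 3-down/1-up rule for a 4AFC task; it is not the Kernbach-modified procedure used in the study, and the starting contrast, step size and stopping rule are assumptions.

    ```python
    # Sketch of a generic 3-down/1-up adaptive staircase for a 4AFC threshold estimate.
    def run_staircase(respond, start_contrast=0.5, step=0.05, n_reversals=8):
        """respond(contrast) -> True if the observer reports the correct motion direction."""
        contrast, streak, direction, reversals = start_contrast, 0, 0, []
        while len(reversals) < n_reversals:
            if respond(contrast):
                streak += 1
                if streak == 3:                     # three correct in a row -> make task harder
                    streak = 0
                    if direction == +1:
                        reversals.append(contrast)  # direction change: record a reversal
                    direction = -1
                    contrast = max(contrast - step, 0.01)
            else:                                    # one error -> make task easier
                streak = 0
                if direction == -1:
                    reversals.append(contrast)
                direction = +1
                contrast = min(contrast + step, 1.0)
        tail = reversals[-6:]
        return sum(tail) / len(tail)                # threshold: mean of the last reversals

    # Simulated observer with an assumed true threshold of 0.2 (25% guessing rate below it):
    import random
    print(run_staircase(lambda c: random.random() < (1.0 if c > 0.2 else 0.25)))
    ```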

  8. Orientation-Cue Invariant Population Responses to Contrast-Modulated and Phase-Reversed Contour Stimuli in Macaque V1 and V2

    PubMed Central

    An, Xu; Gong, Hongliang; Yin, Jiapeng; Wang, Xiaochun; Pan, Yanxia; Zhang, Xian; Lu, Yiliang; Yang, Yupeng; Toth, Zoltan; Schiessl, Ingo; McLoughlin, Niall; Wang, Wei

    2014-01-01

    Visual scenes can be readily decomposed into a variety of oriented components, the processing of which is vital for object segregation and recognition. In primate V1 and V2, most neurons have small spatio-temporal receptive fields responding selectively to oriented luminance contours (first order), while only a subgroup of neurons signal non-luminance defined contours (second order). So how is the orientation of second-order contours represented at the population level in macaque V1 and V2? Here we compared the population responses in macaque V1 and V2 to two types of second-order contour stimuli generated either by modulation of contrast or phase reversal with those to first-order contour stimuli. Using intrinsic signal optical imaging, we found that the orientation of second-order contour stimuli was represented invariantly in the orientation columns of both macaque V1 and V2. A physiologically constrained spatio-temporal energy model of V1 and V2 neuronal populations could reproduce all the recorded population responses. These findings suggest that, at the population level, the primate early visual system processes the orientation of second-order contours initially through a linear spatio-temporal filter mechanism. Our results of population responses to different second-order contour stimuli support the idea that the orientation maps in primate V1 and V2 can be described as a spatial-temporal energy map. PMID:25188576
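
    The population model referred to here is physiologically constrained; as a point of reference, the sketch below implements only the textbook quadrature-pair spatio-temporal energy unit on which such models are built (filter parameters and the test stimulus are assumptions).

    ```python
    # Sketch: a single quadrature-pair spatio-temporal energy unit applied to an x-t stimulus.
    import numpy as np
    from scipy.signal import fftconvolve

    def energy_response(stimulus, sf=0.1, tf=0.1, sigma_x=5.0, sigma_t=5.0, half=20):
        """stimulus: (n_t, n_x) space-time luminance pattern; sf/tf in cycles per sample."""
        t = np.arange(-half, half + 1)[:, None]
        x = np.arange(-half, half + 1)[None, :]
        envelope = np.exp(-(x ** 2) / (2 * sigma_x ** 2) - (t ** 2) / (2 * sigma_t ** 2))
        carrier = 2 * np.pi * (sf * x + tf * t)        # filter oriented in space-time
        even, odd = envelope * np.cos(carrier), envelope * np.sin(carrier)
        r_even = fftconvolve(stimulus, even, mode="same")
        r_odd = fftconvolve(stimulus, odd, mode="same")
        return r_even ** 2 + r_odd ** 2                # phase-invariant energy output

    # Example: a drifting luminance grating matched to the filter gives a strong response.
    tt, xx = np.meshgrid(np.arange(200), np.arange(200), indexing="ij")
    grating = np.sin(2 * np.pi * (0.1 * xx + 0.1 * tt))
    print(energy_response(grating).mean())
    ```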

  9. Brief Report: Which Came First? Exploring Crossmodal Temporal Order Judgements and Their Relationship with Sensory Reactivity in Autism and Neurotypicals

    ERIC Educational Resources Information Center

    Poole, Daniel; Gowen, Emma; Warren, Paul A.; Poliakoff, Ellen

    2017-01-01

    Previous studies have indicated that visual-auditory temporal acuity is reduced in children with autism spectrum conditions (ASC) in comparison to neurotypicals. In the present study we investigated temporal acuity for all possible bimodal pairings of visual, tactile and auditory information in adults with ASC (n = 18) and a matched control group…

  10. Merging Disparate Data Sources Into a Paleoanthropological Geodatabase for Research, Education, and Conservation in the Greater Hadar Region (Afar, Ethiopia)

    NASA Astrophysics Data System (ADS)

    Campisano, C. J.; Dimaggio, E. N.; Arrowsmith, J. R.; Kimbel, W. H.; Reed, K. E.; Robinson, S. E.; Schoville, B. J.

    2008-12-01

    Understanding the geographic, temporal, and environmental contexts of human evolution requires the ability to compare wide-ranging datasets collected from multiple research disciplines. Paleoanthropological field-research projects are notoriously independent administratively even in regions of high transdisciplinary importance. As a result, valuable opportunities for the integration of new and archival datasets spanning diverse archaeological assemblages, paleontological localities, and stratigraphic sequences are often neglected, which limits the range of research questions that can be addressed. Using geoinformatic tools we integrate spatial, temporal, and semantically disparate paleoanthropological and geological datasets from the Hadar sedimentary basin of the Afar Rift, Ethiopia. Applying newly integrated data to investigations of fossil-rich sediments will provide the geospatial framework critical for addressing fundamental questions concerning hominins and their paleoenvironmental context. We present a preliminary cyberinfrastructure for data management that will allow scientists, students, and interested citizens to interact with, integrate, and visualize data from the Afar region. Examples of our initial integration efforts include generating a regional high-resolution satellite imagery base layer for georeferencing, standardizing and compiling multiple project datasets and digitizing paper maps. We also demonstrate how the robust datasets generated from our work are being incorporated into a new, digital module for Arizona State University's Hadar Paleoanthropology Field School - modernizing field data collection methods, on-the-fly data visualization and query, and subsequent analysis and interpretation. Armed with a fully fused database tethered to high-resolution satellite imagery, we can more accurately reconstruct spatial and temporal paleoenvironmental conditions and efficiently address key scientific questions, such as those regarding the relative importance of internal and external ecological, climatological, and tectonic forcings on evolutionary change in the fossil record. In close association with colleagues working in neighboring project areas, this work advances multidisciplinary and collaborative research, training, and long-range antiquities conservation in the Hadar region.

  11. Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data

    DOE PAGES

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...

    2016-10-02

    Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data aimed at evaluating the effectiveness of our approach.

  12. Learning of spatio-temporal codes in a coupled oscillator system.

    PubMed

    Orosz, Gábor; Ashwin, Peter; Townley, Stuart

    2009-07-01

    In this paper, we consider a learning strategy that allows one to transmit information between two coupled phase oscillator systems (called teaching and learning systems) via frequency adaptation. The dynamics of these systems can be modeled with reference to a number of partially synchronized cluster states and transitions between them. Forcing the teaching system by steady but spatially nonhomogeneous inputs produces cyclic sequences of transitions between the cluster states, that is, information about inputs is encoded via a "winnerless competition" process into spatio-temporal codes. The large variety of codes can be learned by the learning system that adapts its frequencies to those of the teaching system. We visualize the dynamics using "weighted order parameters (WOPs)" that are analogous to "local field potentials" in neural systems. Since spatio-temporal coding is a mechanism that appears in olfactory systems, the developed learning rules may help to extract information from these neural ensembles.
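
    A stripped-down numerical sketch of the frequency-adaptation idea is given below: a "learning" set of phase oscillators, unit-wise driven by a "teaching" set, slowly shifts its natural frequencies toward the teacher's. The coupling constants and adaptation rate are assumptions, and the winnerless-competition cluster dynamics analysed in the paper are not modelled.

    ```python
    # Sketch: frequency adaptation of a learning oscillator system driven by a teaching system.
    import numpy as np

    def teach_and_learn(n=5, steps=100000, dt=0.01, K=1.0, eps=0.02, seed=0):
        rng = np.random.default_rng(seed)
        w_teach = rng.uniform(0.9, 1.1, n)            # teacher frequencies encode the "input"
        w_learn = rng.uniform(0.5, 1.5, n)            # learner starts mismatched
        th_t = rng.uniform(0, 2 * np.pi, n)
        th_l = rng.uniform(0, 2 * np.pi, n)
        for _ in range(steps):
            coup_t = (K / n) * np.sin(th_t[None, :] - th_t[:, None]).sum(axis=1)
            coup_l = (K / n) * np.sin(th_l[None, :] - th_l[:, None]).sum(axis=1)
            drive = np.sin(th_t - th_l)               # teacher drives the matching learner unit
            th_t = th_t + dt * (w_teach + coup_t)
            th_l = th_l + dt * (w_learn + coup_l + drive)
            w_learn = w_learn + dt * eps * drive      # frequency adaptation ("learning")
        return w_teach, w_learn

    w_teach, w_learn = teach_and_learn()
    print(np.round(w_teach, 3))
    print(np.round(w_learn, 3))                        # learner frequencies drift toward the teacher's
    ```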

  13. Optical gating and streaking of free electrons with sub-optical cycle precision

    PubMed Central

    Kozák, M.; McNeur, J.; Leedle, K. J.; Deng, H.; Schönenberger, N.; Ruehl, A.; Hartl, I.; Harris, J. S.; Byer, R. L.; Hommelhoff, P.

    2017-01-01

    The temporal resolution of ultrafast electron diffraction and microscopy experiments is currently limited by the available experimental techniques for the generation and characterization of electron bunches with single femtosecond or attosecond durations. Here, we present proof of principle experiments of an optical gating concept for free electrons via direct time-domain visualization of the sub-optical cycle energy and transverse momentum structure imprinted on the electron beam. We demonstrate a temporal resolution of 1.2±0.3 fs. The scheme is based on the synchronous interaction between electrons and the near-field mode of a dielectric nano-grating excited by a femtosecond laser pulse with an optical period duration of 6.5 fs. The sub-optical cycle resolution demonstrated here is promising for use in laser-driven streak cameras for attosecond temporal characterization of bunched particle beams as well as time-resolved experiments with free-electron beams. PMID:28120930

  14. Frequency modulation of neural oscillations according to visual task demands.

    PubMed

    Wutz, Andreas; Melcher, David; Samaha, Jason

    2018-02-06

    Temporal integration in visual perception is thought to occur within cycles of occipital alpha-band (8-12 Hz) oscillations. Successive stimuli may be integrated when they fall within the same alpha cycle and segregated for different alpha cycles. Consequently, the speed of alpha oscillations correlates with the temporal resolution of perception, such that lower alpha frequencies provide longer time windows for perceptual integration and higher alpha frequencies correspond to faster sampling and segregation. Can the brain's rhythmic activity be dynamically controlled to adjust its processing speed according to different visual task demands? We recorded magnetoencephalography (MEG) while participants switched between task instructions for temporal integration and segregation, holding stimuli and task difficulty constant. We found that the peak frequency of alpha oscillations decreased when visual task demands required temporal integration compared with segregation. Alpha frequency was strategically modulated immediately before and during stimulus processing, suggesting a preparatory top-down source of modulation. Its neural generators were located in occipital and inferotemporal cortex. The frequency modulation was specific to alpha oscillations and did not occur in the delta (1-3 Hz), theta (3-7 Hz), beta (15-30 Hz), or gamma (30-50 Hz) frequency range. These results show that alpha frequency is under top-down control to increase or decrease the temporal resolution of visual perception.

  15. Retinal lesions induce fast intrinsic cortical plasticity in adult mouse visual system.

    PubMed

    Smolders, Katrien; Vreysen, Samme; Laramée, Marie-Eve; Cuyvers, Annemie; Hu, Tjing-Tjing; Van Brussel, Leen; Eysel, Ulf T; Nys, Julie; Arckens, Lutgarde

    2016-09-01

    Neuronal activity plays an important role in the development and structural-functional maintenance of the brain as well as in its life-long plastic response to changes in sensory stimulation. We characterized the impact of unilateral 15° laser lesions in the temporal lower visual field of the retina, on visually driven neuronal activity in the afferent visual pathway of adult mice using in situ hybridization for the activity reporter gene zif268. In the first days post-lesion, we detected a discrete zone of reduced zif268 expression in the contralateral hemisphere, spanning the border between the monocular segment of the primary visual cortex (V1) with extrastriate visual area V2M. We could not detect a clear lesion projection zone (LPZ) in areas lateral to V1 whereas medial to V2M, agranular and granular retrosplenial cortex showed decreased zif268 levels over their full extent. All affected areas displayed a return to normal zif268 levels, and this was faster in higher order visual areas than in V1. The lesion did, however, induce a permanent LPZ in the retinorecipient layers of the superior colliculus. We identified a retinotopy-based intrinsic capacity of adult mouse visual cortex to recover from restricted vision loss, with recovery speed reflecting the areal cortical magnification factor. Our observations predict incomplete visual field representations for areas lateral to V1 vs. lack of retinotopic organization for areas medial to V2M. The validation of this mouse model paves the way for future interrogations of cortical region- and cell-type-specific contributions to functional recovery, up to microcircuit level. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Brain activity related to working memory for temporal order and object information.

    PubMed

    Roberts, Brooke M; Libby, Laura A; Inhoff, Marika C; Ranganath, Charan

    2017-06-08

    Maintaining items in an appropriate sequence is important for many daily activities; however, remarkably little is known about the neural basis of human temporal working memory. Prior work suggests that the prefrontal cortex (PFC) and medial temporal lobe (MTL), including the hippocampus, play a role in representing information about temporal order. The involvement of these areas in successful temporal working memory, however, is less clear. Additionally, it is unknown whether regions in the PFC and MTL support temporal working memory across different timescales, or at coarse or fine levels of temporal detail. To address these questions, participants were scanned while completing 3 working memory task conditions (Group, Position and Item) that were matched in terms of difficulty and the number of items to be actively maintained. Group and Position trials probed temporal working memory processes, requiring the maintenance of hierarchically organized coarse and fine temporal information, respectively. To isolate activation related to temporal working memory, Group and Position trials were contrasted against Item trials, which required detailed working memory maintenance of visual objects. Results revealed that working memory encoding and maintenance of temporal information relative to visual information was associated with increased activation in dorsolateral PFC (DLPFC), and perirhinal cortex (PRC). In contrast, maintenance of visual details relative to temporal information was characterized by greater activation of parahippocampal cortex (PHC), medial and anterior PFC, and retrosplenial cortex. In the hippocampus, a dissociation along the longitudinal axis was observed such that the anterior hippocampus was more active for working memory encoding and maintenance of visual detail information relative to temporal information, whereas the posterior hippocampus displayed the opposite effect. Posterior parietal cortex was the only region to show sensitivity to temporal working memory across timescales, and was particularly involved in the encoding and maintenance of fine temporal information relative to maintenance of temporal information at more coarse timescales. Collectively, these results highlight the involvement of PFC and MTL in temporal working memory processes, and suggest a dissociation in the type of working memory information represented along the longitudinal axis of the hippocampus. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

    This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  18. Time perception of visual motion is tuned by the motor representation of human actions

    PubMed Central

    Gavazzi, Gioele; Bisio, Ambra; Pozzo, Thierry

    2013-01-01

    Several studies have shown that the observation of a rapidly moving stimulus dilates our perception of time. However, this effect appears to be at odds with the fact that our interactions both with environment and with each other are temporally accurate. This work exploits this paradox to investigate whether the temporal accuracy of visual motion uses motor representations of actions. To this aim, the stimuli were a dot moving with kinematics belonging or not to the human motor repertoire and displayed at different velocities. Participants had to replicate its duration with two tasks differing in the underlying motor plan. Results show that independently of the task's motor plan, the temporal accuracy and precision depend on the correspondence between the stimulus' kinematics and the observer's motor competencies. Our data suggest that the temporal mechanism of visual motion exploits a temporal visuomotor representation tuned by the motor knowledge of human actions. PMID:23378903

  19. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915

  20. Age-Related Changes in Temporal Allocation of Visual Attention: Evidence from the Rapid Serial Visual Presentation (RSVP) Paradigm

    ERIC Educational Resources Information Center

    Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie

    2015-01-01

    The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…

  1. Nonretinotopic visual processing in the brain.

    PubMed

    Melcher, David; Morrone, Maria Concetta

    2015-01-01

    A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

  2. Spectral and Temporal Processing in Rat Posterior Auditory Cortex

    PubMed Central

    Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.

    2009-01-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251

  3. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  4. Topographic contribution of early visual cortex to short-term memory consolidation: a transcranial magnetic stimulation study.

    PubMed

    van de Ven, Vincent; Jacobs, Christianne; Sack, Alexander T

    2012-01-04

    The neural correlates for retention of visual information in visual short-term memory are considered separate from those of sensory encoding. However, recent findings suggest that sensory areas may play a role also in short-term memory. We investigated the functional relevance, spatial specificity, and temporal characteristics of human early visual cortex in the consolidation of capacity-limited topographic visual memory using transcranial magnetic stimulation (TMS). Topographically specific TMS pulses were delivered over lateralized occipital cortex at 100, 200, or 400 ms into the retention phase of a modified change detection task with low or high memory loads. For the high but not the low memory load, we found decreased memory performance for memory trials in the visual field contralateral, but not ipsilateral to the side of TMS, when pulses were delivered at 200 ms into the retention interval. A behavioral version of the TMS experiment, in which a distractor stimulus (memory mask) replaced the TMS pulses, further corroborated these findings. Our findings suggest that retinotopic visual cortex contributes to the short-term consolidation of topographic visual memory during early stages of the retention of visual information. Further, TMS-induced interference decreased the strength (amplitude) of the memory representation, which most strongly affected the high memory load trials.

  5. Climate Data Service in the FP7 EarthServer Project

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Grazia Veratelli, Maria

    2013-04-01

    EarthServer is a European Framework Program project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In order to demonstrate the feasibility of the approach, six thematic Lighthouse Applications (Cryospheric Science, Airborne Science, Atmospheric/Climate Science, Geology, Oceanography, and Planetary Science), each with 100+ TB, are implemented. The scope of the Atmospheric/Climate lighthouse application (Climate Data Service) is to implement a system containing global to regional 2D/3D/4D datasets retrieved from satellite observations, numerical modelling and in-situ observations. Data contained in the Climate Data Service include atmospheric profiles of temperature/humidity, aerosol content, AOT, and cloud properties provided by entities such as the European Centre for Medium-Range Weather Forecasts (ECMWF), the Austrian Meteorological Service (Zentralanstalt für Meteorologie und Geodynamik - ZAMG), the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), and the Swedish Meteorological and Hydrological Institute (Sveriges Meteorologiska och Hydrologiska Institut - SMHI). Through an easy-to-use web application, the system allows users to browse the loaded data, visualize their temporal evolution at a specific point as 2D graphs of a single field, compare different fields at the same point (e.g. temperatures from different models and satellite observations), and visualize maps of specific fields superimposed on high-resolution background maps. All data access and display operations are performed by means of OGC standard services, namely WMS, WCS and WCPS. The EarthServer project has just started the second year of its 3-year development plan: at present the system contains subsets of the final database, with the aim of demonstrating the I/O modules and visualization tools. At the end of the project all datasets will be available to users.
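
    For orientation, the kind of point-wise temporal extraction described above can be expressed as a WCPS query sent through the WCS ProcessCoverages request; in the sketch below the endpoint URL, coverage name and axis labels are assumptions, since the real values depend on the deployed service.

    ```python
    # Sketch: fetch the temporal evolution of one field at a single point via WCPS.
    import requests

    ENDPOINT = "https://example.org/rasdaman/ows"     # hypothetical EarthServer-style endpoint

    wcps = (
        'for c in (Temperature2m) '
        'return encode(c[Lat(45.0), Long(10.0), '
        'ansi("2010-01-01T00:00:00Z":"2010-12-31T00:00:00Z")], "csv")'
    )

    resp = requests.get(ENDPOINT, params={
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": wcps,
    })
    resp.raise_for_status()
    values = [float(v) for v in resp.text.strip("{}[] \n").split(",")]  # one value per time step
    print(len(values), values[:5])
    ```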

  6. Functional correlates of musical and visual ability in frontotemporal dementia.

    PubMed

    Miller, B L; Boone, K; Cummings, J L; Read, S L; Mishkin, F

    2000-05-01

    The emergence of new skills in the setting of dementia suggests that loss of function in one brain area can release new functions elsewhere. Our objective was to characterise 12 patients with frontotemporal dementia (FTD) who acquired, or sustained, new musical or visual abilities despite progression of their dementia. Twelve patients with FTD who acquired or maintained musical or artistic ability were compared with 46 patients with FTD in whom new or sustained ability was absent. The group with musical or visual ability performed better on visual tasks, but worse on verbal tasks, than did the other patients with FTD. Nine had asymmetrical left anterior dysfunction. Nine showed the temporal lobe variant of FTD. Loss of function in the left anterior temporal lobe may lead to facilitation of artistic or musical skills. Patients with the left-sided temporal lobe variant of FTD offer an unexpected window into the neurological mediation of visual and musical talents.

  7. Disturbed temporal dynamics of brain synchronization in vision loss.

    PubMed

    Bola, Michał; Gall, Carolin; Sabel, Bernhard A

    2015-06-01

    Damage along the visual pathway prevents bottom-up visual input from reaching further processing stages and consequently leads to loss of vision. But perception is not a simple bottom-up process - rather it emerges from activity of widespread cortical networks which coordinate visual processing in space and time. Here we set out to study how vision loss affects activity of brain visual networks and how networks' activity is related to perception. Specifically, we focused on studying temporal patterns of brain activity. To this end, resting-state eyes-closed EEG was recorded from partially blind patients suffering from chronic retina and/or optic-nerve damage (n = 19) and healthy controls (n = 13). Amplitude (power) of oscillatory activity and phase locking value (PLV) were used as measures of local and distant synchronization, respectively. Synchronization time series were created for the low- (7-9 Hz) and high-alpha band (11-13 Hz) and analyzed with three measures of temporal patterns: (i) length of synchronized-/desynchronized-periods, (ii) Higuchi Fractal Dimension (HFD), and (iii) Detrended Fluctuation Analysis (DFA). We revealed that patients exhibit less complex, more random and noise-like temporal dynamics of high-alpha band activity. More random temporal patterns were associated with worse performance in static (r = -.54, p = .017) and kinetic perimetry (r = .47, p = .041). We conclude that disturbed temporal patterns of neural synchronization in vision loss patients indicate disrupted communication within brain visual networks caused by prolonged deafferentation. We propose that because the state of brain networks is essential for normal perception, impaired brain synchronization in patients with vision loss might aggravate the functional consequences of reduced visual input. Copyright © 2015 Elsevier Ltd. All rights reserved.
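
    As a reference for the synchronization measures named above, the sketch below computes band-limited amplitude and the phase-locking value between two channels via a band-pass filter and the Hilbert transform; the filter order and the synthetic test signals are assumptions.

    ```python
    # Sketch: high-alpha band (11-13 Hz) amplitude and phase-locking value (PLV) for two channels.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_phase_amp(x, fs, band=(11.0, 13.0), order=4):
        b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        analytic = hilbert(filtfilt(b, a, x))
        return np.angle(analytic), np.abs(analytic)        # instantaneous phase and amplitude

    def plv(x, y, fs, band=(11.0, 13.0)):
        phx, _ = band_phase_amp(x, fs, band)
        phy, _ = band_phase_amp(y, fs, band)
        return np.abs(np.mean(np.exp(1j * (phx - phy))))   # 1 = perfect locking, ~0 = none

    # Example with two synthetic 500 Hz signals sharing a common 12 Hz component:
    fs = 500
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(1)
    common = np.sin(2 * np.pi * 12 * t)
    x = common + 0.5 * rng.standard_normal(t.size)
    y = common + 0.5 * rng.standard_normal(t.size)
    print(plv(x, y, fs))
    ```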

  8. Similarity-Based Fusion of MEG and fMRI Reveals Spatio-Temporal Dynamics in Human Cortex During Visual Object Recognition

    PubMed Central

    Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude

    2016-01-01

    Every human cognitive function, such as visual object recognition, is realized in a complex spatio-temporal activity pattern in the brain. Current brain imaging techniques in isolation cannot resolve the brain's spatio-temporal dynamics, because they provide either high spatial or temporal resolution but not both. To overcome this limitation, we developed an integration approach that uses representational similarities to combine measurements of magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) to yield a spatially and temporally integrated characterization of neuronal activation. Applying this approach to 2 independent MEG–fMRI data sets, we observed that neural activity first emerged in the occipital pole at 50–80 ms, before spreading rapidly and progressively in the anterior direction along the ventral and dorsal visual streams. Further region-of-interest analyses established that dorsal and ventral regions showed MEG–fMRI correspondence in representations later than early visual cortex. Together, these results provide a novel and comprehensive, spatio-temporally resolved view of the rapid neural dynamics during the first few hundred milliseconds of object vision. They further demonstrate the feasibility of spatially unbiased representational similarity-based fusion of MEG and fMRI, promising new insights into how the brain computes complex cognitive functions. PMID:27235099
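
    The core of the similarity-based fusion can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from the MEG sensor pattern at each time point and correlate it with the RDM of an fMRI region of interest. Array shapes, the correlation-distance RDM and Spearman correlation are assumptions, not the authors' exact pipeline.

    ```python
    # Sketch: time-resolved MEG-fMRI fusion via representational similarity.
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        """patterns: (n_conditions, n_features) -> vectorised representational dissimilarity matrix."""
        return pdist(patterns, metric="correlation")

    def fusion_timecourse(meg, fmri_roi):
        """meg: (n_times, n_conditions, n_sensors); fmri_roi: (n_conditions, n_voxels)."""
        fmri_rdm = rdm(fmri_roi)
        return np.array([spearmanr(rdm(meg[t]), fmri_rdm)[0] for t in range(meg.shape[0])])

    # Synthetic example: 100 time points, 20 conditions, 50 sensors / 200 voxels.
    rng = np.random.default_rng(2)
    meg = rng.standard_normal((100, 20, 50))
    fmri = rng.standard_normal((20, 200))
    tc = fusion_timecourse(meg, fmri)        # MEG-fMRI representational correspondence over time
    print(tc.shape, np.round(tc[:3], 3))
    ```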

  9. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
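
    The time-resolved decoding approach reviewed here can be outlined as training and cross-validating a classifier independently at every time point of the epoch; the array shapes, classifier and injected synthetic effect below are assumptions for illustration only.

    ```python
    # Sketch: time-resolved M/EEG decoding of object category, one classifier per time point.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def decode_timecourse(X, y, cv=5):
        """X: (n_trials, n_channels, n_times) epochs; y: category label per trial."""
        clf = make_pipeline(StandardScaler(), LinearSVC())
        return np.array([cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
                         for t in range(X.shape[2])])

    # Synthetic example: category information appears only after "stimulus onset" (t >= 30).
    rng = np.random.default_rng(3)
    X = rng.standard_normal((100, 32, 60))
    y = rng.integers(0, 2, 100)
    X[y == 1, :5, 30:] += 0.8                  # inject a decodable signal in 5 channels
    acc = decode_timecourse(X, y)
    print(acc[:30].mean(), acc[30:].mean())    # ~chance before onset, above chance after
    ```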

  10. Neural dynamics of reward probability coding: a Magnetoencephalographic study in humans

    PubMed Central

    Thomas, Julie; Vanni-Mercier, Giovanna; Dreher, Jean-Claude

    2013-01-01

    Prediction of future rewards and discrepancy between actual and expected outcomes (prediction error) are crucial signals for adaptive behavior. In humans, a number of fMRI studies demonstrated that reward probability modulates these two signals in a large brain network. Yet, the spatio-temporal dynamics underlying the neural coding of reward probability remains unknown. Here, using magnetoencephalography, we investigated the neural dynamics of prediction and reward prediction error computations while subjects learned to associate cues of slot machines with monetary rewards with different probabilities. We showed that event-related magnetic fields (ERFs) arising from the visual cortex coded the expected reward value 155 ms after the cue, demonstrating that reward value signals emerge early in the visual stream. Moreover, a prediction error was reflected in ERF peaking 300 ms after the rewarded outcome and showing decreasing amplitude with higher reward probability. This prediction error signal was generated in a network including the anterior and posterior cingulate cortex. These findings pinpoint the spatio-temporal characteristics underlying reward probability coding. Together, our results provide insights into the neural dynamics underlying the ability to learn probabilistic stimuli-reward contingencies. PMID:24302894

  11. Saliency affects feedforward more than feedback processing in early visual cortex.

    PubMed

    Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony

    2013-07-01

    Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly-presented small lines that varied on color saliency based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Neurovision processor for designing intelligent sensors

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1992-03-01

    A programmable multi-task neuro-vision processor, called the Positive-Negative (PN) neural processor, is proposed as a plausible hardware mechanism for constructing robust multi-task vision sensors. The computational operations performed by the PN neural processor are loosely based on the neural activity fields exhibited by certain nervous tissue layers situated in the brain. The neuro-vision processor can be programmed to generate diverse dynamic behavior that may be used for spatio-temporal stabilization (STS), short-term visual memory (STVM), spatio-temporal filtering (STF) and pulse frequency modulation (PFM). A multi-functional vision sensor that performs a variety of information processing operations on time-varying two-dimensional sensory images can be constructed from a parallel and hierarchical structure of numerous individually programmed PN neural processors.

  13. Attentional Episodes in Visual Perception

    ERIC Educational Resources Information Center

    Wyble, Brad; Potter, Mary C.; Bowman, Howard; Nieuwenstein, Mark

    2011-01-01

    Is one's temporal perception of the world truly as seamless as it appears? This article presents a computationally motivated theory suggesting that visual attention samples information from temporal episodes (episodic simultaneous type/serial token model; Wyble, Bowman, & Nieuwenstein, 2009). Breaks between these episodes are punctuated by periods…

  14. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    PubMed

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017. Published by Elsevier Ltd.

  15. Duration estimates within a modality are integrated sub-optimally

    PubMed Central

    Cai, Ming Bo; Eagleman, David M.

    2015-01-01

    Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provided a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task. PMID:26321965
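
    For reference, the "statistically optimal integration" benchmark against which the observed weighting is compared is conventionally reliability-weighted (maximum-likelihood) averaging of the single-stimulus duration estimates; a minimal sketch of that prediction with hypothetical numbers, not the study's data.

        # Hypothetical single-stimulus duration estimates (ms) and variances,
        # e.g. from the low- and the high-temporal-frequency stimulus shown alone.
        mu_low, var_low = 520.0, 60.0 ** 2
        mu_high, var_high = 580.0, 40.0 ** 2

        # Maximum-likelihood (reliability-weighted) combination.
        w_low = (1 / var_low) / (1 / var_low + 1 / var_high)
        w_high = 1.0 - w_low
        mu_combined = w_low * mu_low + w_high * mu_high
        var_combined = 1.0 / (1 / var_low + 1 / var_high)

        print(f"optimal combined estimate: {mu_combined:.1f} ms "
              f"(weights {w_low:.2f}/{w_high:.2f}, sd {var_combined ** 0.5:.1f} ms)")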

  16. Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis.

    PubMed

    Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian

    2016-08-01

    Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the spatial and temporal separation of real-world memory targets and erroneously selected nontarget items examined during location-recognition and object-recall tasks. In Experiment 1 (the location-recognition task), our test display comprised either the picture or name of 1 previously examined memory stimulus (rendered above as the stimulus-display area), together with numbered square boxes at each of the memory-stimulus locations used in that trial. Participants were asked to report the number inside the square box corresponding to the location at which the cued object was originally presented. In Experiment 2 (the object-recall task), the test display comprised a single empty square box presented at 1 memory-stimulus location. Participants were asked to report the name of the object presented at that location. In both experiments, nontarget objects that were spatially and temporally proximal to the memory target were confused more often than nontarget objects that were spatially and temporally distant (i.e., a spatiotemporal proximity effect); this effect generalized across memory tasks, and the object feature (picture or name) that cued the test-display memory target. Our findings are discussed in terms of spatial and temporal confusion "fields" in VSTM, wherein objects occupy diffuse loci in a spatiotemporal coordinate system, wherein neighboring locations are more susceptible to confusion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  17. From neurons to circuits: linear estimation of local field potentials.

    PubMed

    Rasch, Malte; Logothetis, Nikos K; Kreiman, Gabriel

    2009-11-04

    Extracellular physiological recordings are typically separated into two frequency bands: local field potentials (LFPs) (a circuit property) and spiking multiunit activity (MUA). Recently, there has been increased interest in LFPs because of their correlation with functional magnetic resonance imaging blood oxygenation level-dependent measurements and the possibility of studying local processing and neuronal synchrony. To further understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same electrode or nearby electrodes. We used "signal estimation theory" to show that a linear filter operation on the activity of one or a few neurons can explain a significant fraction of the LFP time course in the macaque monkey primary visual cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The filter was similar across different neocortical regions and behavioral conditions, including spontaneous activity and visual stimulation. The estimations had a spatial resolution of approximately 1 mm and a temporal resolution of approximately 200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than the negative time lags. Additionally, we showed that spikes occurring within approximately 10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In summary, our results suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons.
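
    A minimal sketch of the core idea, estimating an LFP time course by applying a least-squares linear filter to a binned spike train; the synthetic data, bin size and lag range are assumptions for illustration, not the authors' recordings or exact fitting procedure.

        import numpy as np

        def fit_spike_to_lfp_filter(spikes, lfp, lags):
            # Least-squares linear filter mapping a binned spike train to the LFP;
            # `lags` are filter lags in bins (positive = spike precedes the LFP sample).
            X = np.column_stack([np.roll(spikes, lag) for lag in lags])
            w, *_ = np.linalg.lstsq(X, lfp, rcond=None)
            return w

        def estimate_lfp(spikes, w, lags):
            X = np.column_stack([np.roll(spikes, lag) for lag in lags])
            return X @ w

        # Synthetic demo: an "LFP" built as a smoothed, delayed copy of the spikes plus noise.
        rng = np.random.default_rng(0)
        spikes = (rng.random(5000) < 0.05).astype(float)
        true_kernel = np.exp(-np.arange(40) / 8.0)
        lfp = np.convolve(spikes, true_kernel, mode="same") + 0.1 * rng.standard_normal(5000)

        lags = np.arange(-20, 21)                   # +/- 20 bins around each spike
        w = fit_spike_to_lfp_filter(spikes, lfp, lags)
        lfp_hat = estimate_lfp(spikes, w, lags)
        r = np.corrcoef(lfp, lfp_hat)[0, 1]
        print(f"fraction of LFP variance explained: {r ** 2:.2f}")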

  18. Diagnostic Power of Macular Retinal Thickness Analysis and Structure-Function Relationship in Glaucoma Diagnosis Using SPECTRALIS OCT.

    PubMed

    Rolle, Teresa; Manerba, Linda; Lanzafame, Pietro; Grignolo, Federico M

    2016-05-01

    To evaluate the diagnostic power of the Posterior Pole Asymmetry Analysis (PPAA) from the SPECTRALIS OCT in glaucoma diagnosis and to define the correlation between the visual field sensitivity (VFS) and macular retinal thickness (MRT). 90 consecutive open-angle glaucoma patients and 23 healthy subjects were enrolled. All subjects underwent Visual Field test (Humphrey Field Analyzer, central 24-2 SITA-Standard) and SD-OCT volume scans (SPECTRALIS, Posterior Pole Asymmetry Analysis). The areas under the Receiver Operating Characteristic curve (AROC) were calculated to assess discriminating power for glaucoma, at first considering total MRT values and hemisphere MRT values and then quadrant MRT values from 16 square cells in an 8 x 8 posterior pole retinal thickness map that were averaged for a mean retinal thickness value. Structure-function correlation was performed for total values, hemisphere values and for each quadrant compared to the matching central test points of the VF. The AROCs ranged from 0.70 to 0.82 (p < 0.0001), with no significant differences between each other. The highest AROC observed was in the inferior nasal quadrant. The VFS showed a strong correlation only with the corresponding MRT values for quadrant analysis: Superior Temporal (r = 0.33, p = 0.0013), Superior Nasal (r = 0.43, p < 0.0001), Inferior Temporal (r = 0.57, p < 0.0001) and Inferior Nasal (r = 0.55, p < 0.0001). The quadrant analysis showed statistically significant structure-function correlations and may provide additional data for the diagnostic performance of SPECTRALIS OCT.
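
    A minimal sketch, on hypothetical data, of the two quantities reported above: an area under the ROC curve for a thickness measure (computed via the rank-sum identity) and a Pearson structure-function correlation between MRT and VFS.

        import numpy as np

        def auroc(glaucoma, healthy):
            # Area under the ROC curve for a "thinner = more likely glaucoma" marker,
            # using the Mann-Whitney rank-sum identity.
            wins = sum((g < h) + 0.5 * (g == h) for g in glaucoma for h in healthy)
            return wins / (len(glaucoma) * len(healthy))

        rng = np.random.default_rng(1)
        mrt_glaucoma = rng.normal(270, 20, 90)      # hypothetical macular thicknesses (um)
        mrt_healthy = rng.normal(300, 15, 23)
        print(f"AROC: {auroc(mrt_glaucoma, mrt_healthy):.2f}")

        # Structure-function correlation: quadrant thickness vs matching VF sensitivity.
        vfs = 0.05 * (mrt_glaucoma - 270) + rng.normal(0, 1.5, 90)   # hypothetical dB values
        r = np.corrcoef(mrt_glaucoma, vfs)[0, 1]
        print(f"Pearson r (MRT vs VFS): {r:.2f}")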

  19. From neurons to circuits: linear estimation of local field potentials

    PubMed Central

    Rasch, Malte; Logothetis, Nikos K.; Kreiman, Gabriel

    2010-01-01

    Extracellular physiological recordings are typically separated into two frequency bands: local field potentials (LFPs, a circuit property) and spiking multi-unit activity (MUA). There has been increased interest in LFPs due to their correlation with fMRI measurements and the possibility of studying local processing and neuronal synchrony. To further understand the biophysical origin of LFPs, we asked whether it is possible to estimate their time course based on the spiking activity from the same or nearby electrodes. We used Signal Estimation Theory to show that a linear filter operation on the activity of one/few neurons can explain a significant fraction of the LFP time course in the macaque primary visual cortex. The linear filter used to estimate the LFPs had a stereotypical shape characterized by a sharp downstroke at negative time lags and a slower positive upstroke for positive time lags. The filter was similar across neocortical regions and behavioral conditions including spontaneous activity and visual stimulation. The estimations had a spatial resolution of ~1 mm and a temporal resolution of ~200 ms. By considering a causal filter, we observed a temporal asymmetry such that the positive time lags in the filter contributed more to the LFP estimation than negative time lags. Additionally, we showed that spikes occurring within ~10 ms of spikes from nearby neurons yielded better estimation accuracies than nonsynchronous spikes. In sum, our results suggest that at least some circuit-level local properties of the field potentials can be predicted from the activity of one or a few neurons. PMID:19889990

  20. Mate choice in the eye and ear of the beholder? Female multimodal sensory configuration influences her preferences.

    PubMed

    Ronald, Kelly L; Fernández-Juricic, Esteban; Lucas, Jeffrey R

    2018-05-16

    A common assumption in sexual selection studies is that receivers decode signal information similarly. However, receivers may vary in how they rank signallers if signal perception varies with an individual's sensory configuration. Furthermore, receivers may vary in their weighting of different elements of multimodal signals based on their sensory configuration. This could lead to complex levels of selection on signalling traits. We tested whether multimodal sensory configuration could affect preferences for multimodal signals. We used brown-headed cowbird (Molothrus ater) females to examine how auditory sensitivity and auditory filters, which influence auditory spectral and temporal resolution, affect song preferences, and how visual spatial resolution and visual temporal resolution, which influence resolution of a moving visual signal, affect visual display preferences. Our results show that multimodal sensory configuration significantly affects preferences for male displays: females with better auditory temporal resolution preferred songs that were shorter, with lower Wiener entropy, and higher frequency; and females with better visual temporal resolution preferred males with less intense visual displays. Our findings provide new insights into mate-choice decisions and receiver signal processing. Furthermore, our results challenge a long-standing assumption in animal communication which can affect how we address honest signalling, assortative mating and sensory drive. © 2018 The Author(s).

  1. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism, or multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ). Groups were pre-tested on a range of TOJ tasks within and between their group modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.

  2. A functional magnetic resonance imaging study mapping the episodic memory encoding network in temporal lobe epilepsy

    PubMed Central

    Sidhu, Meneka K.; Stretton, Jason; Winston, Gavin P.; Bonelli, Silvia; Centeno, Maria; Vollmar, Christian; Symms, Mark; Thompson, Pamela J.; Koepp, Matthias J.

    2013-01-01

    Functional magnetic resonance imaging has demonstrated reorganization of memory encoding networks within the temporal lobe in temporal lobe epilepsy, but little is known of the extra-temporal networks in these patients. We investigated the temporal and extra-temporal reorganization of memory encoding networks in refractory temporal lobe epilepsy and the neural correlates of successful subsequent memory formation. We studied 44 patients with unilateral temporal lobe epilepsy and hippocampal sclerosis (24 left) and 26 healthy control subjects. All participants performed a functional magnetic resonance imaging memory encoding paradigm of faces and words with subsequent out-of-scanner recognition assessments. A blocked analysis was used to investigate activations during encoding and neural correlates of subsequent memory were investigated using an event-related analysis. Event-related activations were then correlated with out-of-scanner verbal and visual memory scores. During word encoding, control subjects activated the left prefrontal cortex and left hippocampus whereas patients with left hippocampal sclerosis showed significant additional right temporal and extra-temporal activations. Control subjects displayed subsequent verbal memory effects within left parahippocampal gyrus, left orbitofrontal cortex and fusiform gyrus whereas patients with left hippocampal sclerosis activated only right posterior hippocampus, parahippocampus and fusiform gyrus. Correlational analysis showed that patients with left hippocampal sclerosis with better verbal memory additionally activated left orbitofrontal cortex, anterior cingulate cortex and left posterior hippocampus. During face encoding, control subjects showed right lateralized prefrontal cortex and bilateral hippocampal activations. Patients with right hippocampal sclerosis showed increased temporal activations within the superior temporal gyri bilaterally and no increased extra-temporal areas of activation compared with control subjects. Control subjects showed subsequent visual memory effects within right amygdala, hippocampus, fusiform gyrus and orbitofrontal cortex. Patients with right hippocampal sclerosis showed subsequent visual memory effects within right posterior hippocampus, parahippocampal and fusiform gyri, and predominantly left hemisphere extra-temporal activations within the insula and orbitofrontal cortex. Correlational analysis showed that patients with right hippocampal sclerosis with better visual memory activated the amygdala bilaterally, right anterior parahippocampal gyrus and left insula. Right sided extra-temporal areas of reorganization observed in patients with left hippocampal sclerosis during word encoding and bilateral lateral temporal reorganization in patients with right hippocampal sclerosis during face encoding were not associated with subsequent memory formation. Reorganization within the medial temporal lobe, however, is an efficient process. The orbitofrontal cortex is critical to subsequent memory formation in control subjects and patients. Activations within anterior cingulum and insula correlated with better verbal and visual subsequent memory in patients with left and right hippocampal sclerosis, respectively, representing effective extra-temporal recruitment. PMID:23674488

  3. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  4. Earlier Visual N1 Latencies in Expert Video-Game Players: A Temporal Basis of Enhanced Visuospatial Performance?

    PubMed Central

    Latham, Andrew J.; Patston, Lucy L. M.; Westermann, Christine; Kirk, Ian J.; Tippett, Lynette J.

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction. PMID:24058667

  5. Earlier visual N1 latencies in expert video-game players: a temporal basis of enhanced visuospatial performance?

    PubMed

    Latham, Andrew J; Patston, Lucy L M; Westermann, Christine; Kirk, Ian J; Tippett, Lynette J

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction.

  6. Spatial organization and time dependence of Jupiter's tropospheric temperatures, 1980-1993

    NASA Technical Reports Server (NTRS)

    Orton, Glenn S.; Friedson, A. James; Yanamandra-Fisher, Padmavati A.; Caldwell, John; Hammel, Heidi B.; Baines, Kevin H.; Bergstralh, Jay T.; Martin, Terry Z.; West, Robert A.; Veeder, Glenn J., Jr.

    1994-01-01

    The spatial organization and time dependence of Jupiter's temperature near 250-millibar pressure were measured through a jovian year by imaging thermal emission at 18 micrometers. The temperature field is influenced by seasonal radiative forcing, and its banded organization is closely correlated with the visible cloud field. Evidence was found for a quasi-periodic oscillation of temperatures in the Equatorial Zone, a correlation between tropospheric and stratospheric waves in the North Equatorial Belt, and slowly moving thermal features in the North and South Equatorial Belts. There appears to be no common relation between temporal changes of temperature and changes in the visual albedo of the various axisymmetric bands.

  7. Quantifying temporal glucose variability in diabetes via continuous glucose monitoring: mathematical methods and clinical application.

    PubMed

    Kovatchev, Boris P; Clarke, William L; Breton, Marc; Brayman, Kenneth; McCall, Anthony

    2005-12-01

    Continuous glucose monitors (CGMs) collect detailed blood glucose (BG) time series, which carry significant information about the dynamics of BG fluctuations. In contrast, the methods for analysis of CGM data remain those developed for infrequent BG self-monitoring. As a result, important information about the temporal structure of the data is lost during the translation of raw sensor readings into clinically interpretable statistics and images. The following mathematical methods are introduced into the field of CGM data interpretation: (1) analysis of BG rate of change; (2) risk analysis using previously reported Low/High BG Indices and Poincare (lag) plot of risk associated with temporal BG variability; and (3) spatial aggregation of the process of BG fluctuations and its Markov chain visualization. The clinical application of these methods is illustrated by analysis of data of a patient with Type 1 diabetes mellitus who underwent islet transplantation and with data from clinical trials. Normative data [12,025 reference (YSI device, Yellow Springs Instruments, Yellow Springs, OH) BG determinations] in patients with Type 1 diabetes mellitus who underwent insulin and glucose challenges suggest that the 90%, 95%, and 99% confidence intervals of BG rate of change that could be maximally sustained over 15-30 min are [-2,2], [-3,3], and [-4,4] mg/dL/min, respectively. BG dynamics and risk parameters clearly differentiated the stages of transplantation and the effects of medication. Aspects of treatment were clearly visualized by graphs of BG rate of change and Low/High BG Indices, by a Poincare plot of risk for rapid BG fluctuations, and by a plot of the aggregated Markov process. Advanced analysis and visualization of CGM data allow for evaluation of dynamical characteristics of diabetes and reveal clinical information that is inaccessible via standard statistics, which do not take into account the temporal structure of the data. The use of such methods improves the assessment of patients' glycemic control.
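
    A minimal sketch of two of the listed methods, BG rate of change and the Low/High BG Indices, applied to a short hypothetical CGM trace; the risk-transform constants follow the commonly cited symmetrization formula for BG in mg/dL and may differ from the authors' exact implementation.

        import numpy as np

        def bg_rate_of_change(bg, minutes_per_sample=5):
            # First difference of the CGM trace, in mg/dL per minute.
            return np.diff(bg) / minutes_per_sample

        def low_high_bg_indices(bg_mgdl):
            # Symmetrize the BG scale, then average the squared deviations falling
            # on the low (hypoglycaemic) and high (hyperglycaemic) side respectively.
            f = 1.509 * (np.log(bg_mgdl) ** 1.084 - 5.381)
            risk = 10.0 * f ** 2
            lbgi = np.mean(np.where(f < 0, risk, 0.0))
            hbgi = np.mean(np.where(f > 0, risk, 0.0))
            return lbgi, hbgi

        # Hypothetical 2-hour trace sampled every 5 minutes.
        bg = np.array([110, 118, 130, 150, 175, 190, 180, 160, 140, 120, 100, 85,
                       70, 65, 72, 90, 105, 115, 120, 118, 112, 108, 104, 100], float)
        print(f"max rate of change: {bg_rate_of_change(bg).max():.1f} mg/dL/min")
        lbgi, hbgi = low_high_bg_indices(bg)
        print(f"LBGI = {lbgi:.1f}, HBGI = {hbgi:.1f}")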

  8. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  9. Temporal Visualization for Legal Case Histories.

    ERIC Educational Resources Information Center

    Harris, Chanda; Allen, Robert B.; Plaisant, Catherine; Shneiderman, Ben

    1999-01-01

    Discusses visualization of legal information using a tool for temporal information called "LifeLines." Explores ways "LifeLines" could aid in viewing the links between original case and direct and indirect case histories. Uses the case of Apple Computer, Inc. versus Microsoft Corporation and Hewlett Packard Company to…

  10. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    PubMed Central

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on the previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and Event-Related Potential (Experiment 2 and 3) measures showed congruency effect in only the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  11. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.
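
    For intuition, a minimal worked conversion between the two reference frames mentioned above: an image-based spatial frequency (cycles/image) is fixed across viewing distance, whereas the corresponding retina-based frequency (cycles/degree) is not; the image size and distances are hypothetical.

        import math

        def cycles_per_degree(cycles_per_image, image_width_m, viewing_distance_m):
            # Retina-based spatial frequency depends on the angular size of the image,
            # and therefore on viewing distance.
            angular_width_deg = 2 * math.degrees(math.atan(image_width_m / (2 * viewing_distance_m)))
            return cycles_per_image / angular_width_deg

        # The same 8-cycles/image face viewed from two distances.
        for d in (0.5, 2.0):
            cpd = cycles_per_degree(8.0, 0.2, d)
            print(f"viewing distance {d} m -> {cpd:.2f} cycles/degree (still 8 cycles/image)")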

  12. Navigational strategies underlying phototaxis in larval zebrafish.

    PubMed

    Chen, Xiuye; Engert, Florian

    2014-01-01

    Understanding how the brain transforms sensory input into complex behavior is a fundamental question in systems neuroscience. Using larval zebrafish, we study the temporal component of phototaxis, which is defined as orientation decisions based on comparisons of light intensity at successive moments in time. We developed a novel "Virtual Circle" assay where whole-field illumination is abruptly turned off when the fish swims out of a virtually defined circular border, and turned on again when it returns into the circle. The animal receives no direct spatial cues and experiences only whole-field temporal light changes. Remarkably, the fish spends most of its time within the invisible virtual border. Behavioral analyses of swim bouts in relation to light transitions were used to develop four discrete temporal algorithms that transform the binary visual input (uniform light/uniform darkness) into the observed spatial behavior. In these algorithms, the turning angle is dependent on the behavioral history immediately preceding individual turning events. Computer simulations show that the algorithms recapture most of the swim statistics of real fish. We discovered that turning properties in larval zebrafish are distinctly modulated by temporal step functions in light intensity in combination with the specific motor history preceding these turns. Several aspects of the behavior suggest memory usage of up to 10 swim bouts (~10 sec). Thus, we show that a complex behavior like spatial navigation can emerge from a small number of relatively simple behavioral algorithms.
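
    A toy simulation of the "Virtual Circle" assay, not the authors' fitted algorithms: whole-field light is gated by the fish's position relative to an invisible circle, and turn magnitude grows with the number of bouts spent in darkness; all parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        radius = 10.0                       # virtual circle (no visible border)
        pos, heading = np.zeros(2), 0.0
        light_on, dark_bouts = True, 0
        positions = []

        for bout in range(2000):            # one swim bout per iteration
            if light_on:
                turn = rng.normal(0.0, 0.3)                         # small, unbiased turns
            else:
                # History-dependent rule: turns get larger the longer the fish has been
                # in darkness (a crude stand-in for the paper's temporal algorithms).
                turn = rng.choice([-1, 1]) * rng.normal(0.6 + 0.2 * min(dark_bouts, 10), 0.3)
            heading += turn
            pos = pos + np.array([np.cos(heading), np.sin(heading)])
            light_on = np.linalg.norm(pos) < radius                 # whole-field light gated by position
            dark_bouts = 0 if light_on else dark_bouts + 1
            positions.append(pos.copy())

        inside = np.mean([np.linalg.norm(p) < radius for p in positions])
        print(f"fraction of bouts spent inside the virtual circle: {inside:.2f}")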

  13. Mapping the spatial patterns of field traffic and traffic intensity to predict soil compaction risks at the field scale

    NASA Astrophysics Data System (ADS)

    Duttmann, Rainer; Kuhwald, Michael; Nolde, Michael

    2015-04-01

    Soil compaction is one of the main present-day threats to cropland soils. In contrast to easily visible phenomena of soil degradation, however, soil compaction is obscured by other signals such as reduced crop yield, delayed crop growth, and the ponding of water, which makes it difficult to recognize and locate areas impacted by soil compaction directly. Although it is known that trafficking intensity is a key factor for soil compaction, until today only modest work has been concerned with the mapping of the spatially distributed patterns of field traffic and with the visual representation of the loads and pressures applied by farm traffic within single fields. A promising method for spatial detection and mapping of soil compaction risks in individual fields is to process dGPS data collected from vehicle-mounted GPS receivers and to compare the soil stress induced by farm machinery to the load bearing capacity derived from given soil map data. The application of position-based machinery data enables the mapping of vehicle movements over time as well as the assessment of trafficking intensity. It also facilitates the calculation of the trafficked area and the modeling of the loads and pressures applied to soil by individual vehicles. This paper focuses on the modeling and mapping of the spatial patterns of traffic intensity in silage maize fields during harvest, considering the spatio-temporal changes in wheel load and ground contact pressure along the loading sections. In addition to scenarios calculated for varying mechanical soil strengths, an example of visualizing the three-dimensional stress propagation inside the soil will be given, using the Visualization Toolkit (VTK) to construct 2D or 3D maps that support decision making for sustainable field traffic management.
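
    A strongly simplified sketch of the risk-mapping idea, flagging track points where a crude estimate of ground contact pressure exceeds the bearing capacity taken from a soil map; the pressure model and every value below are hypothetical stand-ins for the authors' dGPS-based modelling.

        def contact_pressure_kpa(wheel_load_kg, contact_area_m2):
            # Mean ground contact pressure (kPa); a strong simplification of the
            # stress-propagation models actually used for compaction assessment.
            return wheel_load_kg * 9.81 / contact_area_m2 / 1000.0

        # Hypothetical dGPS track of a harvester: wheel load grows as the bunker fills,
        # bearing capacity comes from the underlying soil map unit.
        track = [
            # (x, y, wheel_load_kg, contact_area_m2, bearing_capacity_kpa)
            (0, 0, 3500, 0.45, 90),
            (10, 2, 4200, 0.45, 90),
            (20, 4, 5000, 0.45, 60),   # wetter soil unit, lower bearing capacity
            (30, 6, 5800, 0.45, 60),
        ]

        for x, y, load, area, capacity in track:
            p = contact_pressure_kpa(load, area)
            flag = "COMPACTION RISK" if p > capacity else "ok"
            print(f"({x:>2},{y:>2})  pressure {p:6.1f} kPa vs capacity {capacity} kPa -> {flag}")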

  14. The Pivotal Role of the Right Parietal Lobe in Temporal Attention.

    PubMed

    Agosta, Sara; Magnago, Denise; Tyler, Sarah; Grossman, Emily; Galante, Emanuela; Ferraro, Francesco; Mazzini, Nunzia; Miceli, Gabriele; Battelli, Lorella

    2017-05-01

    The visual system is extremely efficient at detecting events across time even at very fast presentation rates; however, discriminating the identity of those events is much slower and requires attention over time, a mechanism with a much coarser resolution [Cavanagh, P., Battelli, L., & Holcombe, A. O. Dynamic attention. In A. C. Nobre & S. Kastner (Eds.), The Oxford handbook of attention (pp. 652-675). Oxford: Oxford University Press, 2013]. Patients affected by right parietal lesion, including the TPJ, are severely impaired in discriminating events across time in both visual fields [Battelli, L., Cavanagh, P., & Thornton, I. M. Perception of biological motion in parietal patients. Neuropsychologia, 41, 1808-1816, 2003]. One way to test this ability is to use a simultaneity judgment task, whereby participants are asked to indicate whether two events occurred simultaneously or not. We psychophysically varied the frequency rate of four flickering disks, and on most of the trials, one disk (either in the left or right visual field) was flickering out-of-phase relative to the others. We asked participants to report whether two left-or-right-presented disks were simultaneous or not. We tested a total of 23 right and left parietal lesion patients in Experiment 1, and only right parietal patients showed impairment in both visual fields while their low-level visual functions were normal. Importantly, to causally link the right TPJ to the relative timing processing, we ran a TMS experiment on healthy participants. Participants underwent three stimulation sessions and performed the same simultaneity judgment task before and after 20 min of low-frequency inhibitory TMS over right TPJ, left TPJ, or early visual area as a control. rTMS over the right TPJ caused a bilateral impairment in the simultaneity judgment task, whereas rTMS over left TPJ or over early visual area did not affect performance. Altogether, our results directly link the right TPJ to the processing of relative time.

  15. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration.

    PubMed

    Ikumi, Nara; Soto-Faraco, Salvador

    2016-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands.

  16. Grouping and Segregation of Sensory Events by Actions in Temporal Audio-Visual Recalibration

    PubMed Central

    Ikumi, Nara; Soto-Faraco, Salvador

    2017-01-01

    Perception in multi-sensory environments involves both grouping and segregation of events across sensory modalities. Temporal coincidence between events is considered a strong cue to resolve multisensory perception. However, differences in physical transmission and neural processing times amongst modalities complicate this picture. This is illustrated by cross-modal recalibration, whereby adaptation to audio-visual asynchrony produces shifts in perceived simultaneity. Here, we examined whether voluntary actions might serve as a temporal anchor to cross-modal recalibration in time. Participants were tested on an audio-visual simultaneity judgment task after an adaptation phase where they had to synchronize voluntary actions with audio-visual pairs presented at a fixed asynchrony (vision leading or vision lagging). Our analysis focused on the magnitude of cross-modal recalibration to the adapted audio-visual asynchrony as a function of the nature of the actions during adaptation, putatively fostering cross-modal grouping or segregation. We found larger temporal adjustments when actions promoted grouping than segregation of sensory events. However, a control experiment suggested that additional factors, such as attention to planning/execution of actions, could have an impact on recalibration effects. Contrary to the view that cross-modal temporal organization is mainly driven by external factors related to the stimulus or environment, our findings add supporting evidence for the idea that perceptual adjustments strongly depend on the observer's inner states induced by motor and cognitive demands. PMID:28154529

  17. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  18. Visual search of cyclic spatio-temporal events

    NASA Astrophysics Data System (ADS)

    Gautier, Jacques; Davoine, Paule-Annick; Cunty, Claire

    2018-05-01

    The analysis of spatio-temporal events, and especially of the relationships between their different dimensions (space, time and thematic attributes), can be done with geovisualization interfaces. However, few geovisualization tools integrate the cyclic dimension of spatio-temporal event series (natural or social events). Time Coil and Time Wave diagrams represent both linear time and cyclic time. By introducing a cyclic temporal scale, these diagrams can highlight the cyclic characteristics of spatio-temporal events. However, the cyclic temporal scales that can be set are limited to usual durations such as days or months. As a result, these diagrams cannot be used to visualize cyclic events that reappear with an unusual period, and they do not support a visual search for cyclic events. Nor do they make it possible to identify relationships between the cyclic behavior of events and their spatial features, and in particular to identify localised cyclic events. The lack of means to represent cyclic time outside the temporal diagram of multi-view geovisualization interfaces limits the analysis of relationships between the cyclic reappearance of events and their other dimensions. In this paper, we propose a method and a geovisualization tool, based on an extension of Time Coil and Time Wave, that support a visual search for cyclic events by allowing any duration to be set as the diagram's cyclic temporal scale. We also propose a symbology approach that brings the representation of cyclic time into the map, in order to improve the analysis of relationships between space and the cyclic behavior of events.
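
    The key extension described above is letting the user set any duration as the cyclic temporal scale; a minimal sketch of that mapping (the function, period and timestamps are hypothetical).

        from datetime import datetime, timedelta

        def cyclic_position(timestamps, period, origin):
            # Map each event timestamp to a position in [0, 1) along a user-defined
            # cycle, so events that reappear with that period line up visually.
            return [((t - origin) % period) / period for t in timestamps]

        origin = datetime(2018, 1, 1)
        period = timedelta(days=11, hours=6)        # an "unusual" period, not a day or month
        events = [origin + i * period + timedelta(hours=2) for i in range(5)]
        print([round(p, 3) for p in cyclic_position(events, period, origin)])
        # All five events map to (almost) the same cyclic position, so they would stack
        # up on the cyclic scale of a Time Coil / Time Wave style diagram.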

  19. Flow disturbance due to presence of the vane anemometer

    NASA Astrophysics Data System (ADS)

    Bujalski, M.; Gawor, M.; Sobczyk, J.

    2014-08-01

    This paper presents the results of preliminary experimental investigations of the disturbance of the velocity field resulting from placing a vane anemometer in the analyzed air flow. Experiments were conducted in a closed-loop wind tunnel. For the measurement process, the Particle Image Velocimetry (PIV) method was used to visualize the flow structure and evaluate the instantaneous, two-dimensional velocity vector fields. Regions of inflow onto the vane anemometer as well as the flow behind it were examined. Ensemble-averaged velocity distributions and root-mean-square (RMS) velocity fluctuations were determined. The results are presented in the form of contour-velocity maps and profile plots. In order to investigate velocity fluctuations in the wake of the vane anemometer with high temporal resolution, the hot-wire anemometry (HWA) technique was used. Frequency analysis by means of the Fast Fourier Transform was carried out. The obtained results give evidence of a significant, spatially and temporally complex flow disturbance in the vicinity of the analyzed instrument.
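
    A minimal sketch of the frequency-analysis step, an FFT of a (here synthetic) hot-wire velocity signal to locate the dominant periodic fluctuation in the wake; the sampling rate and signal content are hypothetical.

        import numpy as np

        fs = 10_000.0                               # hypothetical hot-wire sampling rate (Hz)
        t = np.arange(0, 1.0, 1.0 / fs)
        # Synthetic wake signal: mean flow + a periodic component + broadband turbulence.
        u = (5.0 + 0.4 * np.sin(2 * np.pi * 120.0 * t)
             + 0.2 * np.random.default_rng(3).standard_normal(t.size))

        spectrum = np.abs(np.fft.rfft(u - u.mean())) / t.size
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
        print(f"dominant fluctuation frequency: {freqs[spectrum.argmax()]:.1f} Hz")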

  20. Visualization of spatial-temporal data based on 3D virtual scene

    NASA Astrophysics Data System (ADS)

    Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang

    2009-10-01

    The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, using three-dimensional visualization technology combined with GIS, so that people's ability to cognize time and space is enhanced and improved through the design of dynamic symbols and interactive expression. Using particle systems, three-dimensional simulation, virtual reality and other visual means, we can simulate the situations produced by changes in the spatial location and property information of geographical entities over time, explore and analyze their movement and transformation rules through interaction, and replay history or forecast the future. The main research objects of this paper are vehicle tracks and typhoon paths together with their spatial-temporal data; through three-dimensional dynamic simulation of these tracks, we realize timely monitoring of their trends and replaying of their historical tracks. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information: they not only show the developments and changes of a situation more clearly, but can also be used for the prediction and deduction of future developments and changes.

  1. Temporal information entropy of the Blood-Oxygenation Level-Dependent signals increases in the activated human primary visual cortex

    NASA Astrophysics Data System (ADS)

    DiNuzzo, Mauro; Mascali, Daniele; Moraschi, Marta; Bussu, Giorgia; Maraviglia, Bruno; Mangia, Silvia; Giove, Federico

    2017-02-01

    Time-domain analysis of blood-oxygenation level-dependent (BOLD) signals allows the identification of clusters of voxels responding to photic stimulation in primary visual cortex (V1). However, the characterization of information encoding into temporal properties of the BOLD signals of an activated cluster is poorly investigated. Here, we used Shannon entropy to determine spatial and temporal information encoding in the BOLD signal within the most strongly activated area of the human visual cortex during a hemifield photic stimulation. We determined the distribution profile of BOLD signals during epochs at rest and under stimulation within small (19-121 voxels) clusters designed to include only voxels driven by the stimulus as highly and uniformly as possible. We found consistent and significant increases (2-4% on average) in temporal information entropy during activation in contralateral but not ipsilateral V1, which was mirrored by an expected loss of spatial information entropy. These opposite changes coexisted with increases in both spatial and temporal mutual information (i.e. dependence) in contralateral V1. Thus, we showed that the first cortical stage of visual processing is characterized by a specific spatiotemporal rearrangement of intracluster BOLD responses. Our results indicate that while in the space domain BOLD maps may be incapable of capturing the functional specialization of small neuronal populations due to relatively low spatial resolution, some information encoding may still be revealed in the temporal domain by an increase of temporal information entropy.
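
    A minimal sketch of temporal Shannon entropy estimated from a histogram of a single voxel's BOLD amplitudes, comparing a rest epoch with a stimulation epoch; the synthetic signals and bin count are assumptions, not the authors' data or exact estimator.

        import numpy as np

        def shannon_entropy(signal, bins=16):
            # Temporal Shannon entropy (bits) of one voxel's time course, estimated
            # from a histogram of its signal amplitudes.
            counts, _ = np.histogram(signal, bins=bins)
            p = counts / counts.sum()
            p = p[p > 0]
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(4)
        rest = rng.normal(0.0, 1.0, 200)            # hypothetical rest epoch
        # Stimulation epoch: same noise plus a block-design response, which broadens
        # the amplitude distribution and tends to raise the temporal entropy.
        stim = rng.normal(0.0, 1.0, 200) + np.tile(np.r_[np.zeros(20), 2 * np.ones(20)], 5)
        print(f"entropy rest: {shannon_entropy(rest):.2f} bits, "
              f"stim: {shannon_entropy(stim):.2f} bits")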

  2. Temporal processing dysfunction in schizophrenia.

    PubMed

    Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-07-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.

  3. Geothermal Prospecting with Remote Sensing and Geographical Information System Technologies in Xilingol Volcanic Field in the Eastern Inner Mongolia, NE China

    NASA Astrophysics Data System (ADS)

    Peng, F.; Huang, S.; Xiong, Y.; Zhao, Y.; Cheng, Y.

    2013-05-01

    Geothermal energy is a renewable and low-carbon energy source independent of climate change. It is most abundant in Cenozoic volcanic areas where high temperatures can be reached at relatively shallow depth. Like other geological resources, geothermal resource prospecting and exploration require a good understanding of the host media. Remote sensing (RS) has the advantages of high spatial and temporal resolution and broad spatial coverage over conventional geological and geophysical prospecting, while geographical information systems (GIS) are intuitive, flexible, and convenient. In this study, we apply RS and GIS techniques to prospecting the geothermal energy potential of Xilingol, a Cenozoic volcanic field in the eastern Inner Mongolia, NE China. Landsat TM/ETM+ multi-temporal images taken under clear-sky conditions, digital elevation model (DEM) data, and other auxiliary data including geological maps at 1:2,500,000 and 1:200,000 scales are used in this study. The land surface temperature (LST) of the study area is retrieved from the Landsat images with the single-channel algorithm on the ENVI platform developed by ITT Visual Information Solutions. Information on linear and circular geological structures is then extracted from the LST maps and compared to the existing geological data. Several useful techniques such as principal component analysis (PCA), vegetation suppression, multi-temporal comparative analysis, and 3D Surface View based on DEM data are used to further enable a better visual geologic interpretation of the Landsat imagery of Xilingol. Preliminary results show that major faults in the study area are mainly NE and NNE oriented. Several major volcanism-controlling faults and Cenozoic volcanic eruption centers have been recognized from the linear and circular structures in the remote images. Seven areas have been identified as potential targets for further geothermal energy prospecting based on the visual interpretation of the geological structures. The study shows that GIS and RS have great application potential in geothermal exploration in volcanic areas and will promote the exploration of renewable energy resources of great potential.
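
    A minimal sketch of one of the listed processing steps, principal component analysis of a co-registered band stack (later components often enhance subtle contrasts useful for lineament interpretation); the data here are random stand-ins for Landsat TM/ETM+ bands.

        import numpy as np

        def band_stack_pca(bands):
            # Principal component images from a stack of co-registered bands
            # (shape: n_bands x rows x cols).
            n, r, c = bands.shape
            X = bands.reshape(n, -1).astype(float)
            X -= X.mean(axis=1, keepdims=True)
            eigvals, eigvecs = np.linalg.eigh(np.cov(X))
            order = eigvals.argsort()[::-1]
            pcs = eigvecs[:, order].T @ X
            return pcs.reshape(n, r, c), eigvals[order]

        rng = np.random.default_rng(5)
        bands = rng.normal(100, 10, (4, 64, 64))    # hypothetical 4-band, 64x64 subset
        pc_images, variance = band_stack_pca(bands)
        print("variance explained per PC:", np.round(variance / variance.sum(), 2))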

  4. Temporal Influence on Awareness

    DTIC Science & Technology

    1995-12-01

    List-of-figures excerpt (no abstract indexed for this record): Test Setup Timing: Measured vs Expected Modal Delays (in ms); Experiment I: visual and auditory stimuli presented simultaneously, visual-auditory delay = 0 ms, visual-visual delay = 0 ms; Experiment II: visual and auditory stimuli presented in order, visual-auditory delay = 0 ms, visual-visual delay = variable.

  5. Effects of Spatio-Temporal Aliasing on Pilot Performance in Active Control Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Peter; Sweet, Barbara

    2010-01-01

    Spatio-temporal aliasing affects pilot performance and control behavior. For increasing refresh rates: 1) Significant change in control behavior: a) Increase in visual gain and neuromuscular frequency. b) Decrease in visual time delay. 2) Increase in tracking performance: a) Decrease in RMSe. b) Increase in crossover frequency.

  6. Visualizing Interaction Patterns in Online Discussions and Indices of Cognitive Presence

    ERIC Educational Resources Information Center

    Gibbs, William J.

    2006-01-01

    This paper discusses Mapping Temporal Relations of Discussions Software (MTRDS), a Web-based application that visually represents the temporal relations of online discussions. MTRDS was used to observe interaction characteristics of three online discussions. In addition, the research employed the Practical Inquiry Model to identify indices of…

  7. Perceptual training yields rapid improvements in visually impaired youth.

    PubMed

    Nyquist, Jeffrey B; Lappin, Joseph S; Zhang, Ruyuan; Tadin, Duje

    2016-11-30

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events.

  8. Fixational Eye Movements in the Earliest Stage of Metazoan Evolution

    PubMed Central

    Bielecki, Jan; Høeg, Jens T.; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur. PMID:23776673

  9. Fixational eye movements in the earliest stage of metazoan evolution.

    PubMed

    Bielecki, Jan; Høeg, Jens T; Garm, Anders

    2013-01-01

    All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems and their eyes are assumedly subject to the same adaptive problems as the vertebrate eye, but lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements have evolved at the earliest metazoan stage to compensate for the intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye it circumvents the need for post processing in the central nervous system to remove image blur.

  10. Emergence of artistic talent in frontotemporal dementia.

    PubMed

    Miller, B L; Cummings, J; Mishkin, F; Boone, K; Prince, F; Ponton, M; Cotman, C

    1998-10-01

    To describe the clinical, neuropsychological, and imaging features of five patients with frontotemporal dementia (FTD) who acquired new artistic skills in the setting of dementia. Creativity in the setting of dementia has recently been reported. We describe five patients who became visual artists in the setting of FTD. Sixty-nine FTD patients were interviewed regarding visual abilities. Five became artists in the early stages of FTD. Their history, artistic process, neuropsychology, and anatomy are described. On SPECT or pathology, four of the five patients had the temporal variant of FTD in which anterior temporal lobes are involved but the dorsolateral frontal cortex is spared. Visual skills were spared but language and social skills were devastated. Loss of function in the anterior temporal lobes may lead to the "facilitation" of artistic skills. Patients with the temporal lobe variant of FTD offer a window into creativity.

  11. Functional and morphological assessment of ocular structures and follow-up of patients with early-stage Parkinson's disease.

    PubMed

    Hasanov, Samir; Demirkilinc Biler, Elif; Acarer, Ahmet; Akkın, Cezmi; Colakoglu, Zafer; Uretmen, Onder

    2018-05-09

    To prospectively evaluate and follow up functional and morphological changes of the optic nerve and ocular structures in patients with early-stage Parkinson's disease. Nineteen patients with a diagnosis of early-stage Parkinson's disease and 19 age-matched healthy controls were included in the study. All participants were examined a minimum of three times, at intervals of at least 6 months after the initial examination. Pattern visual evoked potentials (VEP), contrast sensitivity assessments under photopic conditions, color vision tests with Ishihara cards, and full-field visual field tests were performed, in addition to measurement of retinal nerve fiber layer (RNFL) thickness in four quadrants (top, bottom, nasal, temporal), central and mean macular thickness, and macular volume. Best corrected visual acuity was significantly lower in the study group at all three examinations. Contrast sensitivity values of the patient group were significantly lower at all spatial frequencies. The P100 wave latency of the VEP was significantly longer, and its amplitude lower, in the patient group; however, no significant deterioration was observed during follow-up. Although average peripapillary RNFL thickness did not differ significantly between groups, RNFL thickness in the upper quadrant was thinner in the patient group. While there was initially no difference between the groups in mean macular thickness or total macular volume, a significant decrease occurred in the patient group during follow-up. Over the initial and follow-up examinations, a significant deterioration in the visual field was observed in the patient group. Structural and functional abnormalities, demonstrated electrophysiologically and morphologically, are present in different parts of the visual pathways in early-stage Parkinson's disease.

  12. High-resolution imaging of retinal nerve fiber bundles in glaucoma using adaptive optics scanning laser ophthalmoscopy.

    PubMed

    Takayama, Kohei; Ooto, Sotaro; Hangai, Masanori; Ueda-Arakawa, Naoko; Yoshida, Sachiko; Akagi, Tadamichi; Ikeda, Hanako Ohashi; Nonaka, Atsushi; Hanebuchi, Masaaki; Inoue, Takashi; Yoshimura, Nagahisa

    2013-05-01

    To detect pathologic changes in retinal nerve fiber bundles in glaucomatous eyes on images obtained by adaptive optics (AO) scanning laser ophthalmoscopy (AO SLO). Prospective cross-sectional study. Twenty-eight eyes of 28 patients with open-angle glaucoma and 21 normal eyes of 21 volunteer subjects underwent a full ophthalmologic examination, visual field testing using a Humphrey Field Analyzer, fundus photography, red-free SLO imaging, spectral-domain optical coherence tomography, and imaging with an original prototype AO SLO system. The AO SLO images showed many hyperreflective bundles suggesting nerve fiber bundles. In glaucomatous eyes, the nerve fiber bundles were narrower than in normal eyes, and the nerve fiber layer thickness was correlated with the nerve fiber bundle widths on AO SLO (P < .001). In the nerve fiber layer defect area on fundus photography, the nerve fiber bundles on AO SLO were narrower compared with those in normal eyes (P < .001). At 60 degrees on the inferior temporal side of the optic disc, the nerve fiber bundle width was significantly lower, even in areas without nerve fiber layer defect, in glaucomatous eyes compared with normal eyes (P = .026). The mean deviations of each cluster in visual field testing were correlated with the corresponding nerve fiber bundle widths (P = .017). AO SLO images showed reduced nerve fiber bundle widths both in clinically normal and abnormal areas of glaucomatous eyes, and these abnormalities were associated with visual field defects, suggesting that AO SLO may be useful for detecting early nerve fiber bundle abnormalities associated with loss of visual function. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Resolution of spatial and temporal visual attention in infants with fragile X syndrome.

    PubMed

    Farzin, Faraz; Rivera, Susan M; Whitney, David

    2011-11-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal-parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual deficits related to fragile X syndrome. Eye tracking was used to psychophysically measure the limits of spatial and temporal attention in infants with fragile X syndrome and age-matched neurotypically developing infants. Results from these experiments revealed that infants with fragile X syndrome experience drastically reduced resolution of temporal attention in a genetic dose-sensitive manner, but have a spatial resolution of attention that is not impaired. Coarse temporal attention could have significant knock-on effects for the development of perceptual, cognitive and motor abilities in individuals with the disorder.

  14. Resolution of spatial and temporal visual attention in infants with fragile X syndrome

    PubMed Central

    Rivera, Susan M.; Whitney, David

    2011-01-01

    Fragile X syndrome is the most common cause of inherited intellectual impairment and the most common single-gene cause of autism. Individuals with fragile X syndrome present with a neurobehavioural phenotype that includes selective deficits in spatiotemporal visual perception associated with neural processing in frontal–parietal networks of the brain. The goal of the current study was to examine whether reduced resolution of spatial and/or temporal visual attention may underlie perceptual deficits related to fragile X syndrome. Eye tracking was used to psychophysically measure the limits of spatial and temporal attention in infants with fragile X syndrome and age-matched neurotypically developing infants. Results from these experiments revealed that infants with fragile X syndrome experience drastically reduced resolution of temporal attention in a genetic dose-sensitive manner, but have a spatial resolution of attention that is not impaired. Coarse temporal attention could have significant knock-on effects for the development of perceptual, cognitive and motor abilities in individuals with the disorder. PMID:22075522

  15. Modulation of human extrastriate visual processing by selective attention to colours and words.

    PubMed

    Nobre, A C; Allison, T; McCarthy, G

    1998-07-01

    The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.

  16. Orientation dependent modulation of apparent speed: a model based on the dynamics of feed-forward and horizontal connectivity in V1 cortex.

    PubMed

    Seriès, Peggy; Georges, Sébastien; Lorenceau, Jean; Frégnac, Yves

    2002-11-01

    Psychophysical and physiological studies suggest that long-range horizontal connections in primary visual cortex participate in spatial integration and contour processing. Until recently, little attention has been paid to their intrinsic temporal properties. Recent physiological studies indicate, however, that the propagation of activity through long-range horizontal connections is slow, with time scales comparable to the perceptual scales involved in motion processing. Using a simple model of V1 connectivity, we explore some of the implications of this slow dynamics. The model predicts that V1 responses to a stimulus in the receptive field can be modulated by a previous stimulation, a few milliseconds to a few tens of milliseconds before, in the surround. We analyze this phenomenon and its possible consequences on speed perception, as a function of the spatio-temporal configuration of the visual inputs (relative orientation, spatial separation, temporal interval between the elements, sequence speed). We show that the dynamical interactions between feed-forward and horizontal signals in V1 can explain why the perceived speed of fast apparent motion sequences strongly depends on the orientation of their elements relative to the motion axis and can account for the range of speed for which this perceptual effect occurs (Georges, Seriès, Frégnac and Lorenceau, this issue).

  17. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Xavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study that precisely measures what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images, however, naturally generates artificial, redundant, and incorrect spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel, asynchronous, and of high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, producing output that is optimally sparse in space and time: each pixel reports an individually and precisely timed event only when new (previously unknown) information is available. This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when it reaches the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes at each temporal resolution shows that high temporal resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
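
    A minimal, hypothetical sketch of the frame-based baseline this abstract argues against: gray levels are latency-coded into spike times, and those times are then coarsened to a conventional camera frame rate. The window length and frame rate are illustrative assumptions, not values from the letter.

    ```python
    # Hypothetical illustration (not the authors' pipeline): latency coding of a
    # gray-level image into one spike time per pixel, then re-quantising the times
    # to a frame-based acquisition grid to mimic reduced temporal precision.
    import numpy as np

    def gray_to_spike_times(image, window_ms=10.0):
        """Brighter pixels fire earlier: map intensities in [0, 255] to latencies in [0, window_ms]."""
        norm = image.astype(float) / 255.0
        return (1.0 - norm) * window_ms

    def quantize_times(spike_times, frame_rate_hz):
        """Collapse precise spike times onto the period of a frame-based camera (e.g. 30-60 Hz)."""
        period_ms = 1000.0 / frame_rate_hz
        return np.round(spike_times / period_ms) * period_ms

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(4, 4))
    precise = gray_to_spike_times(img)        # high-resolution, event-like timings
    coarse = quantize_times(precise, 60)      # the same timings on a 60 Hz frame grid
    print(precise.round(2))
    print(coarse)
    ```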

  19. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    PubMed

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated temporal lobe structures, which are resected during ATLR, more frequently than did verbal fluency. Controlling for auditory and visual input resulted in more left-lateralised activations. We hypothesise that these paradigms may be more predictive of postoperative language decline than verbal fluency fMRI. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. The role of the human pulvinar in visual attention and action: evidence from temporal-order judgment, saccade decision, and antisaccade tasks.

    PubMed

    Arend, Isabel; Machado, Liana; Ward, Robert; McGrath, Michelle; Ro, Tony; Rafal, Robert D

    2008-01-01

    The pulvinar nucleus of the thalamus has been considered as a key structure for visual attention functions (Grieve, K.L. et al. (2000). Trends Neurosci., 23: 35-39; Shipp, S. (2003). Philos. Trans. R. Soc. Lond. B Biol. Sci., 358(1438): 1605-1624). During the past several years, we have studied the role of the human pulvinar in visual attention and oculomotor behaviour by testing a small group of patients with unilateral pulvinar lesions. Here we summarize some of these findings, and present new evidence for the role of this structure in both eye movements and visual attention through two versions of a temporal-order judgment task and an antisaccade task. Pulvinar damage induces an ipsilesional bias in perceptual temporal-order judgments and in saccadic decision, and also increases the latency of antisaccades away from contralesional targets. The demonstration that pulvinar damage affects both attention and oculomotor behaviour highlights the role of this structure in the integration of visual and oculomotor signals and, more generally, its role in flexibly linking visual stimuli with context-specific motor responses.

  1. Digital holographic microscopy for detection of Trypanosoma cruzi parasites in fresh blood mounts

    NASA Astrophysics Data System (ADS)

    Romero, G. G.; Monaldi, A. C.; Alanís, E. E.

    2012-03-01

    An off-axis holographic microscope, in a transmission mode, calibrated to automatically detect the presence of Trypanosoma cruzi in blood is developed as an alternative diagnosis tool for Chagas disease. Movements of the microorganisms are detected by measuring the phase shift they produce on the transmitted wave front. A thin layer of blood infected by Trypanosoma cruzi parasites is examined in the holographic microscope, the images of the visual field being registered with a CCD camera. Two consecutive holograms of the same visual field are subtracted point by point and a phase contrast image of the resulting hologram is reconstructed by means of the angular spectrum propagation algorithm. This method enables the measurement of phase distributions corresponding to temporal differences between digital holograms in order to detect whether parasites are present or not. Experimental results obtained using this technique show that it is an efficient alternative that can be incorporated successfully as a part of a fully automatic system for detection and counting of this type of microorganisms.
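
    As a loose illustration of the reconstruction steps sketched in this abstract (point-by-point subtraction of two consecutive holograms, then phase reconstruction by angular spectrum propagation), the following Python sketch may help; the wavelength, pixel pitch, and propagation distance are illustrative assumptions, not the paper's values.

    ```python
    # Minimal sketch, not the authors' code: phase-difference imaging of two
    # digital holograms via the angular spectrum propagation method.
    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, z):
        """Propagate a complex field over distance z with the angular spectrum method."""
        n = field.shape[0]
        f = np.fft.fftfreq(n, d=dx)
        fx, fy = np.meshgrid(f, f)
        arg = 1.0 / wavelength**2 - fx**2 - fy**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))    # evanescent components suppressed
        transfer = np.exp(1j * kz * z) * (arg > 0)
        return np.fft.ifft2(np.fft.fft2(field) * transfer)

    def phase_difference_map(holo1, holo2, wavelength=633e-9, dx=5e-6, z=5e-3):
        """Subtract consecutive holograms point by point and reconstruct the phase of the difference."""
        diff = holo2.astype(float) - holo1.astype(float)
        rec = angular_spectrum_propagate(diff, wavelength, dx, z)
        return np.angle(rec)                              # phase highlights moving objects

    # Toy usage: two "holograms" that differ only in a small region (a moving object).
    rng = np.random.default_rng(0)
    h1 = rng.random((256, 256))
    h2 = h1.copy()
    h2[100:110, 100:110] += 0.5
    print(phase_difference_map(h1, h2).shape)             # (256, 256)
    ```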

  2. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  3. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  4. Magnifying visual target information and the role of eye movements in motor sequence learning.

    PubMed

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing in eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes in the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Aging and Visual Function of Military Pilots: A Review

    DTIC Science & Technology

    1982-08-01

    … loss with age in the temporal resolving power of the visual system. Temporally contiguous visual events that would be seen as separate …

  6. Spatio-Temporal Story Mapping Animation Based On Structured Causal Relationships Of Historical Events

    NASA Astrophysics Data System (ADS)

    Inoue, Y.; Tsuruoka, K.; Arikawa, M.

    2014-04-01

    In this paper, we propose a user interface that displays visual animations on geographic maps and timelines to depict historical stories by representing causal relationships among events over time. We have been developing an experimental software system for the spatio-temporal visualization of historical stories on tablet computers. The proposed system helps people learn historical stories effectively through visual animations based on hierarchical structures of timelines and maps at different scales.

  7. Latencies in BOLD response during visual attention processes.

    PubMed

    Kellermann, Thilo; Reske, Martina; Jansen, Andreas; Satrapi, Peyman; Shah, N Jon; Schneider, Frank; Habel, Ute

    2011-04-22

    One well-investigated division of attentional processes focuses on alerting, orienting and executive control, which can be assessed applying the attentional network test (ANT). The goal of the present study was to add further knowledge about the temporal dynamics of relevant neural correlates. As a right hemispheric dominance for alerting and orienting has previously been reported for intrinsic but not for phasic alertness, we additionally addressed a potential impact of this lateralization of attention by employing a lateralized version of the ANT, capturing phasic alertness processes. Sixteen healthy subjects underwent event-related functional magnetic resonance imaging (fMRI) while performing the ANT. Analyses of BOLD magnitude replicated the engagement of a fronto-parietal network in the attentional subsystems. The amplitudes of the attentional contrasts interacted with visual field presentation in the sense that the thalamus revealed a greater involvement for spatially cued items presented in the left visual field. Comparisons of BOLD latencies in visual cortices, first, verified faster BOLD responses following contra-lateral stimulus presentation. Second and more importantly, we identified attention-modulated activation in secondary visual and anterior cingulate cortices. Results are discussed in terms of bottom-up and lateralization processes. Although intrinsic and phasic alertness are distinct cognitive processes, we propose that neural substrates of intrinsic alertness may be accessed by phasic alertness provided that the attention-dominant (i.e., the right) hemisphere is activated directly by a warning stimulus. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  9. Clinical implications of parallel visual pathways.

    PubMed

    Bassi, C J; Lehmkuhle, S

    1990-02-01

    Visual information travels from the retina to visual cortical areas along at least two parallel pathways. In this paper, anatomical and physiological evidence is presented to demonstrate the existence of, and trace, these two pathways throughout the visual systems of the cat, primate, and human. Physiological and behavioral experiments are discussed which establish that these two pathways are differentially sensitive to stimuli that vary in spatial and temporal frequency. One pathway (M-pathway) is more sensitive to coarse visual form that is modulated or moving at fast rates, whereas the other pathway (P-pathway) is more sensitive to spatial detail that is stationary or moving at slow rates. This difference between the M- and P-pathways is related to some spatial and temporal effects observed in humans. Furthermore, evidence is presented that certain diseases selectively compromise the functioning of the M- or P-pathways (i.e., glaucoma, Alzheimer's disease, and anisometropic amblyopia), and some of the spatial and temporal deficits observed in these patients are presented within the context of M- or P-pathway dysfunction.

  10. Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality

    NASA Astrophysics Data System (ADS)

    Cherukuru, Nihanth Wagmi

    Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal resolution measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past several years, work on lidars has explored three primary methods of retrieving wind vectors: the assumption of a homogeneous wind field, computationally intensive variational methods, and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single- and dual-Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied to lidar scans from an offshore wind farm, and validated against data from a cup-and-vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented for visualizing data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors such as Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields, as well as with a few earth science datasets for education and outreach activities.
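
    In its simplest form, the dual-Doppler idea mentioned above reduces to a small linear system relating two radial velocity measurements to the horizontal wind components. The sketch below shows only that geometric core; it is a hypothetical simplification, not the variational retrieval developed in this work.

    ```python
    # Illustrative sketch: least-squares estimate of the horizontal wind (u, v) from
    # radial velocities measured by two Doppler lidars at different azimuths.
    import numpy as np

    def dual_doppler_wind(radial_velocities, azimuths_deg):
        """Solve v_r_i = u*sin(az_i) + v*cos(az_i) for (u, v); azimuths clockwise from north."""
        az = np.radians(np.asarray(azimuths_deg))
        A = np.column_stack([np.sin(az), np.cos(az)])
        uv, *_ = np.linalg.lstsq(A, np.asarray(radial_velocities), rcond=None)
        return uv  # (u, v): eastward and northward wind components

    # A 10 m/s westerly wind (u = 10, v = 0) seen by beams pointing east (90°) and north-east (45°):
    vr = [10.0, 10.0 * np.sin(np.radians(45))]
    print(dual_doppler_wind(vr, [90, 45]))  # approximately [10, 0]
    ```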

  11. Learning Peri-saccadic Remapping of Receptive Field from Experience in Lateral Intraparietal Area.

    PubMed

    Wang, Xiao; Wu, Yan; Zhang, Mingsha; Wu, Si

    2017-01-01

    Our eyes move constantly, at a frequency of 3-5 times per second. These movements, called saccades, sweep visual images across the retina, yet we perceive the world as stable. It has been suggested that the brain achieves this visual stability via predictive remapping of the neuronal receptive field (RF). A recent experimental study disclosed details of this remapping process in the lateral intraparietal area (LIP): around the time of the saccade, the neuronal RF temporarily expands along the saccadic trajectory, covering the current RF (CRF), the future RF (FRF), and the region the eye will sweep through during the saccade. A cortical wave (CW) model was also proposed, which attributes the RF remapping to neural activity propagating in the cortex, triggered jointly by a visual stimulus and the corollary discharge (CD) signal responsible for the saccade. In this study, we investigate how this CW model is learned naturally from visual experience during the development of the brain. We build a two-layer network, with one layer consisting of LIP neurons and the other of superior colliculus (SC) neurons. Initially, neuronal connections are random and non-selective. A saccade causes a static visual image to sweep passively across the retina, creating the effect of a visual stimulus moving in the direction opposite to the saccade. According to the spike-timing-dependent plasticity (STDP) rule, the connection path between LIP neurons in the direction opposite to the saccade and the connection path from SC to LIP are both strengthened. Over many such visual experiences, the CW model develops and generates the peri-saccadic RF remapping in LIP observed in the experiment.
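
    A minimal sketch of the pairwise spike-timing-dependent plasticity update invoked in this abstract, applied to a single pre/post spike pair; the constants are generic textbook values, not parameters from the authors' model.

    ```python
    # Hypothetical illustration: one pairwise STDP weight update. Repeated
    # pre-before-post pairings (as produced by a passive image sweep during a
    # saccade) potentiate the connection along the sweep direction.
    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Potentiate if the presynaptic spike precedes the postsynaptic spike, otherwise depress."""
        dt = t_post - t_pre                     # spike-time difference in ms
        if dt > 0:
            w += a_plus * np.exp(-dt / tau)
        else:
            w -= a_minus * np.exp(dt / tau)
        return float(np.clip(w, 0.0, 1.0))

    w_ab = 0.5
    for _ in range(100):                        # neuron A repeatedly fires 5 ms before neuron B
        w_ab = stdp_update(w_ab, t_pre=0.0, t_post=5.0)
    print(round(w_ab, 3))                       # the A -> B connection is strengthened
    ```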

  12. Learning Peri-saccadic Remapping of Receptive Field from Experience in Lateral Intraparietal Area

    PubMed Central

    Wang, Xiao; Wu, Yan; Zhang, Mingsha; Wu, Si

    2017-01-01

    Our eyes move constantly, at a frequency of 3–5 times per second. These movements, called saccades, sweep visual images across the retina, yet we perceive the world as stable. It has been suggested that the brain achieves this visual stability via predictive remapping of the neuronal receptive field (RF). A recent experimental study disclosed details of this remapping process in the lateral intraparietal area (LIP): around the time of the saccade, the neuronal RF temporarily expands along the saccadic trajectory, covering the current RF (CRF), the future RF (FRF), and the region the eye will sweep through during the saccade. A cortical wave (CW) model was also proposed, which attributes the RF remapping to neural activity propagating in the cortex, triggered jointly by a visual stimulus and the corollary discharge (CD) signal responsible for the saccade. In this study, we investigate how this CW model is learned naturally from visual experience during the development of the brain. We build a two-layer network, with one layer consisting of LIP neurons and the other of superior colliculus (SC) neurons. Initially, neuronal connections are random and non-selective. A saccade causes a static visual image to sweep passively across the retina, creating the effect of a visual stimulus moving in the direction opposite to the saccade. According to the spike-timing-dependent plasticity (STDP) rule, the connection path between LIP neurons in the direction opposite to the saccade and the connection path from SC to LIP are both strengthened. Over many such visual experiences, the CW model develops and generates the peri-saccadic RF remapping in LIP observed in the experiment. PMID:29249953

  13. Different cortical projections from three subdivisions of the rat lateral posterior thalamic nucleus: a single-neuron tracing study with viral vectors.

    PubMed

    Nakamura, Hisashi; Hioki, Hiroyuki; Furuta, Takahiro; Kaneko, Takeshi

    2015-05-01

    The lateral posterior thalamic nucleus (LP) is one of the components of the extrageniculate pathway in the rat visual system, and is cytoarchitecturally divided into three subdivisions--lateral (LPl), rostromedial (LPrm), and caudomedial (LPcm) portions. To clarify the differences in the dendritic fields and axonal arborisations among the three subdivisions, we applied a single-neuron labeling technique with viral vectors to LP neurons. The proximal dendrites of LPl neurons were more numerous than those of LPrm and LPcm neurons, and LPrm neurons tended to have wider dendritic fields than LPl neurons. We then analysed the axonal arborisations of LP neurons by reconstructing the axon fibers in the cortex. The LPl, LPrm and LPcm were different from one another in terms of the projection targets--the main target cortical regions of LPl and LPrm neurons were the secondary and primary visual areas, whereas those of LPcm neurons were the postrhinal and temporal association areas. Furthermore, the principal target cortical layers of LPl neurons in the visual areas were middle layers, but that of LPrm neurons was layer 1. This indicates that LPl and LPrm neurons can be categorised into the core and matrix types of thalamic neurons, respectively, in the visual areas. In addition, LPl neurons formed multiple axonal clusters within the visual areas, whereas the fibers of LPrm neurons were widely and diffusely distributed. It is therefore presumed that these two types of neurons play different roles in visual information processing by dual thalamocortical innervation of the visual areas. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    PubMed

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  15. Sensory Temporal Processing in Adults with Early Hearing Loss

    ERIC Educational Resources Information Center

    Heming, Joanne E.; Brown, Lenora N.

    2005-01-01

    This study examined tactile and visual temporal processing in adults with early loss of hearing. The tactile task consisted of punctate stimulations that were delivered to one or both hands by a mechanical tactile stimulator. Pairs of light emitting diodes were presented on a display for visual stimulation. Responses consisted of YES or NO…

  16. Visual Temporal Processing in Dyslexia and the Magnocellular Deficit Theory: The Need for Speed?

    ERIC Educational Resources Information Center

    McLean, Gregor M. T.; Stuart, Geoffrey W.; Coltheart, Veronika; Castles, Anne

    2011-01-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore "temporal" aspects of magnocellular functioning…

  17. Superior Temporal Activation in Response to Dynamic Audio-Visual Emotional Cues

    ERIC Educational Resources Information Center

    Robins, Diana L.; Hunyadi, Elinora; Schultz, Robert T.

    2009-01-01

    Perception of emotion is critical for successful social interaction, yet the neural mechanisms underlying the perception of dynamic, audio-visual emotional cues are poorly understood. Evidence from language and sensory paradigms suggests that the superior temporal sulcus and gyrus (STS/STG) play a key role in the integration of auditory and visual…

  18. Bilateral Neuroretinitis in Cat Scratch Disease with Exudative, Obliterative Vasculitis in the Optic Disc.

    PubMed

    Tagawa, Yoshiaki; Suzuki, Yasuo; Sakaguchi, Takatoshi; Endoh, Hiroki; Yokoi, Masahiko; Kase, Manabu

    2014-01-01

    A 29-year-old fisherman exhibited optic disc oedema and peripapillary retinal detachment in the right eye, whereas in the left eye, optic atrophy and intraretinal exudates were already observed at the first examination. About 6 months earlier, he had noticed blurred vision in the left eye but took no medication. Visual acuity was 0.4 OD and 0.01 OS. Perimetry showed a large lower-half field defect sparing the central 10° field in the right eye and a large central scotoma in the left eye. Fluorescein angiography showed arteriolar or capillary nonperfusion and hyperpermeability of the surrounding capillaries. Since serological examinations were positive for Bartonella immunoglobulin G (IgG) and other causes of neuroretinitis (NR) were excluded, NR in the present case was attributed to cat scratch disease (CSD). Optic atrophy appeared 2 weeks after onset. Optical coherence tomography 13 weeks after onset revealed severe loss of the retinal nerve fibre layer (RNFL) superior and nasal to the optic disc in both eyes and temporal to the disc in the left eye. Visual acuity of the right eye improved to 1.2 with treatment, whereas the visual field defects persisted. In the present case, CSD-NR led to the abrupt appearance of optic atrophy with severe RNFL loss in the right eye, elicited by exudative, obliterative vasculitis in the superficial layer of the optic disc.

  19. It's about time: revisiting temporal processing deficits in dyslexia.

    PubMed

    Casini, Laurence; Pech-Georgel, Catherine; Ziegler, Johannes C

    2018-03-01

    Temporal processing in French children with dyslexia was evaluated in three tasks: a word identification task requiring implicit temporal processing, and two explicit temporal bisection tasks, one in the auditory and one in the visual modality. Normally developing children matched on chronological age and reading level served as a control group. Children with dyslexia exhibited robust deficits in temporal tasks whether they were explicit or implicit and whether they involved the auditory or the visual modality. First, they presented larger perceptual variability when performing temporal tasks, whereas they showed no such difficulties when performing the same task on a non-temporal dimension (intensity). This dissociation suggests that their difficulties were specific to temporal processing and could not be attributed to lapses of attention, reduced alertness, faulty anchoring, or overall noisy processing. In the framework of cognitive models of time perception, these data point to a dysfunction of the 'internal clock' of dyslexic children. These results are broadly compatible with the recent temporal sampling theory of dyslexia. © 2017 John Wiley & Sons Ltd.

  20. 4D electron microscopy: principles and applications.

    PubMed

    Flannigan, David J; Zewail, Ahmed H

    2012-10-16

    The transmission electron microscope (TEM) is a powerful tool enabling the visualization of atoms with length scales smaller than the Bohr radius at a factor of only 20 larger than the relativistic electron wavelength of 2.5 pm at 200 keV. The ability to visualize matter at these scales in a TEM is largely due to the efforts made in correcting for the imperfections in the lens systems which introduce aberrations and ultimately limit the achievable spatial resolution. In addition to the progress made in increasing the spatial resolution, the TEM has become an all-in-one characterization tool. Indeed, most of the properties of a material can be directly mapped in the TEM, including the composition, structure, bonding, morphology, and defects. The scope of applications spans essentially all of the physical sciences and includes biology. Until recently, however, high resolution visualization of structural changes occurring on sub-millisecond time scales was not possible. In order to reach the ultrashort temporal domain within which fundamental atomic motions take place, while simultaneously retaining high spatial resolution, an entirely new approach from that of millisecond-limited TEM cameras had to be conceived. As shown below, the approach is also different from that of nanosecond-limited TEM, whose resolution cannot offer the ultrafast regimes of dynamics. For this reason "ultrafast electron microscopy" is reserved for the field which is concerned with femtosecond to picosecond resolution capability of structural dynamics. In conventional TEMs, electrons are produced by heating a source or by applying a strong extraction field. Both methods result in the stochastic emission of electrons, with no control over temporal spacing or relative arrival time at the specimen. The timing issue can be overcome by exploiting the photoelectric effect and using pulsed lasers to generate precisely timed electron packets of ultrashort duration. The spatial and temporal resolutions achievable with short intense pulses containing a large number of electrons, however, are limited to tens of nanometers and nanoseconds, respectively. This is because Coulomb repulsion is significant in such a pulse, and the electrons spread in space and time, thus limiting the beam coherence. It is therefore not possible to image the ultrafast elementary dynamics of complex transformations. The challenge was to retain the high spatial resolution of a conventional TEM while simultaneously enabling the temporal resolution required to visualize atomic-scale motions. In this Account, we discuss the development of four-dimensional ultrafast electron microscopy (4D UEM) and summarize techniques and applications that illustrate the power of the approach. In UEM, images are obtained either stroboscopically with coherent single-electron packets or with a single electron bunch. Coulomb repulsion is absent under the single-electron condition, thus permitting imaging, diffraction, and spectroscopy, all with high spatiotemporal resolution, the atomic scale (sub-nanometer and femtosecond). The time resolution is limited only by the laser pulse duration and energy carried by the electron packets; the CCD camera has no bearing on the temporal resolution. In the regime of single pulses of electrons, the temporal resolution of picoseconds can be attained when hundreds of electrons are in the bunch. 
The applications given here are selected to highlight phenomena of different length and time scales, from atomic motions during structural dynamics to phase transitions and nanomechanical oscillations. We conclude with a brief discussion of emerging methods, which include scanning ultrafast electron microscopy (S-UEM), scanning transmission ultrafast electron microscopy (ST-UEM) with convergent beams, and time-resolved imaging of biological structures at ambient conditions with environmental cells.

  1. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspects of its involvement. Three visual search tasks of differing difficulty were carried out: an "easy feature task," a "hard feature task," and a "conjunction task." To investigate the temporal aspects of PPC involvement in visual search, we applied various stimulus onset asynchronies (SOAs) and measured visual search reaction times. Magnetic stimulation was applied over the right or left PPC with a figure-eight coil. The results show that reaction times in the hard feature task are longer than those in the easy feature task. At SOA = 150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We therefore consider that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation. Magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.

  2. The acute effects of cocoa flavanols on temporal and spatial attention.

    PubMed

    Karabay, Aytaç; Saija, Jefta D; Field, David T; Akyürek, Elkan G

    2018-05-01

    In this study, we investigated how the acute physiological effects of cocoa flavanols might result in specific cognitive changes, in particular in temporal and spatial attention. To this end, we pre-registered and implemented a randomized, double-blind, placebo- and baseline-controlled crossover design. A sample of 48 university students participated in the study and each of them completed the experimental tasks in four conditions (baseline, placebo, low dose, and high-dose flavanol), administered in separate sessions with a 1-week washout interval. A rapid serial visual presentation task was used to test flavanol effects on temporal attention and integration, and a visual search task was similarly employed to investigate spatial attention. Results indicated that cocoa flavanols improved visual search efficiency, reflected by reduced reaction time. However, cocoa flavanols did not facilitate temporal attention nor integration, suggesting that flavanols may affect some aspects of attention, but not others. Potential underlying mechanisms are discussed.

  3. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory

    PubMed Central

    Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

    2012-01-01

    Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190
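
    A rough sketch of the generic analysis pattern this abstract describes: a classifier trained on multi-voxel activity patterns yields per-trial category evidence, which can then be related to subsequent memory. The data below are synthetic, and the classifier choice (scikit-learn logistic regression) is an assumption, not the study's actual pipeline.

    ```python
    # Hypothetical illustration: cross-validated classification of visual category
    # (face vs. scene) from simulated voxel patterns, yielding per-trial "evidence".
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 200, 50
    category = rng.integers(0, 2, n_trials)                    # 0 = face, 1 = scene
    patterns = rng.normal(size=(n_trials, n_voxels)) + category[:, None] * 0.4

    clf = LogisticRegression(max_iter=1000)
    proba = cross_val_predict(clf, patterns, category, cv=5, method="predict_proba")
    evidence = proba[np.arange(n_trials), category]            # probability assigned to the true category

    # In the study's logic, trials with stronger category evidence at encoding should be
    # more likely to be remembered later; here we simply report the mean evidence.
    print(round(evidence.mean(), 3))
    ```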

  4. Morphable Word Clouds for Time-Varying Text Data Visualization.

    PubMed

    Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee

    2015-12-01

    A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focuses on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of a time-varying text data from the shape transition, and people can also observe the details from the word clouds in frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.

  5. Nanosecond Time-Resolved Microscopic Gate-Modulation Imaging of Polycrystalline Organic Thin-Film Transistors

    NASA Astrophysics Data System (ADS)

    Matsuoka, Satoshi; Tsutsumi, Jun'ya; Matsui, Hiroyuki; Kamata, Toshihide; Hasegawa, Tatsuo

    2018-02-01

    We develop a time-resolved microscopic gate-modulation (μGM) imaging technique to investigate the temporal evolution of the channel current and accumulated charges in polycrystalline pentacene thin-film transistors (TFTs). A time resolution as high as 50 ns is achieved by using a fast image-intensifier system that amplifies a series of instantaneous optical microscopic images acquired at various time intervals after the stepped gate bias is switched on. The differential images obtained by subtracting the gate-off image allow us to acquire a series of temporal μGM images that clearly show the gradual propagation of both channel charges and leaked gate fields within the polycrystalline channel layers. The frontal positions of the propagating channel charges and leaked gate fields coincide at all time intervals, demonstrating that the layered gate dielectric capacitors are successively charged up transversely along the direction of current propagation. The initial μGM images also indicate that the electric field effect is initially concentrated in a limited area, a few micrometers wide, bordering the channel-electrode interface, and that the field intensity reaches a maximum after 200 ns and then decays. The time required for charge propagation over the whole channel region, 100 μm in length, is estimated at about 900 ns, which is consistent with the measured field-effect mobility and the temporal-response model for organic TFTs. The effect of grain boundaries can also be visualized by comparing the μGM images for the transient and steady states, which confirms that the potential barriers at the grain boundaries cause a transient shift in the accumulated charges or a transient accumulation of additional charges around the grain boundaries.

  6. Deconstruction of spatial integrity in visual stimulus detected by modulation of synchronized activity in cat visual cortex.

    PubMed

    Zhou, Zhiyi; Bernard, Melanie R; Bonds, A B

    2008-04-02

    Spatiotemporal relationships among contour segments can influence synchronization of neural responses in the primary visual cortex. We performed a systematic study to dissociate the impact of spatial and temporal factors in the signaling of contour integration via synchrony. In addition, we characterized the temporal evolution of this process to clarify potential underlying mechanisms. With a 10 x 10 microelectrode array, we recorded the simultaneous activity of multiple cells in the cat primary visual cortex while stimulating with drifting sine-wave gratings. We preserved temporal integrity and systematically degraded spatial integrity of the sine-wave gratings by adding spatial noise. Neural synchronization was analyzed in the time and frequency domains by conducting cross-correlation and coherence analyses. The general association between neural spike trains depends strongly on spatial integrity, with coherence in the gamma band (35-70 Hz) showing greater sensitivity to the change of spatial structure than other frequency bands. Analysis of the temporal dynamics of synchronization in both time and frequency domains suggests that spike timing synchronization is triggered nearly instantaneously by coherent structure in the stimuli, whereas frequency-specific oscillatory components develop more slowly, presumably through network interactions. Our results suggest that, whereas temporal integrity is required for the generation of synchrony, spatial integrity is critical in triggering subsequent gamma band synchronization.
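
    An illustrative computation (not the study's exact pipeline) of spectral coherence between two binned spike trains, with the gamma band (35-70 Hz) averaged as a simple synchronization index; the spike trains here are synthetic placeholders.

        import numpy as np
        from scipy.signal import coherence

        # Hypothetical binned spike trains (1 ms bins) from two simultaneously recorded cells;
        # a shared drive makes their spiking correlated.
        fs = 1000.0
        rng = np.random.default_rng(1)
        common = rng.poisson(0.02, 5000)
        x = np.clip(common + rng.poisson(0.01, 5000), 0, 1)
        y = np.clip(common + rng.poisson(0.01, 5000), 0, 1)

        # Spectral coherence between the two spike trains.
        f, Cxy = coherence(x, y, fs=fs, nperseg=512)

        # Average coherence within the gamma band, as one index of synchronization.
        gamma = (f >= 35) & (f <= 70)
        print('mean gamma-band coherence:', Cxy[gamma].mean())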

  7. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
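
    A minimal sketch of fitting a saturating psychometric function with an exponential time constant to accuracy-versus-presentation-time data of the kind described above; the functional form, data values, and starting parameters are illustrative assumptions, not the authors' exact fit.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical two-alternative forced-choice accuracy vs. presentation time (ms).
        t = np.array([20, 40, 60, 100, 150, 200], dtype=float)
        acc = np.array([0.55, 0.65, 0.74, 0.85, 0.90, 0.92])

        # Chance (0.5) rising toward an asymptote with time constant tau.
        def psy(t, asymptote, tau):
            return 0.5 + (asymptote - 0.5) * (1.0 - np.exp(-t / tau))

        (asymptote, tau), _ = curve_fit(psy, t, acc, p0=(0.9, 50.0))
        print(f'asymptote = {asymptote:.2f}, time constant tau = {tau:.1f} ms')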

  8. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    PubMed

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children and adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level than the control group with regard to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re-education programs in children presenting with deficits in social cue processing.

  9. Role of the mouse retinal photoreceptor ribbon synapse in visual motion processing for optokinetic responses.

    PubMed

    Sugita, Yuko; Araki, Fumiyuki; Chaya, Taro; Kawano, Kenji; Furukawa, Takahisa; Miura, Kenichiro

    2015-01-01

    The ribbon synapse is a specialized synaptic structure in the retinal outer plexiform layer where visual signals are transmitted from photoreceptors to the bipolar and horizontal cells. This structure is considered important in high-efficiency signal transmission; however, its role in visual signal processing is unclear. In order to understand its role in visual processing, the present study utilized Pikachurin-null mutant mice that show improper formation of the photoreceptor ribbon synapse. We examined the initial and late phases of the optokinetic responses (OKRs). The initial phase was examined by measuring the open-loop eye velocity of the OKRs to sinusoidal grating patterns of various spatial frequencies moving at various temporal frequencies for 0.5 s. The mutant mice showed significant initial OKRs with a spatiotemporal frequency tuning (spatial frequency, 0.09 ± 0.01 cycles/°; temporal frequency, 1.87 ± 0.12 Hz) that was slightly different from the wild-type mice (spatial frequency, 0.11 ± 0.01 cycles/°; temporal frequency, 1.66 ± 0.12 Hz). The late phase of the OKRs was examined by measuring the slow phase eye velocity of the optokinetic nystagmus induced by the sinusoidal gratings of various spatiotemporal frequencies moving for 30 s. We found that the optimal spatial and temporal frequencies of the mutant mice (spatial frequency, 0.11 ± 0.02 cycles/°; temporal frequency, 0.81 ± 0.24 Hz) were both lower than those in the wild-type mice (spatial frequency, 0.15 ± 0.02 cycles/°; temporal frequency, 1.93 ± 0.62 Hz). These results suggest that the ribbon synapse modulates the spatiotemporal frequency tuning of visual processing along the ON pathway by which the late phase of OKRs is mediated.

  10. Role of the Mouse Retinal Photoreceptor Ribbon Synapse in Visual Motion Processing for Optokinetic Responses

    PubMed Central

    Sugita, Yuko; Araki, Fumiyuki; Chaya, Taro; Kawano, Kenji; Furukawa, Takahisa; Miura, Kenichiro

    2015-01-01

    The ribbon synapse is a specialized synaptic structure in the retinal outer plexiform layer where visual signals are transmitted from photoreceptors to the bipolar and horizontal cells. This structure is considered important in high-efficiency signal transmission; however, its role in visual signal processing is unclear. In order to understand its role in visual processing, the present study utilized Pikachurin-null mutant mice that show improper formation of the photoreceptor ribbon synapse. We examined the initial and late phases of the optokinetic responses (OKRs). The initial phase was examined by measuring the open-loop eye velocity of the OKRs to sinusoidal grating patterns of various spatial frequencies moving at various temporal frequencies for 0.5 s. The mutant mice showed significant initial OKRs with a spatiotemporal frequency tuning (spatial frequency, 0.09 ± 0.01 cycles/°; temporal frequency, 1.87 ± 0.12 Hz) that was slightly different from the wild-type mice (spatial frequency, 0.11 ± 0.01 cycles/°; temporal frequency, 1.66 ± 0.12 Hz). The late phase of the OKRs was examined by measuring the slow phase eye velocity of the optokinetic nystagmus induced by the sinusoidal gratings of various spatiotemporal frequencies moving for 30 s. We found that the optimal spatial and temporal frequencies of the mutant mice (spatial frequency, 0.11 ± 0.02 cycles/°; temporal frequency, 0.81 ± 0.24 Hz) were both lower than those in the wild-type mice (spatial frequency, 0.15 ± 0.02 cycles/°; temporal frequency, 1.93 ± 0.62 Hz). These results suggest that the ribbon synapse modulates the spatiotemporal frequency tuning of visual processing along the ON pathway by which the late phase of OKRs is mediated. PMID:25955222

  11. Architecture and emplacement of flood basalt flow fields: case studies from the Columbia River Basalt Group, NW USA

    NASA Astrophysics Data System (ADS)

    Vye-Brown, C.; Self, S.; Barry, T. L.

    2013-03-01

    The physical features and morphologies of collections of lava bodies emplaced during single eruptions (known as flow fields) can be used to understand flood basalt emplacement mechanisms. Characteristics and internal features of lava lobes and whole flow field morphologies result from the forward propagation, radial spread, and cooling of individual lobes and are used as a tool to understand the architecture of extensive flood basalt lavas. The features of three flood basalt flow fields from the Columbia River Basalt Group are presented, including the Palouse Falls flow field, a small (8,890 km2, ~190 km3) unit by common flood basalt proportions, which is visualized in three dimensions. The architecture of the Palouse Falls flow field is compared to the complex Ginkgo and more extensive Sand Hollow flow fields to investigate the degree to which simple emplacement models represent the style, as well as the spatial and temporal developments, of flow fields. Evidence from each flow field supports emplacement by inflation as the predominant mechanism producing thick lobes. Inflation enables existing lobes to transmit lava to form new lobes, thus extending the advance and spread of lava flow fields. Minimum emplacement timescales calculated for each flow field are 19.3 years for Palouse Falls, 8.3 years for Ginkgo, and 16.9 years for Sand Hollow. Simple flow fields can be traced from vent to distal areas and an emplacement sequence visualized, but those with multiple-layered lobes present a degree of complexity that makes lava pathways and emplacement sequences more difficult to identify.

  12. Attention reduces spatial uncertainty in human ventral temporal cortex.

    PubMed

    Kay, Kendrick N; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-03-02

    Ventral temporal cortex (VTC) is the latest stage of the ventral "what" visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3-5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal "where" visual pathway [6-10]. Here, we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
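
    A minimal numerical sketch of the model class described above: linear spatial summation over the stimulus aperture followed by a compressive power-law nonlinearity. The grid, parameter values, and stimulus are hypothetical, and the published model is additionally fit to measured voxel responses.

        import numpy as np

        def css_response(S, x0, y0, sigma, gain, n, extent=10.0):
            """Predicted response: gain * (stimulus weighted by a Gaussian pRF, summed) ** n."""
            side = S.shape[0]
            xs = np.linspace(-extent, extent, side)
            X, Y = np.meshgrid(xs, xs)
            prf = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
            drive = np.sum(S * prf)      # linear spatial summation over the aperture
            return gain * drive ** n     # compressive nonlinearity (0 < n < 1)

        # Example: a bar stimulus covering the left half of the visual field.
        S = np.zeros((101, 101))
        S[:, :50] = 1.0
        print(css_response(S, x0=-3.0, y0=0.0, sigma=2.0, gain=0.01, n=0.5))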

  13. Attention reduces spatial uncertainty in human ventral temporal cortex

    PubMed Central

    Kay, Kendrick N.; Weiner, Kevin S.; Grill-Spector, Kalanit

    2014-01-01

    Ventral temporal cortex (VTC) is the latest stage of the ventral ‘what’ visual pathway, which is thought to code the identity of a stimulus regardless of its position or size [1, 2]. Surprisingly, recent studies show that position information can be decoded from VTC [3–5]. However, the computational mechanisms by which spatial information is encoded in VTC are unknown. Furthermore, how attention influences spatial representations in human VTC is also unknown because the effect of attention on spatial representations has only been examined in the dorsal ‘where’ visual pathway [6–10]. Here we fill these significant gaps in knowledge using an approach that combines functional magnetic resonance imaging and sophisticated computational methods. We first develop a population receptive field (pRF) model [11, 12] of spatial responses in human VTC. Consisting of spatial summation followed by a compressive nonlinearity, this model accurately predicts responses of individual voxels to stimuli at any position and size, explains how spatial information is encoded, and reveals a functional hierarchy in VTC. We then manipulate attention and use our model to decipher the effects of attention. We find that attention to the stimulus systematically and selectively modulates responses in VTC, but not early visual areas. Locally, attention increases eccentricity, size, and gain of individual pRFs, thereby increasing position tolerance. However, globally, these effects reduce uncertainty regarding stimulus location and actually increase position sensitivity of distributed responses across VTC. These results demonstrate that attention actively shapes and enhances spatial representations in the ventral visual pathway. PMID:25702580

  14. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checker-board, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns were different in response to different stimulation patterns; this difference in activity pattern was consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated according to the spatio-temporal pattern of the neuronal activities calculated using the MSDD algorithm. PMID:21886670

  15. Bilateral vision loss due to Leber's hereditary optic neuropathy after long-term alcohol, nicotine and drug abuse.

    PubMed

    Maass, Johanna; Matthé, Egbert

    2018-04-01

    Leber's hereditary optic neuropathy is relatively rare, and no clinical pathognomonic signs exist. We present a rare case of bilateral vision loss in a patient with a history of multiple drug abuse. A 31-year-old man presented with a 6-month history of progressive, decreased vision in both eyes. On examination, his visual acuity was hand motion in both eyes. Funduscopy demonstrated temporal pallor of the optic disc. Goldmann visual field perimetry showed a crescent-shaped visual field in the right eye and a circular constriction to less than 50° in the left eye. Electroretinogram showed a scotopic b-wave amplitude reduction. Optical coherence tomography, Heidelberg retina tomography, visual evoked potentials, and magnetic resonance imaging with contrast, as well as blood tests, were normal. The patient reported consuming various recreational drugs and alcohol since he was 16 years old. We started hemodilution therapy, believing the patient suffered from a bilateral toxic optic neuropathy due to his lifestyle. Laboratory results later confirmed Leber's hereditary optic neuropathy. Leber's hereditary optic neuropathy is a rare disease without a typical, pathognomonic presentation. Even though the patient's history suggested a toxic optic neuropathy, one should never stop testing for other diseases.

  16. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

    Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning, with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex, with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time, these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  17. Subliminal convergence of Kanji and Kana words: further evidence for functional parcellation of the posterior temporal cortex in visual word perception.

    PubMed

    Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid

    2005-06-01

    Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.

  18. Enlarged temporal integration window in schizophrenia indicated by the double-flash illusion.

    PubMed

    Haß, Katharina; Sinke, Christopher; Reese, Tanya; Roy, Mandy; Wiswede, Daniel; Dillo, Wolfgang; Oranje, Bob; Szycik, Gregor R

    2017-03-01

    In the present study we were interested in the processing of audio-visual integration in schizophrenia compared to healthy controls. The number of sound-induced double-flash illusions served as an indicator of audio-visual integration. We expected altered integration as well as a different temporal integration window in patients. Fifteen schizophrenia patients and 15 healthy volunteers matched for age and gender were included in this study. We used stimuli with eight different temporal delays (stimulus onset asynchronies (SOAs) of 25, 50, 75, 100, 125, 150, 200 and 300 ms) to induce the double-flash illusion. Group differences and the widths of the temporal integration windows were calculated from the percentages of reported double-flash illusions. Patients showed significantly more illusions (ca. 36-44% vs. 9-16% in control subjects) for SOAs of 150-300 ms. The temporal integration window for control participants extended from SOAs of 25 to 200 ms, whereas for patients integration was found across all included temporal delays. We found no significant relationship between the number of illusions and illness severity, chlorpromazine-equivalent dose, or duration of illness in patients. Our results are interpreted in favour of an enlarged temporal integration window for audio-visual stimuli in schizophrenia patients, which is consistent with previous research.
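
    A simple illustration of how the width of a temporal integration window could be summarized from illusion rates across SOAs; the data values and the 25% criterion are placeholders, and the study's statistical definition of the window may differ.

        import numpy as np

        # Hypothetical illusion rates (% of trials with an illusory second flash) per SOA (ms).
        soas = np.array([25, 50, 75, 100, 125, 150, 200, 300])
        controls = np.array([55, 50, 42, 35, 28, 16, 12, 9])
        patients = np.array([60, 57, 52, 48, 45, 44, 40, 36])

        def integration_window(soas, rates, criterion=25.0):
            """Longest SOA at which the illusion rate still exceeds the criterion level."""
            above = soas[rates >= criterion]
            return above.max() if above.size else 0

        print('control window up to', integration_window(soas, controls), 'ms')
        print('patient window up to', integration_window(soas, patients), 'ms')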

  19. Iconic memory and parietofrontal network: fMRI study using temporal integration.

    PubMed

    Saneyoshi, Ayako; Niimi, Ryosuke; Suetsugu, Tomoko; Kaminaga, Tatsuro; Yokosawa, Kazuhiko

    2011-08-03

    We investigated the neural basis of iconic memory using functional magnetic resonance imaging. The parietofrontal network of selective attention is reportedly relevant to readout from iconic memory. We adopted a temporal integration task that requires iconic memory but not selective attention. The results showed that the task activated the parietofrontal network, confirming that the network is involved in readout from iconic memory. We further tested a condition in which temporal integration was performed by visual short-term memory but not by iconic memory. However, no brain region revealed higher activation for temporal integration by iconic memory than for temporal integration by visual short-term memory. This result suggested that there is no localized brain region specialized for iconic memory per se.

  20. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect.

    PubMed

    Pratt, Hillel; Bleich, Naomi; Mittelman, Nomi

    2015-11-01

    Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect, were studied. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was overall prominent in the left hemisphere, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec after consonant onset. The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec) subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted, and if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.

  1. Binocular Neurons in Parastriate Cortex: Interocular ‘Matching’ of Receptive Field Properties, Eye Dominance and Strength of Silent Suppression

    PubMed Central

    Wang, Chun; Dreher, Bogdan

    2014-01-01

    Spike-responses of single binocular neurons were recorded from a distinct part of primary visual cortex, the parastriate cortex (cytoarchitectonic area 18) of anaesthetized and immobilized domestic cats. Functional identification of neurons was based on the ratios of phase-variant (F1) component to the mean firing rate (F0) of their spike-responses to optimized (orientation, direction, spatial and temporal frequencies and size) sine-wave-luminance-modulated drifting grating patches presented separately via each eye. In over 95% of neurons, the interocular differences in the phase-sensitivities (differences in F1/F0 spike-response ratios) were small (≤0.3) and in over 80% of neurons, the interocular differences in preferred orientations were ≤10°. The interocular correlations of the direction selectivity indices and optimal spatial frequencies, like those of the phase sensitivities and optimal orientations, were also strong (coefficients of correlation r ≥0.7005). By contrast, the interocular correlations of the optimal temporal frequencies, the diameters of summation areas of the excitatory responses and suppression indices were weak (coefficients of correlation r ≤0.4585). In cells with high eye dominance indices (HEDI cells), the mean magnitudes of suppressions evoked by stimulation of silent, extra-classical receptive fields via the non-dominant eyes were significantly greater than those when the stimuli were presented via the dominant eyes. We argue that the well documented ‘eye-origin specific’ segregation of the lateral geniculate inputs underpinning distinct eye dominance columns in primary visual cortices of mammals with frontally positioned eyes, combined with significant interocular differences in the strength of silent suppressive fields, putatively contributes to binocular stereoscopic vision. PMID:24927276
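
    An illustrative computation of the F1/F0 measure used above for functional identification, taken from a cycle-averaged peristimulus time histogram (PSTH); the synthetic PSTHs simply demonstrate the contrast between a strongly modulated (F1/F0 > 1) and an unmodulated (F1/F0 < 1) response.

        import numpy as np

        def f1_f0(psth_one_cycle):
            """Ratio of response modulation at the stimulus frequency (F1) to mean rate (F0)."""
            spectrum = np.fft.rfft(psth_one_cycle) / len(psth_one_cycle)
            f0 = spectrum[0].real             # mean firing rate
            f1 = 2.0 * np.abs(spectrum[1])    # amplitude at the grating's temporal frequency
            return f1 / f0 if f0 > 0 else np.nan

        phase = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        simple_like = np.clip(10 * np.sin(phase), 0, None) + 1.0   # phase-modulated PSTH
        complex_like = np.full(64, 6.0)                            # unmodulated PSTH
        print(f1_f0(simple_like), f1_f0(complex_like))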

  2. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  3. Modulation of V1 Spike Response by Temporal Interval of Spatiotemporal Stimulus Sequence

    PubMed Central

    Kim, Taekjun; Kim, HyungGoo R.; Kim, Kayeon; Lee, Choongkil

    2012-01-01

    The spike activity of single neurons of the primary visual cortex (V1) becomes more selective and reliable in response to wide-field natural scenes compared to smaller stimuli confined to the classical receptive field (RF). However, it is largely unknown what aspects of natural scenes increase the selectivity of V1 neurons. One hypothesis is that modulation by surround interaction is highly sensitive to small changes in spatiotemporal aspects of RF surround. Such a fine-tuned modulation would enable single neurons to hold information about spatiotemporal sequences of oriented stimuli, which extends the role of V1 neurons as a simple spatiotemporal filter confined to the RF. In the current study, we examined the hypothesis in the V1 of awake behaving monkeys, by testing whether the spike response of single V1 neurons is modulated by temporal interval of spatiotemporal stimulus sequence encompassing inside and outside the RF. We used two identical Gabor stimuli that were sequentially presented with a variable stimulus onset asynchrony (SOA): the preceding one (S1) outside the RF and the following one (S2) in the RF. This stimulus configuration enabled us to examine the spatiotemporal selectivity of response modulation from a focal surround region. Although S1 alone did not evoke spike responses, visual response to S2 was modulated for SOA in the range of tens of milliseconds. These results suggest that V1 neurons participate in processing spatiotemporal sequences of oriented stimuli extending outside the RF. PMID:23091631

  4. Learning temporal context shapes prestimulus alpha oscillations and improves visual discrimination performance.

    PubMed

    Toosi, Tahereh; K Tousi, Ehsan; Esteky, Hossein

    2017-08-01

    Time is an inseparable component of every physical event that we perceive, yet it is not clear how the brain processes time or how the neuronal representation of time affects our perception of events. Here we asked subjects to perform a visual discrimination task while we changed the temporal context in which the stimuli were presented. We collected electroencephalography (EEG) signals in two temporal contexts. In predictable blocks stimuli were presented after a constant delay relative to a visual cue, and in unpredictable blocks stimuli were presented after variable delays relative to the visual cue. Four subsecond delays of 83, 150, 400, and 800 ms were used in the predictable and unpredictable blocks. We observed that predictability modulated the power of prestimulus alpha oscillations in the parieto-occipital sites: alpha power increased in the 300-ms window before stimulus onset in the predictable blocks compared with the unpredictable blocks. This modulation only occurred in the longest delay period, 800 ms, in which predictability also improved the behavioral performance of the subjects. Moreover, learning the temporal context shaped the prestimulus alpha power: modulation of prestimulus alpha power grew during the predictable block and correlated with performance enhancement. These results suggest that the brain is able to learn the subsecond temporal context of stimuli and use this to enhance sensory processing. Furthermore, the neural correlate of this temporal prediction is reflected in the alpha oscillations. NEW & NOTEWORTHY It is not well understood how the uncertainty in the timing of an external event affects its processing, particularly at subsecond scales. Here we demonstrate how a predictable timing scheme improves visual processing. We found that learning the predictable scheme gradually shaped the prestimulus alpha power. These findings indicate that the human brain is able to extract implicit subsecond patterns in the temporal context of events. Copyright © 2017 the American Physiological Society.
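
    A sketch of how the prestimulus alpha-power measure described above could be computed per trial: a Welch power spectrum over the 300 ms window before stimulus onset, averaged across 8-12 Hz. The sampling rate, epoch layout, and data are hypothetical placeholders.

        import numpy as np
        from scipy.signal import welch

        fs = 500                                    # sampling rate, Hz (assumed)
        onset = 500                                 # stimulus onset sample within each epoch
        rng = np.random.default_rng(2)
        epochs = rng.standard_normal((100, 750))    # placeholder parieto-occipital EEG epochs

        pre = epochs[:, onset - int(0.3 * fs):onset]             # 300 ms prestimulus window
        f, psd = welch(pre, fs=fs, nperseg=pre.shape[1], axis=-1)
        alpha = (f >= 8) & (f <= 12)
        alpha_power = psd[:, alpha].mean(axis=-1)                # one value per trial

        print('mean prestimulus alpha power:', alpha_power.mean())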

  5. Surface nanobubble nucleation dynamics during water-ethanol exchange

    NASA Astrophysics Data System (ADS)

    Chan, Chon U.; Ohl, Claus-Dieter

    2015-11-01

    Water-ethanol exchange has been a promising nucleation method for surface-attached nanobubbles since their discovery. In this process, water and ethanol displace each other sequentially on a substrate. As gas solubility is 36 times higher in ethanol than in water, it was suggested that the exchange process leads to transient supersaturation and is responsible for the nanobubble nucleation. In this work, we visualize the nucleation dynamics by controllably mixing water and ethanol. This approach captures the temporal evolution of the conventional exchange in a single field of view, detailing the conditions for surface nanobubble nucleation and the flow field that influences the bubbles' spatial organization. This technique can also pattern surface nanobubbles with a variable size distribution.

  6. Mapping paddy rice distribution using multi-temporal Landsat imagery in the Sanjiang Plain, northeast China

    PubMed Central

    XIAO, Xiangming; DONG, Jinwei; QIN, Yuanwei; WANG, Zongming

    2016-01-01

    Information on paddy rice distribution is essential for food production and methane emission calculation. Phenology-based algorithms have been utilized in the mapping of paddy rice fields by identifying the unique flooding and seedling transplanting phases using multi-temporal moderate resolution (500 m to 1 km) images. In this study, we developed simple algorithms to identify paddy rice at a fine resolution at the regional scale using multi-temporal Landsat imagery. Sixteen Landsat images from 2010–2012 were used to generate the 30 m paddy rice map in the Sanjiang Plain, northeast China—one of the major paddy rice cultivation regions in China. Three vegetation indices, Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Land Surface Water Index (LSWI), were used to identify rice fields during the flooding/transplanting and ripening phases. The user and producer accuracies of paddy rice on the resultant Landsat-based paddy rice map were 90% and 94%, respectively. The Landsat-based paddy rice map was an improvement over the paddy rice layer in the National Land Cover Dataset, which was generated through visual interpretation and digitization of fine-resolution images. The agricultural census data substantially underreported paddy rice area, raising serious concern about its use for studies on food security. PMID:27695637
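
    An illustrative computation of the three indices named above from Landsat surface-reflectance bands, together with one commonly used phenology-based flooding criterion (LSWI + 0.05 >= EVI or NDVI during the flooding/transplanting window); the band values and threshold are assumptions, and the study's exact decision rules may differ.

        import numpy as np

        def indices(blue, red, nir, swir):
            ndvi = (nir - red) / (nir + red)
            evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
            lswi = (nir - swir) / (nir + swir)
            return ndvi, evi, lswi

        def flooded(ndvi, evi, lswi):
            # Flooding/transplanting signal: water raises LSWI relative to the greenness indices.
            return (lswi + 0.05 >= evi) | (lswi + 0.05 >= ndvi)

        blue, red, nir, swir = (np.array([0.06]), np.array([0.08]),
                                np.array([0.15]), np.array([0.10]))
        print(flooded(*indices(blue, red, nir, swir)))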

  7. Photon gating in four-dimensional ultrafast electron microscopy.

    PubMed

    Hassan, Mohammed T; Liu, Haihua; Baskin, John Spencer; Zewail, Ahmed H

    2015-10-20

    Ultrafast electron microscopy (UEM) is a pivotal tool for imaging of nanoscale structural dynamics with subparticle resolution on the time scale of atomic motion. Photon-induced near-field electron microscopy (PINEM), a key UEM technique, involves the detection of electrons that have gained energy from a femtosecond optical pulse via photon-electron coupling on nanostructures. PINEM has been applied in various fields of study, from materials science to biological imaging, exploiting the unique spatial, energy, and temporal characteristics of the PINEM electrons gained by interaction with a "single" light pulse. The further potential of photon-gated PINEM electrons in probing ultrafast dynamics of matter and the optical gating of electrons by invoking a "second" optical pulse has previously been proposed and examined theoretically in our group. Here, we experimentally demonstrate this photon-gating technique, and, through diffraction, visualize the phase transition dynamics in vanadium dioxide nanoparticles. With optical gating of PINEM electrons, imaging temporal resolution was improved by a factor of 3 or better, being limited only by the optical pulse widths. This work enables the combination of the high spatial resolution of electron microscopy and the ultrafast temporal response of the optical pulses, which provides a promising approach to attain the resolution of few femtoseconds and attoseconds in UEM.

  8. Photon gating in four-dimensional ultrafast electron microscopy

    PubMed Central

    Hassan, Mohammed T.; Liu, Haihua; Baskin, John Spencer; Zewail, Ahmed H.

    2015-01-01

    Ultrafast electron microscopy (UEM) is a pivotal tool for imaging of nanoscale structural dynamics with subparticle resolution on the time scale of atomic motion. Photon-induced near-field electron microscopy (PINEM), a key UEM technique, involves the detection of electrons that have gained energy from a femtosecond optical pulse via photon–electron coupling on nanostructures. PINEM has been applied in various fields of study, from materials science to biological imaging, exploiting the unique spatial, energy, and temporal characteristics of the PINEM electrons gained by interaction with a “single” light pulse. The further potential of photon-gated PINEM electrons in probing ultrafast dynamics of matter and the optical gating of electrons by invoking a “second” optical pulse has previously been proposed and examined theoretically in our group. Here, we experimentally demonstrate this photon-gating technique, and, through diffraction, visualize the phase transition dynamics in vanadium dioxide nanoparticles. With optical gating of PINEM electrons, imaging temporal resolution was improved by a factor of 3 or better, being limited only by the optical pulse widths. This work enables the combination of the high spatial resolution of electron microscopy and the ultrafast temporal response of the optical pulses, which provides a promising approach to attain the resolution of few femtoseconds and attoseconds in UEM. PMID:26438835

  9. Dual processing of visual rotation for bipedal stance control.

    PubMed

    Day, Brian L; Muller, Timothy; Offord, Joanna; Di Giulio, Irene

    2016-10-01

    When standing, the gain of the body-movement response to a sinusoidally moving visual scene has been shown to get smaller with faster stimuli, possibly through changes in the apportioning of visual flow to self-motion or environment motion. We investigated whether visual-flow speed similarly influences the postural response to a discrete, unidirectional rotation of the visual scene in the frontal plane. Contrary to expectation, the evoked postural response consisted of two sequential components with opposite relationships to visual motion speed. With faster visual rotation the early component became smaller, not through a change in gain but by changes in its temporal structure, while the later component grew larger. We propose that the early component arises from the balance control system minimising apparent self-motion, while the later component stems from the postural system realigning the body with gravity. The source of visual motion is inherently ambiguous such that movement of objects in the environment can evoke self-motion illusions and postural adjustments. Theoretically, the brain can mitigate this problem by combining visual signals with other types of information. A Bayesian model that achieves this was previously proposed and predicts a decreasing gain of postural response with increasing visual motion speed. Here we test this prediction for discrete, unidirectional, full-field visual rotations in the frontal plane of standing subjects. The speed (0.75-48 deg s⁻¹) and direction of visual rotation were pseudo-randomly varied and mediolateral responses were measured from displacements of the trunk and horizontal ground reaction forces. The behaviour evoked by this visual rotation was more complex than has hitherto been reported, consisting broadly of two consecutive components with respective latencies of ∼190 ms and >0.7 s. Both components were sensitive to visual rotation speed, but with diametrically opposite relationships. Thus, the early component decreased with faster visual rotation, while the later component increased. Furthermore, the decrease in size of the early component was not achieved by a simple attenuation of gain, but by a change in its temporal structure. We conclude that the two components represent expressions of different motor functions, both pertinent to the control of bipedal stance. We propose that the early response stems from the balance control system attempting to minimise unintended body motion, while the later response arises from the postural control system attempting to align the body with gravity. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  10. Pitting temporal against spatial integration in schizophrenic patients.

    PubMed

    Herzog, Michael H; Brand, Andreas

    2009-06-30

    Schizophrenic patients show strong impairments in visual backward masking possibly caused by deficits on the early stages of visual processing. The underlying aberrant mechanisms are not clearly understood. Spatial as well as temporal processing deficits have been proposed. Here, by combining a spatial with a temporal integration paradigm, we show further evidence that temporal but not spatial processing is impaired in schizophrenic patients. Eleven schizophrenic patients and ten healthy controls were presented with sequences composed of Vernier stimuli. Patients needed significantly longer presentation times for sequentially presented Vernier stimuli to reach a performance level comparable to that of healthy controls (temporal integration deficit). When we added spatial contextual elements to some of the Vernier stimuli, performance changed in a complex but comparable manner in patients and controls (intact spatial integration). Hence, temporal but not spatial processing seems to be deficient in schizophrenia.

  11. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777
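
    A minimal sketch of a trace learning rule of the kind VisNet can employ: the Hebbian update pairs the current input with a decaying short-term trace of postsynaptic activity, so that temporally adjacent views of a transforming object come to drive the same output neurons. The learning rate, trace constant, and linear activation are illustrative simplifications.

        import numpy as np

        def trace_update(w, x, y, y_trace, eta=0.05, lam=0.8):
            y_trace = (1 - lam) * y + lam * y_trace        # exponentially decaying activity trace
            w = w + eta * np.outer(y_trace, x)             # Hebbian update using the trace
            w /= np.linalg.norm(w, axis=1, keepdims=True)  # keep weight vectors bounded
            return w, y_trace

        rng = np.random.default_rng(3)
        w = rng.random((10, 64))                 # 10 output neurons, 64 input features
        y_trace = np.zeros(10)
        for x in rng.random((5, 64)):            # a short sequence of transforming views
            y = w @ x                            # linear activation for simplicity
            w, y_trace = trace_update(w, x, y, y_trace)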

  12. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time (which enhances the coarser and slower features of the scene at the expense of noisier, finer and faster features) has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    PubMed

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  14. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.

  15. Auditory/visual Duration Bisection in Patients with Left or Right Medial-Temporal Lobe Resection

    ERIC Educational Resources Information Center

    Melgire, Manuela; Ragot, Richard; Samson, Severine; Penney, Trevor B.; Meck, Warren H.; Pouthas, Viviane

    2005-01-01

    Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s, in the same test session. Patients and NC participants judged…

  16. Visual Object Detection, Categorization, and Identification Tasks Are Associated with Different Time Courses and Sensitivities

    ERIC Educational Resources Information Center

    de la Rosa, Stephan; Choudhery, Rabia N.; Chatziastros, Astros

    2011-01-01

    Recent evidence suggests that the recognition of an object's presence and its explicit recognition are temporally closely related. Here we re-examined the time course (using a fine and a coarse temporal resolution) and the sensitivity of three possible component processes of visual object recognition. In particular, participants saw briefly…

  17. Out of sight but not out of mind: the neurophysiology of iconic memory in the superior temporal sulcus.

    PubMed

    Keysers, C; Xiao, D-K; Foldiak, P; Perrett, D I

    2005-05-01

    Iconic memory, the short-lasting visual memory of a briefly flashed stimulus, is an important component of most models of visual perception. Here we investigate what physiological mechanisms underlie this capacity by showing rapid serial visual presentation (RSVP) sequences with and without interstimulus gaps to human observers and macaque monkeys. For gaps of up to 93 ms between consecutive images, human observers and neurones in the temporal cortex of macaque monkeys were found to continue processing a stimulus as if it was still present on the screen. The continued firing of neurones in temporal cortex may therefore underlie iconic memory. Based on these findings, a neurophysiological vision of iconic memory is presented.

  18. Visual and Experiential Learning Opportunities through Geospatial Data

    NASA Astrophysics Data System (ADS)

    Gardiner, N.; Bulletins, S.

    2007-12-01

    Global observation data from satellites are essential for both research and education about Earth's climate because they help convey the temporal and spatial scales inherent to the subject, which are beyond most people's experience. Experts in the development of visualizations using spatial data distinguish the process of learning through data exploration from the process of learning by absorbing a story told from beginning to end. The former requires the viewer to absorb complex spatial and temporal dynamics inherent to visualized data and therefore is a process best undertaken by those familiar with the data and processes represented. The latter requires that the viewer understand the intended presentation of concepts, so story telling can be employed to educate viewers with varying backgrounds and familiarity with a given subject. Three examples of climate science education, drawn from the current science program Science Bulletins (American Museum of Natural History, New York, USA), demonstrate the power of visualized global earth observations for climate science education. The first example seeks to explain the potential for sea level rise on a global basis. A short feature film includes the visualized, projected effects of sea level rise at local to global scales; this visualization complements laboratory and field observations of glacier retreat and paleoclimatic reconstructions based on fossilized coral reef analysis, each of which is also depicted in the film. The narrative structure keeps learners focused on discrete scientific concepts. The second example utilizes half-hourly cloud observations to demonstrate weather and climate patterns to audiences on a global basis. Here, the scientific messages are qualitatively simpler, but the viewer must deduce his own complex visual understanding of the visualized data. Finally, we present plans for distributing climate science education products via mediated public events whereby participants learn from climate and geovisualization experts working collaboratively. This last example provides an opportunity for deep exploration of patterns and processes in a live setting and makes full use of complementary talents, including computer science, internet-enabled data sharing, remote sensing image processing, and meteorology. These innovative examples from informal educators serve as powerful pedagogical models to consider for the classroom of the future.

  19. Occipital cortical thickness in very low birth weight born adolescents predicts altered neural specialization of visual semantic category related neural networks.

    PubMed

    Klaver, Peter; Latal, Beatrice; Martin, Ernst

    2015-01-01

    Very low birth weight (VLBW) prematurely born infants are at high risk of developing visual perceptual and learning deficits, as well as widespread functional and structural brain abnormalities during infancy and childhood. Whether and how prematurity alters neural specialization within visual neural networks is still unknown. We used functional and structural brain imaging to examine the visual semantic system of VLBW born (<1250 g, gestational age 25-32 weeks) adolescents (13-15 years, n = 11, 3 males) and matched term born control participants (13-15 years, n = 11, 3 males). Neurocognitive assessment revealed no group differences except for lower scores on an adaptive visuomotor integration test. All adolescents were scanned while viewing pictures of animals and tools and scrambled versions of these pictures. Both groups demonstrated animal and tool category related neural networks. Term born adolescents showed tool category related neural activity, i.e. tool pictures elicited more activity than animal pictures, in temporal and parietal brain areas. Animal category related activity was found in the occipital, temporal and frontal cortex. VLBW born adolescents showed reduced tool category related activity in the dorsal visual stream compared with controls, specifically the left anterior intraparietal sulcus, and enhanced animal category related activity in the left middle occipital gyrus and right lingual gyrus. Lower birth weight of VLBW adolescents correlated with larger thickness of the pericalcarine gyrus in the occipital cortex and smaller surface area of the superior temporal gyrus in the lateral temporal cortex. Moreover, larger thickness of the pericalcarine gyrus and smaller surface area of the superior temporal gyrus correlated with reduced tool category related activity in the parietal cortex. Together, our data suggest that very low birth weight predicts alterations of higher order visual semantic networks, particularly in the dorsal stream. The differences in neural specialization may be associated with aberrant cortical development of areas in the visual system that develop early in childhood. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Audiovisual speech integration in the superior temporal region is dysfunctional in dyslexia.

    PubMed

    Ye, Zheng; Rüsseler, Jascha; Gerth, Ivonne; Münte, Thomas F

    2017-07-25

    Dyslexia is an impairment of reading and spelling that affects both children and adults even after many years of schooling. Dyslexic readers have deficits in the integration of auditory and visual inputs, but the neural mechanisms of these deficits are still unclear. This fMRI study examined the neural processing of auditorily presented German numbers 0-9 and videos of lip movements of a German native speaker voicing numbers 0-9 in unimodal (auditory or visual) and bimodal (always congruent) conditions in dyslexic readers and their matched fluent readers. We confirmed results of previous studies that the superior temporal gyrus/sulcus plays a critical role in audiovisual speech integration: fluent readers showed greater superior temporal activations for combined audiovisual stimuli than auditory-/visual-only stimuli. Importantly, such an enhancement effect was absent in dyslexic readers. Moreover, the auditory network (bilateral superior temporal regions plus medial PFC) was dynamically modulated during audiovisual integration in fluent, but not in dyslexic readers. These results suggest that superior temporal dysfunction may underlie poor audiovisual speech integration in readers with dyslexia. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Navigational strategies underlying phototaxis in larval zebrafish

    PubMed Central

    Chen, Xiuye; Engert, Florian

    2014-01-01

    Understanding how the brain transforms sensory input into complex behavior is a fundamental question in systems neuroscience. Using larval zebrafish, we study the temporal component of phototaxis, which is defined as orientation decisions based on comparisons of light intensity at successive moments in time. We developed a novel “Virtual Circle” assay where whole-field illumination is abruptly turned off when the fish swims out of a virtually defined circular border, and turned on again when it returns into the circle. The animal receives no direct spatial cues and experiences only whole-field temporal light changes. Remarkably, the fish spends most of its time within the invisible virtual border. Behavioral analyses of swim bouts in relation to light transitions were used to develop four discrete temporal algorithms that transform the binary visual input (uniform light/uniform darkness) into the observed spatial behavior. In these algorithms, the turning angle is dependent on the behavioral history immediately preceding individual turning events. Computer simulations show that the algorithms recapture most of the swim statistics of real fish. We discovered that turning properties in larval zebrafish are distinctly modulated by temporal step functions in light intensity in combination with the specific motor history preceding these turns. Several aspects of the behavior suggest memory usage of up to 10 swim bouts (~10 sec). Thus, we show that a complex behavior like spatial navigation can emerge from a small number of relatively simple behavioral algorithms. PMID:24723859
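
    The following toy simulation is a hedged sketch of the "virtual circle" logic, not the paper's fitted algorithms: a simulated swimmer makes discrete bouts, and darkness outside an invisible border triggers larger, history-dependent turns. All parameter values are assumptions chosen for illustration.

```python
# Toy simulation of the "virtual circle" idea (not the paper's fitted algorithms):
# a swimmer makes discrete bouts; darkness outside the virtual border triggers
# larger, history-dependent turns, which keeps it near the circle.
import numpy as np

rng = np.random.default_rng(1)
radius, n_bouts, bout_len = 10.0, 5000, 0.5
pos = np.zeros(2)
heading, prev_turn, inside_count = 0.0, 0.0, 0

for _ in range(n_bouts):
    light_on = np.hypot(*pos) < radius           # whole-field light as binary input
    if light_on:
        turn = rng.normal(0.0, 0.2)              # light: mostly straight swims
    else:
        # darkness: large turn biased toward the direction of the previous turn
        turn = np.sign(prev_turn or rng.choice([-1, 1])) * abs(rng.normal(1.0, 0.4))
    heading += turn
    prev_turn = turn
    pos += bout_len * np.array([np.cos(heading), np.sin(heading)])
    inside_count += np.hypot(*pos) < radius

print(f"fraction of bouts ending inside the virtual circle: {inside_count / n_bouts:.2f}")
```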

  2. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation, which occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system, which can be used as a 'mapping device' and also as a 'multi-channel spectrophotometer'. In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating the left and right primary visual cortex independently, by showing sector-shaped checkerboards sequentially over the contralateral visual field, resulted in corresponding changes in the hemodynamics observed by 'mapping' measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation with each other. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that the optical method can provide data for 3D analysis of human brain functions.

  3. The role of temporal synchrony as a binding cue for visual persistence in early visual areas: an fMRI study.

    PubMed

    Wong, Yvonne J; Aldcroft, Adrian J; Large, Mary-Ellen; Culham, Jody C; Vilis, Tutis

    2009-12-01

    We examined the role of temporal synchrony-the simultaneous appearance of visual features-in the perceptual and neural processes underlying object persistence. When a binding cue (such as color or motion) momentarily exposes an object from a background of similar elements, viewers remain aware of the object for several seconds before it perceptually fades into the background, a phenomenon known as object persistence. We showed that persistence from temporal stimulus synchrony, like that arising from motion and color, is associated with activation in the lateral occipital (LO) area, as measured by functional magnetic resonance imaging. We also compared the distribution of occipital cortex activity related to persistence to that of iconic visual memory. Although activation related to iconic memory was largely confined to LO, activation related to object persistence was present across V1 to LO, peaking in V3 and V4, regardless of the binding cue (temporal synchrony, motion, or color). Although persistence from motion cues was not associated with higher activation in the MT+ motion complex, persistence from color cues was associated with increased activation in V4. Taken together, these results demonstrate that although persistence is a form of visual memory, it relies on neural mechanisms different from those of iconic memory. That is, persistence not only activates LO in a cue-independent manner, it also recruits visual areas that may be necessary to maintain binding between object elements.

  4. Modulation of Temporal Precision in Thalamic Population Responses to Natural Visual Stimuli

    PubMed Central

    Desbordes, Gaëlle; Jin, Jianzhong; Alonso, Jose-Manuel; Stanley, Garrett B.

    2010-01-01

    Natural visual stimuli have highly structured spatial and temporal properties which influence the way visual information is encoded in the visual pathway. In response to natural scene stimuli, neurons in the lateral geniculate nucleus (LGN) are temporally precise – on a time scale of 10–25 ms – both within single cells and across cells within a population. This time scale, established by non stimulus-driven elements of neuronal firing, is significantly shorter than that of natural scenes, yet is critical for the neural representation of the spatial and temporal structure of the scene. Here, a generalized linear model (GLM) that combines stimulus-driven elements with spike-history dependence associated with intrinsic cellular dynamics is shown to predict the fine timing precision of LGN responses to natural scene stimuli, the corresponding correlation structure across nearby neurons in the population, and the continuous modulation of spike timing precision and latency across neurons. A single model captured the experimentally observed neural response, across different levels of contrasts and different classes of visual stimuli, through interactions between the stimulus correlation structure and the nonlinearity in spike generation and spike history dependence. Given the sensitivity of the thalamocortical synapse to closely timed spikes and the importance of fine timing precision for the faithful representation of natural scenes, the modulation of thalamic population timing over these time scales is likely important for cortical representations of the dynamic natural visual environment. PMID:21151356
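
    A minimal sketch of a Poisson GLM that combines a stimulus filter with a spike-history filter, in the spirit of the model described above; it uses synthetic data and scikit-learn's PoissonRegressor rather than the authors' code, and all filter shapes are assumptions.

```python
# Minimal sketch of a Poisson GLM with a stimulus filter and a spike-history
# filter (synthetic data, not LGN recordings).
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
T, k_len, h_len = 20000, 15, 10

stim = rng.normal(size=T)
k_true = 0.8 * np.exp(-np.arange(k_len) / 4.0)       # stimulus filter (lags 0..14)
h_true = -1.5 * np.exp(-np.arange(h_len) / 3.0)      # suppressive history (lags 1..10)
bias = -2.0

# Sequential simulation: rate depends on recent stimulus and recent spikes.
spikes = np.zeros(T)
for t in range(T):
    s = stim[max(0, t - k_len + 1):t + 1][::-1]      # stimulus at lags 0, 1, ...
    h = spikes[max(0, t - h_len):t][::-1]            # spikes at lags 1, 2, ...
    rate = np.exp(bias + k_true[:len(s)] @ s + h_true[:len(h)] @ h)
    spikes[t] = rng.poisson(min(rate, 5.0))          # cap to keep counts sane

# Design matrix of lagged stimulus and lagged spike history, then fit the GLM.
def lagged(x, n_lags, start):
    return np.column_stack([np.r_[np.zeros(l), x[:T - l]] for l in range(start, start + n_lags)])

X = np.hstack([lagged(stim, k_len, 0), lagged(spikes, h_len, 1)])
glm = PoissonRegressor(alpha=1e-4, max_iter=500).fit(X, spikes)
k_hat = glm.coef_[:k_len]
print("stimulus-filter recovery (correlation):", np.corrcoef(k_true, k_hat)[0, 1].round(2))
```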

  5. Multiple pathways carry signals from short-wavelength-sensitive ('blue') cones to the middle temporal area of the macaque.

    PubMed

    Jayakumar, Jaikishan; Roy, Sujata; Dreher, Bogdan; Martin, Paul R; Vidyasagar, Trichur R

    2013-01-01

    We recorded spike activity of single neurones in the middle temporal visual cortical area (MT or V5) of anaesthetised macaque monkeys. We used flashing, stationary spatially circumscribed, cone-isolating and luminance-modulated stimuli of uniform fields to assess the effects of signals originating from the long-, medium- or short- (S) wavelength-sensitive cone classes. Nearly half (41/86) of the tested MT neurones responded reliably to S-cone-isolating stimuli. Response amplitude in the majority of the neurones tested further (19/28) was significantly reduced, though not always completely abolished, during reversible inactivation of visuotopically corresponding regions of the ipsilateral primary visual cortex (striate cortex, area V1). Thus, the present data indicate that signals originating in S-cones reach area MT, either via V1 or via a pathway that does not go through area V1. We did not find a significant difference between the mean latencies of spike responses of MT neurones to signals that bypass V1 and those that do not; the considerable overlap we observed precludes the use of spike-response latency as a criterion to define the routes through which the signals reach MT.

  6. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    PubMed

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN better predicted cortical responses to natural movie stimuli than the CNN at all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
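
    The sketch below shows one way such a recurrent extension of a CNN could look, here as a small PyTorch module with a GRU running over per-frame CNN features; the architecture and layer sizes are assumptions for illustration, not the authors' network.

```python
# Minimal sketch (assumed architecture, not the authors' network): a small CNN
# extracts per-frame features and a GRU accumulates them over time, so spatial
# representations are remembered across video frames.
import torch
import torch.nn as nn

class RecurrentCNN(nn.Module):
    def __init__(self, n_classes=10, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                        # per-frame spatial features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(32, hidden, batch_first=True)  # temporal accumulation
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, video):                            # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                         # hidden state carries history
        return self.head(out[:, -1])                     # classify from the last step

clip = torch.randn(2, 16, 3, 64, 64)                     # two 16-frame clips
print(RecurrentCNN()(clip).shape)                        # torch.Size([2, 10])
```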

  8. Ultra-high-field (9.4 T) MRI Analysis of Contrast Agent Transport Across the Blood-Perilymph Barrier and Intrastrial Fluid-Blood Barrier in the Mouse Inner Ear.

    PubMed

    Counter, S Allen; Nikkhou-Aski, Sahar; Damberg, Peter; Berglin, Cecilia Engmér; Laurell, Göran

    2017-08-01

    The most effective paramagnetic contrast agent for penetration of the perilymphatic spaces of the scala tympani, scala vestibuli, and scala media of the mouse inner ear can be determined using intravenous injection of various gadolinium (Gd) complexes and ultra-high-field magnetic resonance imaging (MRI) at 9.4 Tesla. A number of contrast agents have been explored in experimental high-field MRI to determine the most effective Gd complex for ideal signal-to-noise ratio and maximal visualization of the in vivo mammalian inner ear when analyzing the temporal and spatial parameters involved in drug penetration of the blood-perilymph barrier and intrastrial fluid-blood barrier in the mouse model using MRI. Gadoteric acid (Dotarem), Gadobutrol (Gadovist), Gadodiamide (Omniscan), Gadopentetic acid (Magnevist), and Mangafodipir (Teslascan) were administered intravenously using the tail vein of 60 Balb/C mice. High-resolution T1 images of drug penetration were acquired with a horizontal 9.4 T Agilent magnet after intravenous injection. Signal intensity was used as a metric of the temporal and spatial parameters of drug delivery and penetration of the perilymphatic and endolymphatic spaces. ANOVA of the area under the curve of intensity enhancement in perilymph revealed a significant difference (p < 0.05) in scalae uptake between the different contrast agents (F (3,25) = 3.54, p = 0.029). The Gadoteric acid complex Dotarem was found to be the most effective Gd compound in terms of rapid morphological enhancement for analysis of the temporal and spatial distribution in the perilymphatic space of the inner ear. Gadoteric acid (Dotarem) demonstrated efficacy as a contrast agent for enhanced visualization of the perilymphatic spaces of the inner ear labyrinth in the mouse, including the scala tympani and scala vestibuli of the cochlea, and the semicircular canals of the vestibular apparatus. These findings may inform the clinical application of Gd compounds in patients with inner ear fluid disorders and vertigo.

  9. Elevated audiovisual temporal interaction in patients with migraine without aura

    PubMed Central

    2014-01-01

    Background Photophobia and phonophobia are the most prominent symptoms in patients with migraine without aura. Hypersensitivity to visual stimuli can lead to greater hypersensitivity to auditory stimuli, which suggests that the interaction between visual and auditory stimuli may play an important role in the pathogenesis of migraine. However, audiovisual temporal interactions in migraine have not been well studied. Therefore, our aim was to examine auditory and visual interactions in migraine. Methods In this study, visual, auditory, and audiovisual stimuli with different temporal intervals between the visual and auditory stimuli were randomly presented to the left or right hemispace. During this time, the participants were asked to respond promptly to target stimuli. We used cumulative distribution functions to analyze the response times as a measure of audiovisual integration. Results Our results showed that audiovisual integration was significantly elevated in the migraineurs compared with the normal controls (p < 0.05); however, audiovisual suppression was weaker in the migraineurs compared with the normal controls (p < 0.05). Conclusions Our findings further objectively support the notion that migraineurs without aura are hypersensitive to external visual and auditory stimuli. Our study offers a new quantitative and objective method to evaluate hypersensitivity to audio-visual stimuli in patients with migraine. PMID:24961903
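
    For readers unfamiliar with the CDF-based analysis, the following sketch applies the standard race-model (Miller) inequality test to synthetic reaction times; it illustrates the general method, not the patient data or the exact analysis of this study.

```python
# Sketch of the standard race-model (Miller) test used to quantify audiovisual
# integration from reaction-time CDFs (synthetic RTs, not the patient data).
import numpy as np

rng = np.random.default_rng(3)

rt_a  = rng.normal(320, 40, 500)           # auditory-only RTs (ms)
rt_v  = rng.normal(340, 45, 500)           # visual-only RTs (ms)
rt_av = rng.normal(280, 35, 500)           # audiovisual RTs: faster than either

t = np.linspace(150, 500, 71)              # probe times for the CDFs

def cdf(rt, t):
    return np.array([(rt <= x).mean() for x in t])

# Race-model bound: P_av(T<=t) should not exceed P_a(T<=t) + P_v(T<=t)
# if redundant targets were processed by independent channels.
violation = cdf(rt_av, t) - np.minimum(cdf(rt_a, t) + cdf(rt_v, t), 1.0)
print("max race-model violation:", violation.max().round(3))  # > 0 implies integration
```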

  10. Lateralized Temporal Order Judgement in Dyslexia

    ERIC Educational Resources Information Center

    Liddle, Elizabeth B.; Jackson, Georgina M.; Rorden, Chris; Jackson, Stephen R.

    2009-01-01

    Temporal and spatial attentional deficits in dyslexia were investigated using a lateralized visual temporal order judgment (TOJ) paradigm that allowed both sensitivity to temporal order and spatial attentional bias to be measured. Findings indicate that adult participants with a positive screen for dyslexia were significantly less sensitive to the…

  11. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.

  12. Differential temporal dynamics during visual imagery and perception.

    PubMed

    Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj

    2018-05-29

    Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.

  13. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    PubMed Central

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  14. Linguistic processing in visual and modality-nonspecific brain areas: PET recordings during selective attention.

    PubMed

    Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto

    2004-07-01

    Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.

  15. The edge of awareness: Mask spatial density, but not color, determines optimal temporal frequency for continuous flash suppression.

    PubMed

    Drewes, Jan; Zhu, Weina; Melcher, David

    2018-01-01

    The study of how visual processing functions in the absence of visual awareness has become a major research interest in the vision-science community. One of the main sources of evidence that stimuli that do not reach conscious awareness--and are thus "invisible"--are still processed to some degree by the visual system comes from studies using continuous flash suppression (CFS). Why and how CFS works may provide more general insight into how stimuli access awareness. As spatial and temporal properties of stimuli are major determinants of visual perception, we hypothesized that these properties of the CFS masks would be of significant importance to the achieved suppression depth. In previous studies, however, the spatial and temporal properties of the masks themselves have received little study, and masking parameters vary widely across studies, making a metacomparison difficult. To investigate the factors that determine the effectiveness of CFS, we varied both the temporal frequency and the spatial density of Mondrian-style masks. We consistently found the longest suppression duration for a mask temporal frequency of around 6 Hz. In trials using masks with reduced spatial density, suppression was weaker and frequency tuning was less precise. In contrast, removing color reduced mask effectiveness but did not change the pattern of suppression strength as a function of frequency. Overall, this pattern of results stresses the importance of CFS mask parameters and is consistent with the idea that CFS works by disrupting the spatiotemporal mechanisms that underlie conscious access to visual input.
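
    As a rough illustration of the manipulated mask parameters, the sketch below generates Mondrian-style masks whose spatial density is set by the number of rectangles per frame and relates a chosen flash frequency to monitor frames per mask; it is not the authors' stimulus code, and all sizes are assumptions.

```python
# Illustrative sketch (not the authors' stimulus code): generate Mondrian-style
# masks whose spatial density is the number of rectangles per frame, and compute
# how many monitor frames each mask occupies at a given flash frequency.
import numpy as np

rng = np.random.default_rng(4)

def mondrian_mask(size=256, n_rects=150, grayscale=False):
    """One CFS mask: n_rects controls spatial density, grayscale removes color."""
    img = np.ones((size, size, 3))
    for _ in range(n_rects):
        x, y = rng.integers(0, size, 2)
        w, h = rng.integers(size // 16, size // 4, 2)
        color = rng.random(3) if not grayscale else np.repeat(rng.random(), 3)
        img[y:y + h, x:x + w] = color
    return img

flash_hz, refresh_hz, trial_s = 6, 60, 3
frames_per_mask = refresh_hz // flash_hz               # e.g. 10 monitor frames per mask
masks = [mondrian_mask(n_rects=150) for _ in range(flash_hz * trial_s)]
print(len(masks), "masks at", flash_hz, "Hz; each shown for", frames_per_mask, "frames")
```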

  16. Peripheral defocus does not necessarily affect central refractive development.

    PubMed

    Schippert, Ruth; Schaeffel, Frank

    2006-10-01

    Recent experiments in monkeys suggest that deprivation, imposed only in the periphery of the visual field, can induce foveal myopia. This raises the hypothesis that peripheral refractive errors imposed by the spectacle lens correction could influence foveal refractive development also in humans. We have tested this hypothesis in chicks. Chicks wore either full field spectacle lenses (+6.9 D/-7 D), or lenses with central holes of 4, 6, or 8 mm diameter, for 4 days (n=6 for each group). Refractions were measured in the central visual field, and at -45 degrees (temporal) and +45 degrees (nasal), and axial lengths were measured by A-scan ultrasonography. As previously described, full field lenses were largely compensated within 4 days (refraction changes with positive lenses: +4.69+/-1.73 D, negative lenses: -5.98+/-1.78 D, both p<0.001, Dunnett's test, to untreated controls). With holes in the center of the lenses, the central refraction remained emmetropic and there was not even a trend of a shift in refraction (all groups: p>0.5, Dunnett's test). At +/-45 degrees, the lenses were partially compensated despite the 4/6/8 mm central holes; positive lenses: +2.63 / +1.44 / +0.43 D, negative lenses: -2.57 / -1.06 / +0.06 D. There is extensive local compensation of imposed refractive errors in chickens. For the tested hole sizes, peripherally imposed defocus did not influence central refractive development. To alter central refractive development, the unobstructed part of the central visual field may have to be quite small (hole sizes smaller than 4 mm, with the lenses at a vertex distance of 2-3 mm).

  17. Perceptual training yields rapid improvements in visually impaired youth

    PubMed Central

    Nyquist, Jeffrey B.; Lappin, Joseph S.; Zhang, Ruyuan; Tadin, Duje

    2016-01-01

    Visual function demands coordinated responses to information over a wide field of view, involving both central and peripheral vision. Visually impaired individuals often seem to underutilize peripheral vision, even in absence of obvious peripheral deficits. Motivated by perceptual training studies with typically sighted adults, we examined the effectiveness of perceptual training in improving peripheral perception of visually impaired youth. Here, we evaluated the effectiveness of three training regimens: (1) an action video game, (2) a psychophysical task that combined attentional tracking with a spatially and temporally unpredictable motion discrimination task, and (3) a control video game. Training with both the action video game and modified attentional tracking yielded improvements in visual performance. Training effects were generally larger in the far periphery and appear to be stable 12 months after training. These results indicate that peripheral perception might be under-utilized by visually impaired youth and that this underutilization can be improved with only ~8 hours of perceptual training. Moreover, the similarity of improvements following attentional tracking and action video-game training suggest that well-documented effects of action video-game training might be due to the sustained deployment of attention to multiple dynamic targets while concurrently requiring rapid attending and perception of unpredictable events. PMID:27901026

  18. Neural mechanisms of understanding rational actions: middle temporal gyrus activation by contextual violation.

    PubMed

    Jastorff, Jan; Clavagnier, Simon; Gergely, György; Orban, Guy A

    2011-02-01

    Performing goal-directed actions toward an object in accordance with contextual constraints, such as the presence or absence of an obstacle, has been widely used as a paradigm for assessing the capacity of infants or nonhuman primates to evaluate the rationality of others' actions. Here, we have used this paradigm in a functional magnetic resonance imaging experiment to visualize the cortical regions involved in the assessment of action rationality while controlling for visual differences in the displays and directly correlating magnetic resonance activity with rationality ratings. Bilateral middle temporal gyrus (MTG) regions, anterior to extrastriate body area and the human middle temporal complex, were involved in the visual evaluation of action rationality. These MTG regions are embedded in the superior temporal sulcus regions processing the kinematics of observed actions. Our results suggest that rationality is assessed initially by purely visual computations, combining the kinematics of the action with the physical constraints of the environmental context. The MTG region seems to be sensitive to the contingent relationship between a goal-directed biological action and its relevant environmental constraints, showing increased activity when the expected pattern of rational goal attainment is violated.

  19. Supporting Children in Mastering Temporal Relations of Stories: The TERENCE Learning Approach

    ERIC Educational Resources Information Center

    Di Mascio, Tania; Gennari, Rosella; Melonio, Alessandra; Tarantino, Laura

    2016-01-01

    Though temporal reasoning is a key factor for text comprehension, existing proposals for visualizing temporal information and temporal connectives proves to be inadequate for children, not only for their levels of abstraction and detail, but also because they rely on pre-existing mental models of time and temporal connectives, while in the case of…

  20. Video quality assessment using motion-compensated temporal filtering and manifold feature similarity

    PubMed Central

    Yu, Mei; Jiang, Gangyi; Shao, Feng; Peng, Zongju

    2017-01-01

    A well-performing video quality assessment (VQA) method should be consistent with the human visual system for better prediction accuracy. In this paper, we propose a VQA method using motion-compensated temporal filtering (MCTF) and manifold feature similarity. To be more specific, a group of frames (GoF) is first decomposed into a temporal high-pass component (HPC) and a temporal low-pass component (LPC) by MCTF. Following this, manifold feature learning (MFL) and phase congruency (PC) are used to predict the quality of the temporal LPC and the temporal HPC, respectively. The quality measures of the LPC and the HPC are then combined as the GoF quality. A temporal pooling strategy is subsequently used to integrate GoF qualities into an overall video quality. The proposed VQA method appropriately processes temporal information in video through MCTF and the temporal pooling strategy, and simulates human visual perception through MFL. Experiments on a publicly available video quality database showed that, in comparison with several state-of-the-art VQA methods, the proposed VQA method achieves better consistency with subjective video quality and can predict video quality more accurately. PMID:28445489
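
    A schematic sketch of the scoring pipeline described above: per-GoF component qualities are combined and then temporally pooled. The combination weight and the percentile-pooling rule are assumptions for illustration, not the paper's exact formulas.

```python
# Schematic sketch of the scoring pipeline; the combination weight and the
# pooling rule are assumptions, not the paper's exact formulation.
import numpy as np

def gof_quality(q_lpc, q_hpc, w=0.7):
    """Combine low-pass (structure) and high-pass (motion) component scores."""
    return w * q_lpc + (1.0 - w) * q_hpc

def temporal_pooling(gof_scores, worst_fraction=0.3):
    """Percentile pooling: emphasize the worst-quality groups of frames,
    since viewers tend to weight poor episodes heavily."""
    scores = np.sort(np.asarray(gof_scores))
    k = max(1, int(len(scores) * worst_fraction))
    return scores[:k].mean()

# Per-GoF scores for a 10-group video (e.g. from MFL on the LPC and PC on the HPC).
q_lpc = np.array([0.92, 0.90, 0.88, 0.60, 0.55, 0.87, 0.91, 0.89, 0.90, 0.93])
q_hpc = np.array([0.85, 0.84, 0.80, 0.50, 0.45, 0.82, 0.86, 0.83, 0.84, 0.88])

gof = gof_quality(q_lpc, q_hpc)
print("overall video quality:", round(float(temporal_pooling(gof)), 3))
```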

  1. Solar physics applications of computer graphics and image processing

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.

    1985-01-01

    Computer graphics devices coupled with computers and carefully developed software provide new opportunities to achieve insight into the geometry and time evolution of scalar, vector, and tensor fields and to extract more information quickly and cheaply from the same image data. Two or more different fields which overlay in space can be calculated from the data (and the physics), then displayed from any perspective, and compared visually. The maximum regions of one field can be compared with the gradients of another. Time changing fields can also be compared. Images can be added, subtracted, transformed, noise filtered, frequency filtered, contrast enhanced, color coded, enlarged, compressed, parameterized, and histogrammed, in whole or section by section. Today it is possible to process multiple digital images to reveal spatial and temporal correlations and cross correlations. Data from different observatories taken at different times can be processed, interpolated, and transformed to a common coordinate system.
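
    A small sketch of the kinds of pixel-wise operations listed above (difference imaging, noise filtering, contrast stretching, histogramming), applied to synthetic image data rather than solar observations.

```python
# Sketch of basic image operations of the kind listed above, on synthetic data.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
frame_t0 = rng.random((128, 128))
frame_t1 = frame_t0 + 0.05 * rng.standard_normal((128, 128))  # time-changing field

diff = frame_t1 - frame_t0                                     # subtract: reveal changes
smoothed = uniform_filter(frame_t1, size=3)                    # simple noise filter

lo, hi = np.percentile(smoothed, [2, 98])                      # contrast enhancement
stretched = np.clip((smoothed - lo) / (hi - lo), 0, 1)

counts, edges = np.histogram(stretched, bins=32)               # histogram the image
print("mean absolute frame difference:", float(np.abs(diff).mean()))
print("most populated histogram bin:", int(counts.argmax()))
```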

  2. A GIS-based Upscaling Estimation of Nutrient Runoff Losses from Rice Paddy Fields to a Regional Level.

    PubMed

    Sun, Xiaoxiao; Liang, Xinqiang; Zhang, Feng; Fu, Chaodong

    2016-11-01

    Nutrient runoff losses from cropping fields can lead to nonpoint source pollution; however, the level of nutrient export is difficult to evaluate, particularly at the regional scale. This study aimed to establish a novel yet simple approach for estimating total nitrogen (TN) and total phosphorus (TP) runoff losses from regional paddy fields. In this approach, temporal changes of nutrient concentrations in floodwater were coupled with runoff-processing functions in rice (Oryza sativa L.) fields to calculate nutrient runoff losses for three site-specific field experiments. Validation experiments verified the accuracy of this method. The geographic information system technique was used to upscale and visualize the TN and TP runoff losses from field to regional scales. The results indicated that nutrient runoff losses had significant spatio-temporal variation characteristics during rice seasons, which were positively related to fertilizer rate and precipitation. The average runoff losses over five study seasons were 20.21 kg N ha(-1) for TN and 0.76 kg P ha(-1) for TP. Scenario analysis showed that TN and TP losses dropped by 7.64 and 3.0%, respectively, for each 10% reduction of fertilizer input. For alternate wetting and drying water management, the corresponding reduction ratios were 24.7 and 14.0%, respectively. Our results suggest that, although both water and fertilizer management can mitigate nutrient runoff losses, the former is significantly more effective. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
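
    As a hedged illustration of the load-calculation idea (not the study's data or functions), the sketch below sums event loads computed as floodwater concentration times runoff depth and applies a simple, assumed proportional scaling for the fertilizer-reduction scenario.

```python
# Toy field-scale load calculation in the spirit of the approach above:
# event load = floodwater concentration x runoff depth, summed over the season.
# All numbers are illustrative, and proportional scaling is a simplification.
import numpy as np

conc_tn = np.array([12.0, 8.5, 5.0, 3.2, 2.1])       # mg N/L in floodwater per event
runoff_mm = np.array([25.0, 40.0, 15.0, 30.0, 20.0])  # runoff depth per event (mm)

# 1 mm of runoff over 1 ha = 10 m^3 of water; mg/L = g/m^3, so the load in kg/ha:
load_kg_ha = (conc_tn * runoff_mm * 10 / 1000).sum()
print(f"seasonal TN runoff loss: {load_kg_ha:.2f} kg N/ha")

# Simple scenario: concentrations assumed to scale with fertilizer input.
for cut in (0.1, 0.2, 0.3):
    scenario = (conc_tn * (1 - cut) * runoff_mm * 10 / 1000).sum()
    print(f"{int(cut * 100)}% less fertilizer -> {scenario:.2f} kg N/ha")
```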

  3. Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    PubMed Central

    Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432

  4. Macular Thickness Assessment in Patients with Glaucoma and Its Correlation with Visual Fields

    PubMed Central

    Vaz, Fernando T; Ramalho, Mário; Pedrosa, Catarina; Lisboa, Maria; Kaku, Paulo; Esperancinha, Florindo

    2016-01-01

    Aim To determine the relationship between macular thickness (MT) and visual field (VF) parameters, as well as with changes in the retinal nerve fiber layer (RNFL) thickness in patients with glaucoma and ocular hypertension (OH). Materials and methods Cross-sectional statistical analysis of spectral domain optical coherence tomography (SD-OCT) compared with several VF parameters (mean defect - MD and loss variance - LV), in a nonrandom sample of 70 eyes from patients with glaucoma or OH. Statistical analysis was performed using Statistical Package for Social Sciences®. The correlation coefficient used was determined by Spearman correlation and the value of p < 0.05 was considered statistically significant. Results A significant correlation was seen between VF parameters and decrease in MT (MD: r = –0.363, p = 0.002; LV: r=–0.378, p = 0.001). The results were more significant when we compared the LV in the group with average MT 270 to 300 μm (r = –0.413, p = 0.015). Asymmetry between the superior macula and inferior macula correlated with LV (r = 0.432, p = 0.019) in the group with MT < 270 μm. There was also a significant correlation between thinning of superior-temporal and inferior-temporal RNFL and the decrease of the superior and inferior MT respectively (p < 0.001). Conclusion Spectral domain optical coherence tomography measurements of retinal thickness in the macula correlate with VF parameters and RNFL parameters in glaucoma patients. This relationship was first demonstrated with static computerized perimetry made with Octopus 101®. These results can be a valuable aid for evaluating and monitoring of glaucoma patients, establishing a correlation between structure and function. Measurements of retinal thickness in the macula may be an additional instrument for early detection of structural changes and its correlation with functional defects. How to cite this article Mota M, Vaz FT, Ramalho M, Pedrosa C, Lisboa M, Kaku P, Esperancinha F. Macular Thickness Assessment in Patients with Glaucoma and Its Correlation with Visual Fields. J Curr Glaucoma Pract 2016;10(3):85-90. PMID:27857487
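
    A minimal sketch of the structure-function correlation reported above, using SciPy's Spearman rank correlation on synthetic values; the simulated relationship between macular thickness and mean defect is an assumption for illustration only.

```python
# Sketch of a structure-function correlation analysis, using SciPy's Spearman
# rank correlation on synthetic (not patient) values.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n = 70
macular_thickness = rng.normal(285, 15, n)                            # um
mean_defect = 15 - 0.04 * macular_thickness + rng.normal(0, 1.5, n)   # dB, worse as MT thins

rho, p = spearmanr(macular_thickness, mean_defect)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")   # expect a negative correlation
```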

  5. In vivo molecular and genomic imaging: new challenges for imaging physics.

    PubMed

    Cherry, Simon R

    2004-02-07

    The emerging and rapidly growing field of molecular and genomic imaging is providing new opportunities to directly visualize the biology of living organisms. By combining our growing knowledge regarding the role of specific genes and proteins in human health and disease, with novel ways to target these entities in a manner that produces an externally detectable signal, it is becoming increasingly possible to visualize and quantify specific biological processes in a non-invasive manner. All the major imaging modalities are contributing to this new field, each with its unique mechanisms for generating contrast and trade-offs in spatial resolution, temporal resolution and sensitivity with respect to the biological process of interest. Much of the development in molecular imaging is currently being carried out in animal models of disease, but as the field matures and with the development of more individualized medicine and the molecular targeting of new therapeutics, clinical translation is inevitable and will likely forever change our approach to diagnostic imaging. This review provides an introduction to the field of molecular imaging for readers who are not experts in the biological sciences and discusses the opportunities to apply a broad range of imaging technologies to better understand the biology of human health and disease. It also provides a brief review of the imaging technology (particularly for x-ray, nuclear and optical imaging) that is being developed to support this new field.

  6. TOPICAL REVIEW: In vivo molecular and genomic imaging: new challenges for imaging physics

    NASA Astrophysics Data System (ADS)

    Cherry, Simon R.

    2004-02-01

    The emerging and rapidly growing field of molecular and genomic imaging is providing new opportunities to directly visualize the biology of living organisms. By combining our growing knowledge regarding the role of specific genes and proteins in human health and disease, with novel ways to target these entities in a manner that produces an externally detectable signal, it is becoming increasingly possible to visualize and quantify specific biological processes in a non-invasive manner. All the major imaging modalities are contributing to this new field, each with its unique mechanisms for generating contrast and trade-offs in spatial resolution, temporal resolution and sensitivity with respect to the biological process of interest. Much of the development in molecular imaging is currently being carried out in animal models of disease, but as the field matures and with the development of more individualized medicine and the molecular targeting of new therapeutics, clinical translation is inevitable and will likely forever change our approach to diagnostic imaging. This review provides an introduction to the field of molecular imaging for readers who are not experts in the biological sciences and discusses the opportunities to apply a broad range of imaging technologies to better understand the biology of human health and disease. It also provides a brief review of the imaging technology (particularly for x-ray, nuclear and optical imaging) that is being developed to support this new field.

  7. Observing temporal order in living processes: on the role of time in embryology on the cell level in the 1870s and post-2000.

    PubMed

    Bock von Wülfingen, Bettina

    2015-03-01

    The article analyses the role of time in the visual culture of two phases in embryological research: at the end of the nineteenth century, and in the years around 2000. The first case study involves microscopical cytology, the second reproductive genetics. In the 1870s we observe the first of a series of abstractions in research methodology on conception and development, moving from a method propagated as the observation of the "real" living object to the production of stained and fixated objects that are then aligned in temporal order. This process of abstraction ultimately fosters a dissociation between space and time in the research phenomenon, which after 2000 is problematized and explicitly tackled in embryology. Mass data computing made it possible to partially re-include temporal complexity in certain, though not all, fields of reproductive genetics. Here research question, instrument and modelling interact in ways that produce very different temporal relationships. Specifically, this article suggests that the different techniques in the late nineteenth century and around 2000 were employed in order to align the time of the researcher with that of the phenomenon and to economize the researcher's work in interaction with the research material's own temporal challenges.

  8. Ultrafast Microscopy of Energy and Charge Transport

    NASA Astrophysics Data System (ADS)

    Huang, Libai

    The frontier in solar energy research now lies in learning how to integrate functional entities across multiple length scales to create optimal devices. Advancing the field requires transformative experimental tools that probe energy transfer processes from the nano to the meso length scales. To address this challenge, we aim to understand multi-scale energy transport across both multiple length and time scales, coupling simultaneous high spatial, structural, and temporal resolution. In my talk, I will focus on our recent progress on visualization of exciton and charge transport in solar energy harvesting materials from the nano to the mesoscale, employing ultrafast optical nanoscopy. With approaches that combine spatial and temporal resolutions, we have recently revealed a new singlet-mediated triplet transport mechanism in certain singlet fission materials. This work demonstrates a new triplet exciton transport mechanism leading to favorable long-range triplet exciton diffusion on the picosecond and nanosecond timescales for solar cell applications. We have also performed a direct measurement of carrier transport in space and in time by mapping carrier density with simultaneous ultrafast time resolution and 50 nm spatial precision in perovskite thin films using transient absorption microscopy. These results directly visualize long-range carrier transport of 220 nm in 2 ns for solution-processed polycrystalline CH3NH3PbI3 thin films. The spatially and temporally resolved measurements reported here underscore the importance of the local morphology and establish an important first step towards discerning the underlying transport properties of perovskite materials.

  9. Visuocortical Changes During Delay and Trace Aversive Conditioning: Evidence From Steady-State Visual Evoked Potentials

    PubMed Central

    Miskovic, Vladimir; Keil, Andreas

    2015-01-01

    The visual system is biased towards sensory cues that have been associated with danger or harm through temporal co-occurrence. An outstanding question about conditioning-induced changes in visuocortical processing is the extent to which they are driven primarily by top-down factors such as expectancy or by low-level factors such as the temporal proximity between conditioned stimuli and aversive outcomes. Here, we examined this question using two different differential aversive conditioning experiments: participants learned to associate a particular grating stimulus with an aversive noise that was presented either in close temporal proximity (delay conditioning experiment) or after a prolonged stimulus-free interval (trace conditioning experiment). In both experiments we probed cue-related cortical responses by recording steady-state visual evoked potentials (ssVEPs). Although behavioral ratings indicated that all participants successfully learned to discriminate between the grating patterns that predicted the presence versus absence of the aversive noise, selective amplification of population-level responses in visual cortex for the conditioned danger signal was observed only when the grating and the noise were temporally contiguous. Our findings are in line with notions purporting that changes in the electrocortical response of visual neurons induced by aversive conditioning are a product of Hebbian associations among sensory cell assemblies rather than being driven entirely by expectancy-based, declarative processes. PMID:23398582
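
    For orientation, the sketch below shows the standard way an ssVEP amplitude is read out at the driving (flicker) frequency from the EEG spectrum, using a synthetic signal; it is a generic illustration, not this study's analysis.

```python
# Sketch of the standard ssVEP measure: amplitude of the EEG spectrum at the
# flicker frequency, here from a synthetic signal (not the study's recordings).
import numpy as np

fs, dur, drive_hz = 500, 4.0, 15.0                  # sampling rate, trial length, flicker
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(7)
eeg = 2.0 * np.sin(2 * np.pi * drive_hz * t) + rng.normal(0, 3.0, t.size)  # signal + noise

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size    # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
ssvep_amp = spectrum[np.argmin(np.abs(freqs - drive_hz))]
print(f"amplitude at {drive_hz} Hz: {ssvep_amp:.2f} (true: 2.0)")
```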

  10. Visual cortex extrastriate body-selective area activation in congenitally blind people "seeing" by using sounds.

    PubMed

    Striem-Amit, Ella; Amedi, Amir

    2014-03-17

    Vision is by far the most prevalent sense for experiencing others' body shapes, postures, actions, and intentions, and its congenital absence may dramatically hamper body-shape representation in the brain. We investigated whether the absence of visual experience and limited exposure to others' body shapes could still lead to body-shape selectivity. We taught congenitally fully-blind adults to perceive full-body shapes conveyed through a sensory-substitution algorithm topographically translating images into soundscapes [1]. Despite the limited experience of the congenitally blind with external body shapes (via touch of close-by bodies and for ~10 hr via soundscapes), once the blind could retrieve body shapes via soundscapes, they robustly activated the visual cortex, specifically the extrastriate body area (EBA; [2]). Furthermore, body selectivity versus textures, objects, and faces in both the blind and sighted control groups was not found in the temporal (auditory) or parietal (somatosensory) cortex but only in the visual EBA. Finally, resting-state data showed that the blind EBA is functionally connected to the temporal cortex temporal-parietal junction/superior temporal sulcus Theory-of-Mind areas [3]. Thus, the EBA preference is present without visual experience and with little exposure to external body-shape information, supporting the view that the brain has a sensory-independent, task-selective supramodal organization rather than a sensory-specific organization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands

    PubMed Central

    Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.

    2013-01-01

    The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
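
    A brief sketch of one of the two operational definitions above: the window of integration taken as the SOA range where temporal-order judgments remain below 75% correct, computed here from an assumed, synthetic psychometric curve.

```python
# Sketch of the TOJ-based definition of the temporal window of integration:
# the SOA range where order judgments stay below 75% correct (synthetic curve).
import numpy as np

soas = np.array([-300, -200, -100, -50, -25, 0, 25, 50, 100, 200, 300])  # ms, A minus V
# Proportion of correct order judgments: chance near 0 ms, near-perfect at large SOAs.
p_correct = 0.5 + 0.5 * (1 - np.exp(-np.abs(soas) / 120.0))

below = soas[p_correct < 0.75]
window = (below.min(), below.max()) if below.size else (0, 0)
print("temporal window of integration (ms):", window)
```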

  12. Context and competition in the capture of visual attention.

    PubMed

    Hickey, Clayton; Theeuwes, Jan

    2011-10-01

    Competition-based models of visual attention propose that perceptual ambiguity is resolved through inhibition, which is stronger when objects share a greater number of neural receptive fields (RFs). According to this theory, the misallocation of attention to a salient distractor--that is, the capture of attention--can be indexed in RF-scaled interference costs. We used this pattern to investigate distractor-related costs in visual search across several manipulations of temporal context. Distractor costs are generally larger under circumstances in which the distractor can be defined by features that have recently characterised the target, suggesting that capture occurs in these trials. However, our results show that search for a target in the presence of a salient distractor also produces RF-scaled costs when the features defining the target and distractor do not vary from trial to trial. Contextual differences in distractor costs appear to reflect something other than capture, perhaps a qualitative difference in the type of attentional mechanism deployed to the distractor.

  13. Spectrally queued feature selection for robotic visual odometry

    NASA Astrophysics Data System (ADS)

    Pirozzo, David M.; Frederick, Philip A.; Hunt, Shawn; Theisen, Bernard; Del Rose, Mike

    2011-01-01

    Over the last two decades, research in Unmanned Vehicles (UV) has rapidly progressed and become more influenced by the field of biological sciences. Researchers have investigated the mechanics of various species to improve the intrinsic air and ground mobility of UVs, explored computational aspects of the brain for the development of pattern recognition and decision algorithms, and examined the perception capabilities of numerous animals and insects. This paper describes a 3-month exploratory applied research effort performed at the US Army Research, Development and Engineering Command's (RDECOM) Tank Automotive Research, Development and Engineering Center (TARDEC) in the area of biologically inspired spectrally augmented feature selection for robotic visual odometry. The motivation for this applied research was to develop a feasibility analysis on multi-spectrally queued feature selection, with improved temporal stability, for the purposes of visual odometry. The intended application is future semi-autonomous Unmanned Ground Vehicle (UGV) control, as the richness of data sets required to enable human-like behavior in these systems has yet to be defined.
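
    To give a rough sense of how spectral information could gate feature selection for a visual-odometry front end, the sketch below detects ORB keypoints in a visible-band frame and keeps only those falling on pixels with a strong co-registered near-infrared response. The band choice, threshold, and OpenCV-based pipeline are illustrative assumptions and do not represent the TARDEC implementation.

      import cv2
      import numpy as np

      def spectrally_cued_keypoints(visible_gray, nir, nir_threshold=120, max_features=500):
          """Detect ORB keypoints in the visible band and keep those that fall on
          pixels with a strong co-registered near-infrared response (assumed cue)."""
          orb = cv2.ORB_create(nfeatures=max_features)
          keypoints, descriptors = orb.detectAndCompute(visible_gray, None)
          if descriptors is None:
              return [], None
          keep = []
          for i, kp in enumerate(keypoints):
              x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
              if nir[y, x] >= nir_threshold:       # spectral cue: NIR-bright pixels
                  keep.append(i)
          kept_kps = [keypoints[i] for i in keep]
          kept_desc = descriptors[keep] if keep else None
          return kept_kps, kept_desc

      # Illustrative use with synthetic co-registered frames.
      visible = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
      nir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
      kps, desc = spectrally_cued_keypoints(visible, nir)
      print(f"kept {len(kps)} spectrally cued features for the odometry front end")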

  14. A Flexible Approach for the Statistical Visualization of Ensemble Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potter, K.; Wilson, A.; Bremer, P.

    2009-09-29

    Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.

  15. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes

    PubMed Central

    Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452

  16. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes.

    PubMed

    Galeazzi, Juan M; Navajas, Joaquín; Mender, Bedeho M W; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant's gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views.
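
    The trace learning rule invoked in the two records above can be written in a few lines. The sketch below is a generic implementation of the standard trace rule (the weight update is driven by a temporally low-pass-filtered trace of post-synaptic activity multiplied by the current input), with arbitrary learning and trace constants rather than the parameters used in the VisNet simulations.

      import numpy as np

      def trace_learning(inputs, n_outputs=10, eta=0.05, trace_eta=0.8, seed=0):
          """Train one competitive layer with the trace rule:
              trace_t = (1 - trace_eta) * y_t + trace_eta * trace_{t-1}
              dW      = eta * trace_t (outer) x_t
          so inputs occurring close together in time come to drive the same cells."""
          rng = np.random.default_rng(seed)
          n_inputs = inputs.shape[1]
          W = rng.random((n_outputs, n_inputs))
          W /= np.linalg.norm(W, axis=1, keepdims=True)
          trace = np.zeros(n_outputs)
          for x in inputs:                      # inputs: (time, n_inputs), temporally ordered
              y = np.maximum(W @ x, 0.0)        # feed-forward activation
              winner = np.zeros_like(y)
              winner[np.argmax(y)] = y.max()    # simple winner-take-all competition
              trace = (1 - trace_eta) * winner + trace_eta * trace
              W += eta * np.outer(trace, x)     # trace-rule weight update
              W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
          return W

      # Illustrative input: successive "views" of the same configuration in temporal order.
      views = np.abs(np.random.default_rng(1).normal(size=(200, 64)))
      W = trace_learning(views)
      print("trained weight matrix:", W.shape)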

  17. Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion.

    PubMed

    Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki

    2009-02-01

    The human brain uses visual motion inputs not only for generating subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating while changing the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found quantitative agreement in peak latency between the two responses (approximately 100-150 ms) and correlated latency changes as the spatiotemporal frequency varied. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.

  18. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing

    PubMed Central

    Zhao, Jing; Kwok, Rosa K. W.; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2017-01-01

    Reading fluency is a critical skill to improve the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is the more dominant reading mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively. These two tasks reflected the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with the reading speed of both oral and silent reading, the comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral reading mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ at the early stage of basic visual coding. The current results might also reveal that the language characteristics of Chinese modulate the relationship between visual rapid processing and reading fluency. PMID:28119663

  19. Underlying Skills of Oral and Silent Reading Fluency in Chinese: Perspective of Visual Rapid Processing.

    PubMed

    Zhao, Jing; Kwok, Rosa K W; Liu, Menglian; Liu, Hanlong; Huang, Chen

    2016-01-01

    Reading fluency is a critical skill to improve the quality of our daily life and working efficiency. The majority of previous studies focused on oral reading fluency rather than silent reading fluency, which is the more dominant reading mode used in middle and high school and for leisure reading. It is still unclear whether oral and silent reading fluency involve the same underlying skills. To address this issue, the present study examined the relationship between visual rapid processing and Chinese reading fluency in different modes. Fifty-eight undergraduate students took part in the experiment. The phantom contour paradigm and the visual 1-back task were adopted to measure visual rapid temporal and simultaneous processing, respectively. These two tasks reflected the temporal and spatial dimensions of visual rapid processing separately. We recorded the temporal threshold in the phantom contour task, as well as reaction time and accuracy in the visual 1-back task. Reading fluency was measured at both the single-character and sentence levels. Fluent reading of single characters was assessed with a paper-and-pencil lexical decision task, and a sentence verification task was developed to examine reading fluency at the sentence level. The reading fluency test at each level was conducted twice (i.e., oral reading and silent reading). Reading speed and accuracy were recorded. The correlation analysis showed that the temporal threshold in the phantom contour task did not correlate with the scores of the reading fluency tests. Although the reaction time in the visual 1-back task correlated with the reading speed of both oral and silent reading, the comparison of the correlation coefficients revealed a closer relationship between visual rapid simultaneous processing and silent reading. Furthermore, visual rapid simultaneous processing contributed significantly to reading fluency in the silent mode but not in the oral reading mode. These findings suggest that the underlying mechanisms of oral and silent reading fluency differ at the early stage of basic visual coding. The current results might also reveal that the language characteristics of Chinese modulate the relationship between visual rapid processing and reading fluency.
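
    The key inferential step in the two records above, comparing two correlations that share a common variable, can be approximated with a simple bootstrap as sketched below: participants are resampled and a confidence interval is formed for the difference between the two dependent correlations. The synthetic data and variable names are illustrative only and are not taken from the study.

      import numpy as np

      def bootstrap_corr_difference(x, y1, y2, n_boot=5000, seed=0):
          """Bootstrap CI for corr(x, y1) - corr(x, y2) over the same participants,
          a simple way to compare two dependent correlation coefficients."""
          rng = np.random.default_rng(seed)
          n = len(x)
          diffs = np.empty(n_boot)
          for b in range(n_boot):
              idx = rng.integers(0, n, n)
              diffs[b] = (np.corrcoef(x[idx], y1[idx])[0, 1]
                          - np.corrcoef(x[idx], y2[idx])[0, 1])
          return np.percentile(diffs, [2.5, 97.5])

      # Illustrative data: 1-back reaction time vs. silent and oral reading speed.
      rng = np.random.default_rng(42)
      rt_1back = rng.normal(500, 50, 58)                      # ms, 58 participants
      silent_speed = -0.6 * rt_1back + rng.normal(0, 40, 58)  # stronger association
      oral_speed = -0.3 * rt_1back + rng.normal(0, 40, 58)    # weaker association
      ci = bootstrap_corr_difference(rt_1back, silent_speed, oral_speed)
      print("95% CI for r(silent) - r(oral):", ci)  # CI excluding 0 => reliable difference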

  20. Visual Memory in Post-Anterior Right Temporal Lobectomy Patients and Adult Normative Data for the Brown Location Test (BLT)

    PubMed Central

    Brown, Franklin C.; Tuttle, Erin; Westerveld, Michael; Ferraro, F. Richard; Chmielowiec, Teresa; Vandemore, Michelle; Gibson-Beverly, Gina; Bemus, Lisa; Roth, Robert M.; Blumenfeld, Hal; Spencer, Dennis D.; Spencer, Susan S

    2010-01-01

    Several large and meta-analytic studies have failed to support a consistent relationship between visual or “nonverbal” memory deficits and right mesial temporal lobe changes. However, the Brown Location Test (BLT) is a recently developed dot location learning and memory test that uses a nonsymmetrical array and provides control over many of the confounding variables (e.g., verbal influence and drawing requirements) inherent in other measures of visual memory. In the present investigation, we evaluated the clinical utility of the BLT in patients who had undergone left or right anterior mesial temporal lobectomies. We also provide adult normative data of 298 healthy adults in order to provide standardized scores. Results revealed significantly worse performance on the BLT in the right as compared to left lobectomy group and the healthy adult normative sample. The present findings support a role for the right anterior-mesial temporal lobe in dot location learning and memory. PMID:20056493

  1. Visual Benefits in Apparent Motion Displays: Automatically Driven Spatial and Temporal Anticipation Are Partially Dissociated

    PubMed Central

    Ahrens, Merle-Marie; Veniero, Domenica; Gross, Joachim; Harvey, Monika; Thut, Gregor

    2015-01-01

    Many behaviourally relevant sensory events such as motion stimuli and speech have an intrinsic spatio-temporal structure. This will engage intentional and most likely unintentional (automatic) prediction mechanisms enhancing the perception of upcoming stimuli in the event stream. Here we sought to probe the anticipatory processes that are automatically driven by rhythmic input streams in terms of their spatial and temporal components. To this end, we employed an apparent visual motion paradigm testing the effects of pre-target motion on lateralized visual target discrimination. The motion stimuli either moved towards or away from peripheral target positions (valid vs. invalid spatial motion cueing) at a rhythmic or arrhythmic pace (valid vs. invalid temporal motion cueing). Crucially, we emphasized automatic motion-induced anticipatory processes by rendering the motion stimuli non-predictive of upcoming target position (by design) and task-irrelevant (by instruction), and by creating instead endogenous (orthogonal) expectations using symbolic cueing. Our data revealed that the apparent motion cues automatically engaged both spatial and temporal anticipatory processes, but that these processes were dissociated. We further found evidence for lateralisation of anticipatory temporal but not spatial processes. This indicates that distinct mechanisms may drive automatic spatial and temporal extrapolation of upcoming events from rhythmic event streams. This contrasts with previous findings that instead suggest an interaction between spatial and temporal attention processes when endogenously driven. Our results further highlight the need for isolating intentional from unintentional processes for better understanding the various anticipatory mechanisms engaged in processing behaviourally relevant stimuli with predictable spatio-temporal structure such as motion and speech. PMID:26623650

  2. A system to simulate and reproduce audio-visual environments for spatial hearing research.

    PubMed

    Seeber, Bernhard U; Kerber, Stefan; Hafter, Ervin R

    2010-02-01

    The article reports the experience gained from two implementations of the "Simulated Open-Field Environment" (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a "Swiss army knife" tool for auditory, spatial hearing and audio-visual research. Crown Copyright 2009. Published by Elsevier B.V. All rights reserved.

  3. A System to Simulate and Reproduce Audio-Visual Environments for Spatial Hearing Research

    PubMed Central

    Seeber, Bernhard U.; Kerber, Stefan; Hafter, Ervin R.

    2009-01-01

    The article reports the experience gained from two implementations of the “Simulated Open-Field Environment” (SOFE), a setup that allows sounds to be played at calibrated levels over a wide frequency range from multiple loudspeakers in an anechoic chamber. Playing sounds from loudspeakers in the free-field has the advantage that each participant listens with their own ears, and individual characteristics of the ears are captured in the sound they hear. This makes an easy and accurate comparison between various listeners with and without hearing devices possible. The SOFE uses custom calibration software to assure individual equalization of each loudspeaker. Room simulation software creates the spatio-temporal reflection pattern of sound sources in rooms which is played via the SOFE loudspeakers. The sound playback system is complemented by a video projection facility which can be used to collect or give feedback or to study auditory-visual interaction. The article discusses acoustical and technical requirements for accurate sound playback against the specific needs in hearing research. An introduction to software concepts is given which allow easy, high-level control of the setup and thus fast experimental development, turning the SOFE into a “Swiss army knife” tool for auditory, spatial hearing and audio-visual research. PMID:19909802
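
    The per-loudspeaker equalization mentioned in the two SOFE records can be illustrated with a standard regularized inverse-filter design, sketched below for a toy impulse response. The FFT length, regularization constant, and impulse response are illustrative assumptions, not the SOFE calibration software.

      import numpy as np

      def inverse_eq_filter(impulse_response, n_fft=4096, reg=1e-3):
          """Design a regularized frequency-domain inverse filter that flattens the
          magnitude response of one loudspeaker (a common equalization approach)."""
          H = np.fft.rfft(impulse_response, n_fft)
          inv = np.conj(H) / (np.abs(H) ** 2 + reg)   # Tikhonov-regularized inversion
          eq = np.fft.irfft(inv, n_fft)
          return np.roll(eq, n_fft // 2)              # shift to make the filter roughly causal

      # Illustrative impulse response: a direct sound plus one early reflection.
      fs = 48000
      ir = np.zeros(1024)
      ir[0] = 1.0
      ir[240] = 0.35                                   # reflection at ~5 ms
      eq = inverse_eq_filter(ir)
      print("designed a", eq.size, "tap equalization filter")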

  4. The influence of surround suppression on adaptation effects in primary visual cortex

    PubMed Central

    Wissig, Stephanie C.

    2012-01-01

    Adaptation, the prolonged presentation of stimuli, has been used to probe mechanisms of visual processing in physiological, imaging, and perceptual studies. Previous neurophysiological studies have measured adaptation effects by using stimuli tailored to evoke robust responses in individual neurons. This approach provides an incomplete view of how an adapter alters the representation of sensory stimuli by a population of neurons with diverse functional properties. We implanted microelectrode arrays in primary visual cortex (V1) of macaque monkeys and measured orientation tuning and contrast sensitivity in populations of neurons before and after prolonged adaptation. Whereas previous studies in V1 have reported that adaptation causes stimulus-specific suppression of responsivity and repulsive shifts in tuning preference, we have found that adaptation can also lead to response facilitation and shifts in tuning toward the adapter. To explain this range of effects, we have proposed and tested a simple model that employs stimulus-specific suppression in both the receptive field and the spatial surround. The predicted effects on tuning depend on the relative drive provided by the adapter to these two receptive field components. Our data reveal that adaptation can have a much richer repertoire of effects on neuronal responsivity and tuning than previously considered and suggest an intimate mechanistic relationship between spatial and temporal contextual effects. PMID:22423001

  5. Brain maps, great and small: lessons from comparative studies of primate visual cortical organization

    PubMed Central

    Rosa, Marcello G.P; Tweedale, Rowan

    2005-01-01

    In this paper, we review evidence from comparative studies of primate cortical organization, highlighting recent findings and hypotheses that may help us to understand the rules governing evolutionary changes of the cortical map and the process of formation of areas during development. We argue that clear unequivocal views of cortical areas and their homologies are more likely to emerge for ‘core’ fields, including the primary sensory areas, which are specified early in development by precise molecular identification steps. In primates, the middle temporal area is probably one of these primordial cortical fields. Areas that form at progressively later stages of development correspond to progressively more recent evolutionary events, their development being less firmly anchored in molecular specification. The certainty with which areal boundaries can be delimited, and likely homologies can be assigned, becomes increasingly blurred in parallel with this evolutionary/developmental sequence. For example, while current concepts for the definition of cortical areas have been vindicated in allowing a clarification of the organization of the New World monkey ‘third tier’ visual cortex (the third and dorsomedial areas, V3 and DM), our analyses suggest that more flexible mapping criteria may be needed to unravel the organization of higher-order visual association and polysensory areas. PMID:15937007

  6. A novel mechanism for mechanosensory-based rheotaxis in larval zebrafish.

    PubMed

    Oteiza, Pablo; Odstrcil, Iris; Lauder, George; Portugues, Ruben; Engert, Florian

    2017-07-27

    When flying or swimming, animals must adjust their own movement to compensate for displacements induced by the flow of the surrounding air or water. These flow-induced displacements can most easily be detected as visual whole-field motion with respect to the animal's frame of reference. Despite this, many aquatic animals consistently orient and swim against oncoming flows (a behaviour known as rheotaxis) even in the absence of visual cues. How animals achieve this task, and its underlying sensory basis, is still unknown. Here we show that, in the absence of visual information, larval zebrafish (Danio rerio) perform rheotaxis by using flow velocity gradients as navigational cues. We present behavioural data that support a novel algorithm based on such local velocity gradients that fish use to avoid getting dragged by flowing water. Specifically, we show that fish use their mechanosensory lateral line to first sense the curl (or vorticity) of the local velocity vector field to detect the presence of flow and, second, to measure its temporal change after swim bouts to deduce flow direction. These results reveal an elegant navigational strategy based on the sensing of flow velocity gradients and provide a comprehensive behavioural algorithm, also applicable for robotic design, that generalizes to a wide range of animal behaviours in moving fluids.
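
    The sensing step of the proposed behavioural algorithm, estimating the curl (vorticity) of the locally sampled velocity field and tracking its change across a swim bout, can be sketched as follows. The grid sampling, the toy shear flow, and the turn-decision rule are illustrative assumptions rather than the algorithm reported in the paper.

      import numpy as np

      def local_vorticity(u, v, dx=1.0):
          """2-D vorticity (curl z-component) from velocity components sampled on a
          regular grid around the fish: omega = dv/dx - du/dy."""
          dv_dx = np.gradient(v, dx, axis=1)
          du_dy = np.gradient(u, dx, axis=0)
          return dv_dx - du_dy

      def infer_flow_direction(omega_before, omega_after):
          """Toy decision rule: flow is detected when mean |vorticity| is appreciable,
          and the sign of its change across a swim bout picks a turn direction."""
          if abs(np.mean(omega_before)) < 1e-6:
              return "no flow detected"
          change = np.mean(omega_after) - np.mean(omega_before)
          return "turn left" if change > 0 else "turn right"

      # Illustrative laminar shear flow near a wall: u increases linearly with y.
      y, x = np.mgrid[0:16, 0:16].astype(float)
      u_before, v_before = 0.1 * y, np.zeros_like(y)   # shear => non-zero vorticity
      u_after, v_after = 0.05 * y, np.zeros_like(y)    # weaker shear after a bout
      w0 = local_vorticity(u_before, v_before)
      w1 = local_vorticity(u_after, v_after)
      print(infer_flow_direction(w0, w1))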

  7. The capacity limitations of orientation summary statistics

    PubMed Central

    Attarha, Mouna; Moore, Cathleen M.

    2015-01-01

    The simultaneous–sequential method was used to test the processing capacity of establishing mean orientation summaries. Four clusters of oriented Gabor patches were presented in the peripheral visual field. One of the clusters had a mean orientation that was tilted either left or right while the mean orientations of the other three clusters were roughly vertical. All four clusters were presented at the same time in the simultaneous condition whereas the clusters appeared in temporal subsets of two in the sequential condition. Performance was lower when the means of all four clusters had to be processed concurrently than when only two had to be processed in the same amount of time. The advantage for establishing fewer summaries at a given time indicates that the processing of mean orientation engages limited-capacity processes (Experiment 1). This limitation cannot be attributed to crowding, low target-distractor discriminability, or a limited-capacity comparison process (Experiments 2 and 3). In contrast to the limitations of establishing multiple summary representations, establishing a single summary representation unfolds without interference (Experiment 4). When interpreted in the context of recent work on the capacity of summary statistics, these findings encourage reevaluation of the view that early visual perception consists of summary statistic representations that unfold independently across multiple areas of the visual field. PMID:25810160

  8. The associations between multisensory temporal processing and symptoms of schizophrenia.

    PubMed

    Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T

    2017-01-01

    Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Neural signatures of lexical tone reading.

    PubMed

    Kwok, Veronica P Y; Wang, Tianfu; Chen, Siping; Yakpo, Kofi; Zhu, Linlin; Fox, Peter T; Tan, Li Hai

    2015-01-01

    Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have exclusively focused on tone perception of the spoken language, and little is known about lexical tone processing in reading visual words and its associated brain mechanisms. In this study, we performed two experiments to identify neural substrates in Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters was mediated by strong brain activation in bilateral frontal regions, left inferior parietal lobule, left posterior middle/medial temporal gyrus, left inferior temporal region, bilateral visual systems, and cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In an activation likelihood estimation (ALE) meta-analysis combining the results of relevant published studies, we then attempted to elucidate whether the left temporal cortex activity identified in Experiment 1 was consistent with that found in previous studies of auditory lexical tone perception. ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent. © 2014 Wiley Periodicals, Inc.

  10. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  11. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  12. Left temporal and temporoparietal brain activity depends on depth of word encoding: a magnetoencephalographic study in healthy young subjects.

    PubMed

    Walla, P; Hufnagl, B; Lindinger, G; Imhof, H; Deecke, L; Lang, W

    2001-03-01

    Using a 143-channel whole-head magnetoencephalograph (MEG), we recorded the temporal changes of brain activity from 26 healthy young subjects (14 females) related to shallow perceptual and deep semantic word encoding. During subsequent recognition tests, the subjects had to recognize the previously encoded words, which were interspersed with new words. The resulting mean memory performances across all subjects clearly mirrored the different levels of encoding. The grand averaged event-related fields (ERFs) associated with perceptual and semantic word encoding differed significantly between 200 and 550 ms after stimulus onset, mainly over left superior temporal and left superior parietal sensors. Semantic encoding elicited higher brain activity than perceptual encoding. Source localization procedures revealed that neural populations of the left temporal and temporoparietal brain areas showed different activity strengths across the whole group of subjects depending on depth of word encoding. We suggest that the higher brain activity associated with deep encoding as compared to shallow encoding was due to the involvement of more neural systems during the processing of visually presented words. Deep encoding required more energy than shallow encoding but nevertheless led to better memory performance. Copyright 2001 Academic Press.

  13. Animation of natural scene by virtual eye-movements evokes high precision and low noise in V1 neurons

    PubMed Central

    Baudot, Pierre; Levy, Manuel; Marre, Olivier; Monier, Cyril; Pananceau, Marc; Frégnac, Yves

    2013-01-01

    Synaptic noise is thought to be a limiting factor for computational efficiency in the brain. In visual cortex (V1), ongoing activity is present in vivo, and spiking responses to simple stimuli are highly unreliable across trials. Stimulus statistics used to plot receptive fields, however, are quite different from those experienced during natural visuomotor exploration. We recorded V1 neurons intracellularly in the anaesthetized and paralyzed cat and compared their spiking and synaptic responses to full field natural images animated by simulated eye-movements to those evoked by simpler (grating) or higher dimensionality statistics (dense noise). In most cells, natural scene animation was the only condition where high temporal precision (in the 10–20 ms range) was maintained during sparse and reliable activity. At the subthreshold level, irregular but highly reproducible membrane potential dynamics were observed, even during long (several 100 ms) “spike-less” periods. We showed that both the spatial structure of natural scenes and the temporal dynamics of eye-movements increase the signal-to-noise ratio by a non-linear amplification of the signal combined with a reduction of the subthreshold contextual noise. These data support the view that the sparsening and the time precision of the neural code in V1 may depend primarily on three factors: (1) broadband input spectrum: the bandwidth must be rich enough for recruiting optimally the diversity of spatial and time constants during recurrent processing; (2) tight temporal interplay of excitation and inhibition: conductance measurements demonstrate that natural scene statistics narrow selectively the duration of the spiking opportunity window during which the balance between excitation and inhibition changes transiently and reversibly; (3) signal energy in the lower frequency band: a minimal level of power is needed below 10 Hz to reach consistently the spiking threshold, a situation rarely reached with visual dense noise. PMID:24409121

  14. Animation of natural scene by virtual eye-movements evokes high precision and low noise in V1 neurons.

    PubMed

    Baudot, Pierre; Levy, Manuel; Marre, Olivier; Monier, Cyril; Pananceau, Marc; Frégnac, Yves

    2013-01-01

    Synaptic noise is thought to be a limiting factor for computational efficiency in the brain. In visual cortex (V1), ongoing activity is present in vivo, and spiking responses to simple stimuli are highly unreliable across trials. Stimulus statistics used to plot receptive fields, however, are quite different from those experienced during natural visuomotor exploration. We recorded V1 neurons intracellularly in the anaesthetized and paralyzed cat and compared their spiking and synaptic responses to full field natural images animated by simulated eye-movements to those evoked by simpler (grating) or higher dimensionality statistics (dense noise). In most cells, natural scene animation was the only condition where high temporal precision (in the 10-20 ms range) was maintained during sparse and reliable activity. At the subthreshold level, irregular but highly reproducible membrane potential dynamics were observed, even during long (several 100 ms) "spike-less" periods. We showed that both the spatial structure of natural scenes and the temporal dynamics of eye-movements increase the signal-to-noise ratio by a non-linear amplification of the signal combined with a reduction of the subthreshold contextual noise. These data support the view that the sparsening and the time precision of the neural code in V1 may depend primarily on three factors: (1) broadband input spectrum: the bandwidth must be rich enough for recruiting optimally the diversity of spatial and time constants during recurrent processing; (2) tight temporal interplay of excitation and inhibition: conductance measurements demonstrate that natural scene statistics narrow selectively the duration of the spiking opportunity window during which the balance between excitation and inhibition changes transiently and reversibly; (3) signal energy in the lower frequency band: a minimal level of power is needed below 10 Hz to reach consistently the spiking threshold, a situation rarely reached with visual dense noise.

  15. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
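
    The kinds of automated descriptors listed in this record (defect area, the rapidity of change at the defect boundary, and defect volume) can be illustrated on a gridded visual-field representation as sketched below. The defect threshold, grid spacing, and formulas are illustrative assumptions, not the patented processes described in the record.

      import numpy as np

      def defect_metrics(sensitivity, normal, threshold=5.0, spacing_deg=6.0):
          """Quantify a visual-field defect from a grid of sensitivities (dB).
          - area: number of defect points times the area each point represents
          - volume: total sensitivity loss integrated over the defect
          - boundary steepness: mean loss gradient at the defect border."""
          loss = normal - sensitivity                          # dB of loss per point
          defect = loss > threshold
          point_area = spacing_deg ** 2                        # deg^2 per grid point
          area = defect.sum() * point_area
          volume = loss[defect].sum() * point_area             # dB * deg^2
          gy, gx = np.gradient(loss, spacing_deg)
          grad_mag = np.hypot(gx, gy)
          # boundary = defect points with at least one non-defect 4-neighbour
          pad = np.pad(defect, 1, constant_values=False)
          interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
          boundary = defect & ~interior
          steepness = grad_mag[boundary].mean() if boundary.any() else 0.0
          return area, volume, steepness

      # Illustrative 10x10 field with a deep defect in one quadrant.
      normal = np.full((10, 10), 30.0)
      field = normal.copy()
      field[1:5, 6:10] = 8.0
      print(defect_metrics(field, normal))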

  16. Asymmetric temporal integration of layer 4 and layer 2/3 inputs in visual cortex.

    PubMed

    Hang, Giao B; Dan, Yang

    2011-01-01

    Neocortical neurons in vivo receive concurrent synaptic inputs from multiple sources, including feedforward, horizontal, and feedback pathways. Layer 2/3 of the visual cortex receives feedforward input from layer 4 and horizontal input from layer 2/3. Firing of the pyramidal neurons, which carries the output to higher cortical areas, depends critically on the interaction of these pathways. Here we examined synaptic integration of inputs from layer 4 and layer 2/3 in rat visual cortical slices. We found that the integration is sublinear and temporally asymmetric, with larger responses if layer 2/3 input preceded layer 4 input. The sublinearity depended on inhibition, and the asymmetry was largely attributable to the difference between the two inhibitory inputs. Interestingly, the asymmetric integration was specific to pyramidal neurons, and it strongly affected their spiking output. Thus via cortical inhibition, the temporal order of activation of layer 2/3 and layer 4 pathways can exert powerful control of cortical output during visual processing.

  17. Low-cost, smartphone based frequency doubling technology visual field testing using virtual reality (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Alawa, Karam A.; Sayed, Mohamed; Arboleda, Alejandro; Durkee, Heather A.; Aguilar, Mariela C.; Lee, Richard K.

    2017-02-01

    Glaucoma is the leading cause of irreversible blindness worldwide. Due to its wide prevalence, effective screening tools are necessary. The purpose of this project is to design and evaluate a system that enables portable, cost-effective, smartphone-based visual field screening based on frequency doubling technology. The system comprises an Android smartphone to display frequency doubling stimuli and handle processing, a Bluetooth remote for user input, and a virtual reality headset to simulate the exam. The LG Nexus 5 smartphone and BoboVR Z3 virtual reality headset were used for their screen size and lens configuration, respectively. The system is capable of running the C-20, N-30, 24-2, and 30-2 testing patterns. Unlike the existing system, the smartphone FDT tests both eyes concurrently by showing the same background to both eyes but only displaying the stimulus to one eye at a time. Both the Humphrey Zeiss FDT and the smartphone FDT were tested on five subjects without a history of ocular disease using the C-20 testing pattern. The smartphone FDT successfully produced frequency doubling stimuli at the correct spatial and temporal frequency. Subjects could not tell which eye was being tested. All five subjects preferred the smartphone FDT to the Humphrey Zeiss FDT due to comfort and ease of use. The smartphone FDT is a low-cost, portable visual field screening device that can be used as a screening tool for glaucoma.
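
    A frequency-doubling stimulus of the kind the smartphone display must render is a low-spatial-frequency sinusoidal grating whose contrast is counterphase-flickered at a high temporal frequency. The sketch below generates such frames with NumPy; the spatial and temporal frequencies, field size, and frame rate are illustrative assumptions, not the values used by the Humphrey Zeiss or smartphone FDT.

      import numpy as np

      def fdt_stimulus(spatial_freq_cpd=0.25, temporal_freq_hz=25.0, size_deg=10.0,
                       pixels_per_deg=32, frame_rate=60, duration_s=0.5, contrast=0.9):
          """Frames of a frequency-doubling stimulus: a low-spatial-frequency grating
          counterphase-flickered at a high temporal frequency (illustrative values)."""
          n_pix = int(size_deg * pixels_per_deg)
          x_deg = np.arange(n_pix) / pixels_per_deg
          grating = np.sin(2 * np.pi * spatial_freq_cpd * x_deg)[None, :].repeat(n_pix, 0)
          n_frames = int(duration_s * frame_rate)
          t = np.arange(n_frames) / frame_rate
          flicker = np.sin(2 * np.pi * temporal_freq_hz * t)       # counterphase envelope
          # frames in [0, 1] luminance units around a mean luminance of 0.5
          frames = 0.5 + 0.5 * contrast * flicker[:, None, None] * grating[None, :, :]
          return frames

      frames = fdt_stimulus()
      print(frames.shape, frames.min(), frames.max())   # (30, 320, 320), values within [0, 1]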

  18. Temporal kinetics of prefrontal modulation of the extrastriate cortex during visual attention.

    PubMed

    Yago, Elena; Duarte, Audrey; Wong, Ting; Barceló, Francisco; Knight, Robert T

    2004-12-01

    Single-unit, event-related potential (ERP), and neuroimaging studies have implicated the prefrontal cortex (PFC) in top-down control of attention and working memory. We conducted an experiment in patients with unilateral PFC damage (n = 8) to assess the temporal kinetics of PFC-extrastriate interactions during visual attention. Subjects alternated attention between the left and the right hemifields in successive runs while they detected target stimuli embedded in streams of repetitive task-irrelevant stimuli (standards). The design enabled us to examine tonic (spatial selection) and phasic (feature selection) PFC-extrastriate interactions. PFC damage impaired performance in the visual field contralateral to lesions, as manifested by both larger reaction times and error rates. Assessment of the extrastriate P1 ERP revealed that the PFC exerts a tonic (spatial selection) excitatory input to the ipsilateral extrastriate cortex as early as 100 msec post stimulus delivery. The PFC exerts a second phasic (feature selection) excitatory extrastriate modulation from 180 to 300 msec, as evidenced by reductions in selection negativity after damage. Finally, reductions of the N2 ERP to target stimuli supports the notion that the PFC exerts a third phasic (target selection) signal necessary for successful template matching during postselection analysis of target features. The results provide electrophysiological evidence of three distinct tonic and phasic PFC inputs to the extrastriate cortex in the initial few hundred milliseconds of stimulus processing. Damage to this network appears to underlie the pervasive deficits in attention observed in patients with prefrontal lesions.

  19. Visual guidance of forward flight in hummingbirds reveals control based on image features instead of pattern velocity.

    PubMed

    Dakin, Roslyn; Fellows, Tyee K; Altshuler, Douglas L

    2016-08-02

    Information about self-motion and obstacles in the environment is encoded by optic flow, the movement of images on the eye. Decades of research have revealed that flying insects control speed, altitude, and trajectory by a simple strategy of maintaining or balancing the translational velocity of images on the eyes, known as pattern velocity. It has been proposed that birds may use a similar algorithm but this hypothesis has not been tested directly. We examined the influence of pattern velocity on avian flight by manipulating the motion of patterns on the walls of a tunnel traversed by Anna's hummingbirds. Contrary to prediction, we found that lateral course control is not based on regulating nasal-to-temporal pattern velocity. Instead, birds closely monitored feature height in the vertical axis, and steered away from taller features even in the absence of nasal-to-temporal pattern velocity cues. For vertical course control, we observed that birds adjusted their flight altitude in response to upward motion of the horizontal plane, which simulates vertical descent. Collectively, our results suggest that birds avoid collisions using visual cues in the vertical axis. Specifically, we propose that birds monitor the vertical extent of features in the lateral visual field to assess distances to the side, and vertical pattern velocity to avoid collisions with the ground. These distinct strategies may derive from greater need to avoid collisions in birds, compared with small insects.

  20. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  1. A Pencil Rescues Impaired Performance on a Visual Discrimination Task in Patients with Medial Temporal Lobe Lesions

    ERIC Educational Resources Information Center

    Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.

    2013-01-01

    We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…

  2. Multiple asynchronous stimulus- and task-dependent hierarchies (STDH) within the visual brain's parallel processing systems.

    PubMed

    Zeki, Semir

    2016-10-01

    Results from a variety of sources, some many years old, lead ineluctably to a re-appraisal of the twin strategies of hierarchical and parallel processing used by the brain to construct an image of the visual world. Contrary to common supposition, there are at least three 'feed-forward' anatomical hierarchies that reach the primary visual cortex (V1) and the specialized visual areas outside it, in parallel. These anatomical hierarchies do not conform to the temporal order with which visual signals reach the specialized visual areas through V1. Furthermore, neither the anatomical hierarchies nor the temporal order of activation through V1 predict the perceptual hierarchies. The latter shows that we see (and become aware of) different visual attributes at different times, with colour leading form (orientation) and directional visual motion, even though signals from fast-moving, high-contrast stimuli are among the earliest to reach the visual cortex (of area V5). Parallel processing, on the other hand, is much more ubiquitous than commonly supposed but is subject to a barely noticed but fundamental aspect of brain operations, namely that different parallel systems operate asynchronously with respect to each other and reach perceptual endpoints at different times. This re-assessment leads to the conclusion that the visual brain is constituted of multiple, parallel and asynchronously operating task- and stimulus-dependent hierarchies (STDH); which of these parallel anatomical hierarchies have temporal and perceptual precedence at any given moment is stimulus and task related, and dependent on the visual brain's ability to undertake multiple operations asynchronously. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  3. Specificity and timescales of cortical adaptation as inferences about natural movie statistics.

    PubMed

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-10-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.

  4. Specificity and timescales of cortical adaptation as inferences about natural movie statistics

    PubMed Central

    Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia

    2016-01-01

    Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation. PMID:27699416
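
    The core computation described in these two records, divisive normalization of the present input by past inputs in proportion to their inferred statistical dependence, can be caricatured in a few lines. The sketch below is a schematic reading of that model class with hand-picked constants; it is not the authors' Bayesian implementation, which learns the dependence from natural movie statistics.

      import numpy as np

      def temporal_divisive_normalization(drive, dependence, sigma=0.1, window=5):
          """Normalize the present response by recent responses, weighted by how
          statistically dependent the present input is inferred to be on the past
          (a scalar in [0, 1] per time step)."""
          out = np.empty_like(drive)
          for t in range(len(drive)):
              past = drive[max(0, t - window):t]
              pool = dependence[t] * past.mean() if past.size else 0.0
              out[t] = drive[t] / (sigma + drive[t] + pool)   # divisive normalization
          return out

      # Illustrative: constant drive; adaptation-like suppression appears only when
      # the past is inferred to be statistically dependent on the present input.
      drive = np.full(50, 1.0)
      independent = temporal_divisive_normalization(drive, np.zeros(50))
      dependent = temporal_divisive_normalization(drive, np.ones(50))
      print("steady response, independent past:", independent[-1].round(3),
            "dependent past:", dependent[-1].round(3))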

  5. Disturbed default mode network connectivity patterns in Alzheimer's disease associated with visual processing.

    PubMed

    Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena

    2014-01-01

    Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.
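
    The psychophysiological interaction (PPI) analysis used in this record boils down to a regression in which the coefficient of a task-by-seed interaction term indexes task-dependent coupling with the seed region. The sketch below recovers such a connectivity decrease from synthetic time series; it omits the haemodynamic deconvolution and convolution steps of a real fMRI PPI analysis, and all variable names and values are illustrative.

      import numpy as np

      def ppi_regression(seed_ts, task_regressor, voxel_ts):
          """Fit y = b0 + b1*task + b2*seed + b3*(task x seed) + e for one voxel.
          The interaction coefficient b3 is the PPI effect: a task-dependent change
          in coupling with the seed region (here, the posterior cingulate)."""
          interaction = seed_ts * task_regressor
          X = np.column_stack([np.ones_like(seed_ts), task_regressor, seed_ts, interaction])
          beta, *_ = np.linalg.lstsq(X, voxel_ts, rcond=None)
          return beta[3]

      # Illustrative time series (arbitrary units); coupling weakens during the task.
      rng = np.random.default_rng(0)
      n = 200
      task = np.repeat([0.0, 1.0], n // 2)                 # baseline block then task block
      seed = rng.normal(size=n)
      voxel = 0.8 * seed - 0.5 * seed * task + rng.normal(scale=0.3, size=n)
      print("PPI (task-dependent coupling change):", round(ppi_regression(seed, task, voxel), 2))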

  6. Spatio-temporal visualization of air-sea CO2 flux and carbon budget using volume rendering

    NASA Astrophysics Data System (ADS)

    Du, Zhenhong; Fang, Lei; Bai, Yan; Zhang, Feng; Liu, Renyi

    2015-04-01

    This paper presents a novel visualization method to show the spatio-temporal dynamics of carbon sinks and sources, and carbon fluxes in the ocean carbon cycle. The air-sea carbon budget and its process of accumulation are demonstrated in the spatial dimension, while the distribution pattern and variation of CO2 flux are expressed by color changes. In this way, we unite spatial and temporal characteristics of satellite data through visualization. A GPU-based direct volume rendering technique using half-angle slicing is adopted to dynamically visualize the released or absorbed CO2 gas with shadow effects. A data model is designed to generate four-dimensional (4D) data from satellite-derived air-sea CO2 flux products, and an out-of-core scheduling strategy is also proposed for on-the-fly rendering of time series of satellite data. The presented 4D visualization method is implemented on graphics cards with vertex, geometry and fragment shaders. It provides a visually realistic simulation and user interaction for real-time rendering. This approach has been integrated into the Information System of Ocean Satellite Monitoring for Air-sea CO2 Flux (IssCO2) for the research and assessment of air-sea CO2 flux in the China Seas.
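
    One ingredient of the rendering pipeline described here, mapping signed air-sea CO2 flux to colour and opacity before volume rendering, can be sketched with a simple transfer function. The colour assignment, scaling, and the synthetic 4D stack below are illustrative assumptions and are not taken from the IssCO2 system.

      import numpy as np

      def flux_transfer_function(flux, max_abs_flux=10.0):
          """Map an air-sea CO2 flux volume (sign = source/sink) to RGBA values for
          volume rendering: sources in red, sinks in blue, opacity ~ magnitude."""
          norm = np.clip(flux / max_abs_flux, -1.0, 1.0)
          rgba = np.zeros(flux.shape + (4,))
          rgba[..., 0] = np.clip(norm, 0, 1)        # red channel: outgassing (source)
          rgba[..., 2] = np.clip(-norm, 0, 1)       # blue channel: uptake (sink)
          rgba[..., 3] = np.abs(norm)               # opacity: flux magnitude
          return rgba

      # Illustrative 4-D stack: 12 monthly 2-D flux fields extruded into a thin volume.
      monthly_flux = np.random.default_rng(0).normal(0, 4, size=(12, 64, 64))
      volume = np.repeat(monthly_flux[:, None, :, :], 4, axis=1)   # (time, z, y, x)
      rgba_frames = flux_transfer_function(volume)
      print(rgba_frames.shape)                                      # (12, 4, 64, 64, 4)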

  7. The relation of object naming and other visual speech production tasks: a large scale voxel-based morphometric study.

    PubMed

    Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia

    2015-01-01

    We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
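
    The core analysis step, a principal component analysis across the four task scores, can be sketched as follows on simulated data; the loadings structure is illustrative, not the study's result.

      # Sketch: PCA separating a shared component from a naming-specific one (toy data).
      import numpy as np

      rng = np.random.default_rng(1)
      n = 280
      general = rng.standard_normal(n)              # latent shared language ability
      naming  = rng.standard_normal(n)              # latent naming-specific factor

      scores = np.column_stack([
          general + 0.8 * naming + 0.3 * rng.standard_normal(n),   # object naming
          general + 0.3 * rng.standard_normal(n),                  # sentence production
          general + 0.3 * rng.standard_normal(n),                  # sentence reading
          general + 0.3 * rng.standard_normal(n),                  # nonword reading
      ])

      z = (scores - scores.mean(0)) / scores.std(0)
      _, _, vt = np.linalg.svd(z, full_matrices=False)
      print("component 1 loadings (shared):        ", np.round(vt[0], 2))
      print("component 2 loadings (naming-specific):", np.round(vt[1], 2))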

  8. Visual cortex responses reflect temporal structure of continuous quasi-rhythmic sensory stimulation.

    PubMed

    Keitel, Christian; Thut, Gregor; Gross, Joachim

    2017-02-01

    Neural processing of dynamic continuous visual input, and cognitive influences thereon, are frequently studied in paradigms employing strictly rhythmic stimulation. However, the temporal structure of natural stimuli is hardly ever fully rhythmic but possesses certain spectral bandwidths (e.g. lip movements in speech, gestures). Examining periodic brain responses elicited by strictly rhythmic stimulation might thus represent ideal, yet isolated cases. Here, we tested how the visual system reflects quasi-rhythmic stimulation with frequencies continuously varying within ranges of classical theta (4-7 Hz), alpha (8-13 Hz) and beta bands (14-20 Hz) using EEG. Our findings substantiate a systematic and sustained neural phase-locking to stimulation in all three frequency ranges. Further, we found that allocation of spatial attention enhances EEG-stimulus locking to theta- and alpha-band stimulation. Our results bridge recent findings regarding phase locking ("entrainment") to quasi-rhythmic visual input and "frequency-tagging" experiments employing strictly rhythmic stimulation. We propose that sustained EEG-stimulus locking can be considered as a continuous neural signature of processing dynamic sensory input in early visual cortices. Accordingly, EEG-stimulus locking serves to trace the temporal evolution of rhythmic as well as quasi-rhythmic visual input and is subject to attentional bias. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
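
    One plausible way to quantify such EEG-stimulus locking (a sketch of an assumed analysis, not the authors' pipeline) is to band-pass both the stimulus drive and the EEG, extract instantaneous phases with the Hilbert transform, and compute a phase-locking value; all signals and parameters below are simulated.

      # Sketch: phase-locking between a quasi-rhythmic (4-7 Hz) drive and simulated EEG.
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 250.0
      t = np.arange(0, 10, 1 / fs)
      inst_freq = 4 + 3 * np.random.rand(t.size)                 # frequency wanders within 4-7 Hz
      stim = np.sin(2 * np.pi * np.cumsum(inst_freq) / fs)       # quasi-rhythmic stimulus drive
      eeg = 0.5 * np.roll(stim, 10) + np.random.randn(t.size)    # lagged, noisy "response"

      b, a = butter(3, [4 / (fs / 2), 7 / (fs / 2)], btype="band")
      phase_stim = np.angle(hilbert(filtfilt(b, a, stim)))
      phase_eeg  = np.angle(hilbert(filtfilt(b, a, eeg)))

      plv = np.abs(np.mean(np.exp(1j * (phase_eeg - phase_stim))))   # phase-locking value
      print(f"EEG-stimulus phase-locking value: {plv:.2f}")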

  9. Simultaneous density-field visualization and PIV of the Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Prestridge, Katherine; Rightley, Paul; Benjamin, Robert; Kurnit, Norman; Boxx, Isaac; Vorobieff, Peter

    1999-11-01

    We describe a highly detailed experimental characterization of the Richtmyer-Meshkov instability. A vertical curtain of heavy gas (SF_6) flows into the test section of an air-filled, horizontal shock tube, and the instability evolves after the passage of a Mach 1.2 shock past the curtain. The evolution of the curtain is visualized by seeding the SF_6 with small (d ≈ 0.5 μm) glycol/water droplets using a modified theatrical fog generator. Because the event lasts only 1 ms and the initial conditions vary from test to test, rapid and high-resolution (both spatial and temporal) data acquisition is required in order to characterize the initial and dynamic conditions for each experimental event. A customized, frequency-doubled, burst mode Nd:YAG laser and a commercial single-pulse laser are used for the implementation of simultaneous density-field imaging and PIV diagnostics. We have provided data about flow scaling and mixing through image analysis, and PIV data gives us further quantitative physical insight into the evolution of the Richtmyer-Meshkov instability.
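
    The PIV step itself reduces to a cross-correlation between corresponding interrogation windows of two consecutive particle images. The sketch below (illustrative, not the lab's processing code) recovers a known pixel shift from a synthetic particle pattern.

      # Sketch: PIV-style displacement estimate by circular cross-correlation.
      import numpy as np

      def piv_displacement(win_a, win_b):
          """Estimate the (row, col) shift of win_b relative to win_a."""
          corr = np.fft.ifft2(np.fft.fft2(win_b) * np.conj(np.fft.fft2(win_a))).real
          peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
          shape = np.array(corr.shape)
          return np.where(peak > shape // 2, peak - shape, peak)   # wrap to signed shifts

      rng = np.random.default_rng(2)
      img_a = rng.random((64, 64))                                 # toy particle pattern
      img_b = np.roll(img_a, shift=(3, -2), axis=(0, 1))           # same pattern, displaced
      print(piv_displacement(img_a, img_b))                        # -> [ 3 -2 ]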

  10. A pediatric case of pituitary macroadenoma presenting with pituitary apoplexy and cranial nerve involvement: case report

    PubMed Central

    Özçetin, Mustafa; Karacı, Mehmet; Toroslu, Ertuğ; Edebali, Nurullah

    2016-01-01

    Pituitary adenomas usually arise from the anterior lobe of the pituitary gland and are manifested with hormonal disorders or mass effect. Mass effect usually occurs in nonfunctional tumors. Pituitary adenomas may be manifested with visual field defects or rarely in the form of total oculomotor palsy. Visual field defect is most frequently in the form of bitemporal hemianopsia and superior temporal defect. Sudden loss of vision, papilledema and ophthalmoplegia may be observed. Pituitary apoplexy is defined as an acute clinical syndrome characterized with headache, vomiting, loss of vision, ophthalmoplegia and clouding of consciousness. The problem leading to pituitary apoplexy may be decreased blood supply in the adenoma and hemorrhage following this decrease or hemorrhage alone. In this article, we present a patient who presented with fever, vomiting and sudden loss of vision and limited outward gaze in the left eye following trauma and who was found to have pituitary macroadenoma causing compression of the optic chiasma and optic nerve on the left side on cranial and pituitary magnetic resonance imaging. PMID:27738402

  11. Autoimmune neuroretinopathy secondary to Zika virus infection.

    PubMed

    Burgueño-Montañés, C; Álvarez-Coronado, M; Colunga-Cueva, M

    2018-04-29

    A 40-year-old woman was diagnosed with Zika virus infection 6 months before presenting at this hospital. She reported progressive, painless vision loss beginning 2 weeks after the infection was diagnosed. She was treated with topical steroids. Her previous visual acuity recovered, but she still reports a reduced visual field and nyctalopia. Ophthalmologic examination revealed severe retinal sequelae compatible with autoimmune retinopathy. Based on the clinical features and the temporal relationship with Zika virus infection, non-paraneoplastic autoimmune retinopathy was diagnosed and managed with steroids and infliximab. Zika virus can trigger a non-paraneoplastic autoimmune retinopathy. The diagnosis is based on clinical features and requires early immunosuppressive therapy. Copyright © 2018 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  12. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  13. Audio-visual synchrony and spatial attention enhance processing of dynamic visual stimulation independently and in parallel: A frequency-tagging study.

    PubMed

    Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M

    2017-11-01

    The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker-driven SSRs indicated that spatial attention and audio-visual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect facilitating both flicker- and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs) possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
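
    The frequency-tagging readout can be sketched as a simple spectral measurement: the amplitude of the (trial-averaged) EEG at each tagged frequency indexes processing of the corresponding stimulus. The snippet below is an assumed, minimal version of that step with simulated data, not the study's analysis.

      # Sketch: read out SSR amplitudes at the tagged flicker and pulse frequencies.
      import numpy as np

      fs, dur = 500.0, 10.0                          # sampling rate (Hz), epoch length (s)
      t = np.arange(0, dur, 1 / fs)
      eeg = (0.8 * np.sin(2 * np.pi * 14.17 * t)     # toy signal carrying two tagged components
             + 0.5 * np.sin(2 * np.pi * 3.14 * t)
             + np.random.randn(t.size))

      spectrum = 2 * np.abs(np.fft.rfft(eeg)) / t.size
      freqs = np.fft.rfftfreq(t.size, 1 / fs)

      for f in (3.14, 3.63, 14.17, 17.0):
          idx = np.argmin(np.abs(freqs - f))         # nearest bin (0.1 Hz resolution here)
          print(f"amplitude at {f:5.2f} Hz: {spectrum[idx]:.2f}")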

  14. Impact of Audio-Visual Asynchrony on Lip-Reading Effects -Neuromagnetic and Psychophysical Study-

    PubMed Central

    Yahata, Izumi; Kanno, Akitake; Sakamoto, Shuichi; Takanashi, Yoshitaka; Takata, Shiho; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio

    2016-01-01

    The effects of asynchrony between audio and visual (A/V) stimuli on the N100m responses of magnetoencephalography in the left hemisphere were compared with those on the psychophysical responses in 11 participants. The latency and amplitude of N100m were significantly shortened and reduced in the left hemisphere by the presentation of visual speech as long as the temporal asynchrony between A/V stimuli was within 100 ms, but were not significantly affected with audio lags of -500 and +500 ms. However, some small effects were still preserved on average with audio lags of 500 ms, suggesting similar asymmetry of the temporal window to that observed in psychophysical measurements, which tended to be more robust (wider) for audio lags; i.e., the pattern of visual-speech effects as a function of A/V lag observed in the N100m in the left hemisphere grossly resembled that in psychophysical measurements on average, although the individual responses were somewhat varied. The present results suggest that the basic configuration of the temporal window of visual effects on auditory-speech perception could be observed from the early auditory processing stage. PMID:28030631

  15. Temporal expectancy in the context of a theory of visual attention.

    PubMed

    Vangkilde, Signe; Petersen, Anders; Bundesen, Claus

    2013-10-19

    Temporal expectation is expectation with respect to the timing of an event such as the appearance of a certain stimulus. In this paper, temporal expectancy is investigated in the context of the theory of visual attention (TVA), and we begin by summarizing the foundations of this theoretical framework. Next, we present a parametric experiment exploring the effects of temporal expectation on perceptual processing speed in cued single-stimulus letter recognition with unspeeded motor responses. The length of the cue-stimulus foreperiod was exponentially distributed with one of six hazard rates varying between blocks. We hypothesized that this manipulation would result in a distinct temporal expectation in each hazard rate condition. Stimulus exposures were varied such that both the temporal threshold of conscious perception (t0, in ms) and the perceptual processing speed (v, in letters per second) could be estimated using TVA. We found that the temporal threshold t0 was unaffected by temporal expectation, but the perceptual processing speed v was a strikingly linear function of the logarithm of the hazard rate of the stimulus presentation. We argue that the effects on the v values were generated by changes in perceptual biases, suggesting that our perceptual biases are directly related to our temporal expectations.
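
    The reported relation can be written as v = a + b·log(lambda), where lambda is the constant hazard rate of an exponentially distributed foreperiod. A worked numerical sketch with hypothetical hazard rates and coefficients (not the paper's estimates):

      # Sketch: processing speed modelled as linear in the log hazard rate.
      import numpy as np

      hazard_rates = np.array([1/16, 1/8, 1/4, 1/2, 1.0, 2.0])   # per second, one per block (assumed)
      a, b = 50.0, 8.0                                            # intercept and slope (hypothetical)
      v = a + b * np.log(hazard_rates)                            # predicted speed, letters per second

      for lam, speed in zip(hazard_rates, v):
          print(f"hazard {lam:5.3f}/s  mean foreperiod {1 / lam:5.1f} s  v = {speed:4.1f} letters/s")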

  16. Peripheral refraction in normal infant rhesus monkeys

    PubMed Central

    Hung, Li-Fang; Ramamirtham, Ramkumar; Huang, Juan; Qiao-Grider, Ying; Smith, Earl L.

    2008-01-01

    Purpose To characterize peripheral refractions in infant monkeys. Methods Cross-sectional data for horizontal refractions were obtained from 58 normal rhesus monkeys at 3 weeks of age. Longitudinal data were obtained for both the vertical and horizontal meridians from 17 monkeys. Refractive errors were measured by retinoscopy along the pupillary axis and at eccentricities of 15, 30, and 45 degrees. Axial dimensions and corneal power were measured by ultrasonography and keratometry, respectively. Results In infant monkeys, the degree of radial astigmatism increased symmetrically with eccentricity in all meridians. There were, however, initial nasal-temporal and superior-inferior asymmetries in the spherical-equivalent refractive errors. Specifically, the refractions in the temporal and superior fields were similar to the central ametropia, but the refractions in the nasal and inferior fields were more myopic than the central ametropia and the relative nasal field myopia increased with the degree of central hyperopia. With age, the degree of radial astigmatism decreased in all meridians and the refractions became more symmetrical along both the horizontal and vertical meridians; small degrees of relative myopia were evident in all fields. Conclusions As in adult humans, refractive error varied as a function of eccentricity in infant monkeys and the pattern of peripheral refraction varied with the central refractive error. With age, emmetropization occurred for both central and peripheral refractive errors resulting in similar refractions across the central 45 degrees of the visual field, which may reflect the actions of vision-dependent, growth-control mechanisms operating over a wide area of the posterior globe. PMID:18487366

  17. Space and time aliasing structure in monthly mean polar-orbiting satellite data

    NASA Technical Reports Server (NTRS)

    Zeng, Lixin; Levy, Gad

    1995-01-01

    Monthly mean wind fields from the European Remote Sensing Satellite (ERS1) scatterometer are presented. A banded structure which resembles the satellite subtrack is clearly and consistently apparent in the isotachs as well as the u and v components of the routinely produced fields. The structure also appears in the means of data from other polar-orbiting satellites and instruments. An experiment is designed to trace the cause of the banded structure. The European Centre for Medium-Range Weather Forecasts (ECMWF) gridded surface wind analyses are used as a control set. These analyses are also sampled with the ERS1 temporal-spatial sampling pattern to form a simulated scatterometer wind set. Both sets are used to create monthly averages. The banded structures appear in the monthly mean simulated data but do not appear in the control set. It is concluded that the source of the banded structure lies in the spatial and temporal sampling of the polar-orbiting satellite which results in undersampling. The problem involves multiple timescales and space scales, oversampling and undersampling in space, aliasing in the time and space domains, and preferentially sampled variability. It is shown that commonly used spatial smoothers (or filters), while producing visually pleasing results, also significantly bias the true mean. A three-dimensional spatial-temporal interpolator is designed and used to determine the mean field. It is found to produce satisfactory monthly means from both simulated and real ERS1 data. The implications for climate studies involving polar-orbiting satellite data are discussed.
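
    The aliasing mechanism can be illustrated in a few lines (a toy schematic, not the authors' experiment): a time-varying field averaged over all hours differs from the same field averaged only over the hours a polar orbiter happens to sample, and because the sampled hours shift with longitude, the sampling error appears as longitude-dependent (banded) structure. The orbit timing rule below is purely schematic.

      # Toy schematic: fixed-time satellite sampling biases a monthly mean.
      import numpy as np

      hours = np.arange(30 * 24)                                # one month of hourly "truth"
      lons = np.arange(0, 360, 10)
      true_wind = 8 + 2 * np.sin(2 * np.pi * hours / 24)        # m/s, with a 24 h cycle (in UTC)

      control_mean = true_wind.mean()                           # mean from the full record
      sampled_means = []
      for lon in lons:
          overpass_utc = int((10 + lon / 15) % 24)              # overpass hour shifts with longitude (schematic)
          mask = (hours % 24) == overpass_utc                   # one sample per day at that hour
          sampled_means.append(true_wind[mask].mean())

      print(f"control monthly mean: {control_mean:.2f} m/s")
      print("orbit-sampled means by longitude:", np.round(sampled_means, 2)[:6], "...")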

  18. Temporal precision in the visual pathway through the interplay of excitation and stimulus-driven suppression.

    PubMed

    Butts, Daniel A; Weng, Chong; Jin, Jianzhong; Alonso, Jose-Manuel; Paninski, Liam

    2011-08-03

    Visual neurons can respond with extremely precise temporal patterning to visual stimuli that change on much slower time scales. Here, we investigate how the precise timing of cat thalamic spike trains, which can have timing as precise as 1 ms, is related to the stimulus, in the context of both artificial noise and natural visual stimuli. Using a nonlinear modeling framework applied to extracellular data, we demonstrate that the precise timing of thalamic spike trains can be explained by the interplay between an excitatory input and a delayed suppressive input that resembles inhibition, such that neuronal responses only occur in brief windows where excitation exceeds suppression. The resulting description of thalamic computation resembles earlier models of contrast adaptation, suggesting a more general role for mechanisms of contrast adaptation in visual processing. Thus, we describe a more complex computation underlying thalamic responses to artificial and natural stimuli that has implications for understanding how visual information is represented in the early stages of visual processing.
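
    The described computation amounts to a rectified difference between an excitatory drive and a delayed, scaled suppressive copy of it; a minimal sketch follows, with an illustrative filter and parameters rather than the fitted model from the paper.

      # Sketch: excitation minus delayed suppression confines the drive to brief windows.
      import numpy as np

      fs = 1000                                                  # 1 ms resolution
      stimulus = np.random.randn(500)                            # toy luminance noise
      kernel = np.exp(-np.arange(0, 0.05, 1 / fs) / 0.01)        # simple excitatory temporal filter

      excitation = np.convolve(stimulus, kernel, mode="same")
      delay = 5                                                  # suppression lags by ~5 ms
      suppression = 0.9 * np.roll(excitation, delay)
      suppression[:delay] = 0.0

      drive = np.maximum(excitation - suppression, 0.0)          # rectified push-pull drive
      print("fraction of time with non-zero drive:", round(float((drive > 0).mean()), 2))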

  19. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  20. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
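
    As a toy illustration in the spirit of the metrics discussed in the two preceding entries (not the metric presented there), per-frame errors between a reference and a compressed clip can be weighted by a crude temporal-masking factor and pooled over frames; every weighting choice below is an assumption.

      # Toy spatio-temporal error pooling for video quality (lower score = better).
      import numpy as np

      def toy_video_quality(reference, test, temporal_weight=0.5):
          """reference, test: arrays of shape (frames, height, width), values in 0..1."""
          err = reference - test
          spatial = np.sqrt((err ** 2).mean(axis=(1, 2)))            # per-frame RMS error
          motion = np.abs(np.diff(reference, axis=0)).mean(axis=(1, 2))
          motion = np.r_[motion[:1], motion]                          # pad to match frame count
          visibility = 1.0 / (1.0 + temporal_weight * 50 * motion)    # crude temporal masking
          return float((spatial * visibility).mean())

      rng = np.random.default_rng(3)
      ref = rng.random((30, 32, 32))
      test = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
      print("toy quality score:", round(toy_video_quality(ref, test), 4))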
