Paintings, photographs, and computer graphics are calculated appearances
NASA Astrophysics Data System (ADS)
McCann, John
2012-03-01
Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.
Abadie, S; Jardet, C; Colombelli, J; Chaput, B; David, A; Grolleau, J-L; Bedos, P; Lobjois, V; Descargues, P; Rouquette, J
2018-05-01
Human skin is composed of the superimposition of tissue layers of various thicknesses and components. Histological staining of skin sections is the benchmark approach to analyse the organization and integrity of human skin biopsies; however, this approach does not allow 3D tissue visualization. Alternatively, confocal or two-photon microscopy is an effective approach to perform fluorescence-based 3D imaging. However, owing to light scattering, these methods suffer from limited light penetration in depth. The objectives of this study were therefore to combine optical clearing and light-sheet fluorescence microscopy (LSFM) to perform in-depth optical sectioning of 5 mm-thick human skin biopsies and generate 3D images of entire human skin biopsies. A benzyl alcohol and benzyl benzoate solution was used to successfully optically clear entire formalin-fixed human skin biopsies, making them transparent. In-depth optical sectioning was performed with LSFM on the basis of tissue-autofluorescence observations. 3D image analysis of optical sections generated with LSFM was performed using the Amira® software. This new approach allowed us to observe in situ the different layers and compartments of human skin, such as the stratum corneum, the dermis and epidermal appendages. With this approach, we easily performed 3D reconstruction to visualise an entire human skin biopsy. Finally, we demonstrated that this method is useful to visualise and quantify histological anomalies, such as epidermal hyperplasia. The combination of optical clearing and LSFM has new applications in dermatology and dermatological research by allowing 3D visualization and analysis of whole human skin biopsies. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Multimodal imaging of the human knee down to the cellular level
NASA Astrophysics Data System (ADS)
Schulz, G.; Götz, C.; Müller-Gerbl, M.; Zanette, I.; Zdora, M.-C.; Khimchenko, A.; Deyhle, H.; Thalmann, P.; Müller, B.
2017-06-01
Computed tomography reaches the best spatial resolution for the three-dimensional visualization of human tissues among the available nondestructive clinical imaging techniques. Nowadays, sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based micro-CT (μCT) has become the gold standard. The aims of the present study were, first, the hierarchical investigation of a human knee post mortem using hard X-ray μCT and, second, multimodal imaging using absorption and phase contrast modes in order to investigate hard (bone) and soft (cartilage) tissues at the cellular level. After visualization of the entire knee using a clinical CT, a hierarchical imaging study was performed using the lab-system nanotom® m. First, the entire knee was measured with a pixel length of 65 μm. The highest resolution, with a pixel length of 3 μm, could be achieved after extracting cylindrically shaped plugs from the femoral bones. For the visualization of the cartilage, grating-based phase contrast μCT (I13-2, Diamond Light Source) was performed. With an effective voxel size of 2.3 μm it was possible to visualize individual chondrocytes within the cartilage.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2015-01-01
The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure, it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course with teachers from an undergraduate medical program, applying visual analytics methods. We found that visual analytics can positively impact analytical reasoning and decision making in medical education by realizing variables capable of enhancing human perception and cognition of complex curriculum data. The positive results derived from our small-scale evaluation of a medical curriculum signify the need to extend this method to an entire medical curriculum. As our approach sustains low levels of complexity, it opens a promising new direction in medical education informatics research.
Visual Homing in the Absence of Feature-Based Landmark Information
ERIC Educational Resources Information Center
Gillner, Sabine; Weiss, Anja M.; Mallot, Hanspeter A.
2008-01-01
Despite the fact that landmarks play a prominent role in human navigation, experimental evidence on how landmarks are selected and defined by human navigators remains elusive. Indeed, the concept of a "landmark" is itself not entirely clear. In everyday language, the term landmark refers to salient, distinguishable, and usually nameable objects,…
Are neural correlates of visual consciousness retinotopic?
ffytche, Dominic H; Pins, Delphine
2003-11-14
Some visual neurons code what we see, their defining characteristic being a response profile which mirrors conscious percepts rather than veridical sensory attributes. One issue yet to be resolved is whether, within a given cortical area, conscious visual perception relates to diffuse activity across the entire population of such cells or focal activity within the sub-population mapping the location of the perceived stimulus. Here we investigate the issue in the human brain with fMRI, using a threshold stimulation technique to dissociate perceptual from non-perceptual activity. Our results point to a retinotopic organisation of perceptual activity in early visual areas, with independent perceptual activations for different regions of visual space.
A transparently scalable visualization architecture for exploring the universe.
Fu, Chi-Wing; Hanson, Andrew J
2007-01-01
Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
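The power scaled coordinates (PSC) representation mentioned above can be sketched in a few lines: a vector is stored as a near-unit-magnitude triple plus a scale exponent, so metre-scale and cosmological-scale geometry coexist without floating-point overflow. This is a minimal illustration, not the authors' implementation; the base K and the helper names are assumptions.

```python
import math

K = 10.0  # scale base (an assumption; any base > 1 works)

def psc_normalize(x, y, z, s):
    """Rescale so the largest component of (x, y, z) has magnitude near 1,
    moving the remaining magnitude into the scale exponent s."""
    m = max(abs(x), abs(y), abs(z))
    if m == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    shift = math.log(m, K)
    f = K ** (-shift)
    return (x * f, y * f, z * f, s + shift)

def psc_to_physical(p):
    """Expand a PSC 4-tuple back to an ordinary vector (may overflow for huge s)."""
    x, y, z, s = p
    f = K ** s
    return (x * f, y * f, z * f)

# A distance of 3,000,000 m stored compactly; the magnitude lives in the exponent.
p = psc_normalize(3_000_000.0, 0.0, 0.0, 0.0)
v = psc_to_physical(p)
```

Rendering and modeling can then operate on the well-conditioned mantissa, comparing or combining exponents separately, which is the essence of the scale-neutral framework the abstract describes.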
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
Morvan, Camille; Maloney, Laurence T.
2012-01-01
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428
Destabilizing effects of visual environment motions simulating eye movements or head movements
NASA Technical Reports Server (NTRS)
White, Keith D.; Shuman, D.; Krantz, J. H.; Woods, C. B.; Kuntz, L. A.
1991-01-01
In the present paper, we explore effects on the human of exposure to a visual virtual environment which has been enslaved to simulate the human user's head movements or eye movements. Specifically, we have studied the capacity of our experimental subjects to maintain stable spatial orientation in the context of moving their entire visible surroundings by using the parameters of the subjects' natural movements. Our index of the subjects' spatial orientation was the extent of involuntary sways of the body while attempting to stand still, as measured by translations and rotations of the head. We also observed, informally, their symptoms of motion sickness.
Aesthetic Response and Cosmic Aesthetic Distance
NASA Astrophysics Data System (ADS)
Madacsi, D.
2013-04-01
For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically-enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To directly compare the performance of humans with that of other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Attraction of position preference by spatial attention throughout human visual cortex.
Klein, Barrie P; Harvey, Ben M; Dumoulin, Serge O
2014-10-01
Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an attention-demanding task at different locations. We show that spatial attention attracts pRF preferred positions across the entire visual field, not just at the attended location. This global change in pRF preferred positions systematically increases up the visual hierarchy. We model these pRF preferred position changes as an interaction between two components: an attention field and a pRF without the influence of attention. This computational model suggests that increasing effects of attention up the hierarchy result primarily from differences in pRF size and that the attention field is similar across the visual hierarchy. A similar attention field suggests that spatial attention transforms different neural response selectivities throughout the visual hierarchy in a similar manner. Copyright © 2014 Elsevier Inc. All rights reserved.
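The two-component account above (an attention field interacting with an attention-free pRF) can be illustrated with a toy computation. If both components are taken as 1-D Gaussians and the measured pRF as their product, the attracted preferred position has a closed form, and the same attention field shifts larger pRFs more strongly. This is an illustrative sketch under that Gaussian-product assumption, not the paper's fitted model, and all parameter values are made up.

```python
def attracted_center(prf_center, prf_sigma, attn_center, attn_sigma):
    """Preferred position of the product of two 1-D Gaussians:
    the pRF without attention and the attention field."""
    wp = 1.0 / prf_sigma ** 2    # precision of the pRF
    wa = 1.0 / attn_sigma ** 2   # precision of the attention field
    return (prf_center * wp + attn_center * wa) / (wp + wa)

# Same attention field (centered at 5 deg), two pRF sizes: the larger pRF
# (as in a later visual area) is attracted further toward the attended location.
early = attracted_center(0.0, prf_sigma=1.0, attn_center=5.0, attn_sigma=3.0)
late = attracted_center(0.0, prf_sigma=4.0, attn_center=5.0, attn_sigma=3.0)
```

The increasing shift with pRF size mirrors the abstract's finding that attraction grows up the visual hierarchy even when the attention field itself is similar across areas.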
González-Sansón, Gaspar; Aguilar, Consuelo; Hernández, Ivet; Cabrera, Yureidy; Suarez-Montes, Noelis; Bretos, Fernando; Guggenheim, David
2009-09-01
The main goal of the study was to obtain field data to build a baseline of fish assemblage composition that can be used comparatively for future analyses of the impact of human actions in the region. A basic network of 68 sampling stations was defined for the entire region (4,050 km2). Fish assemblage species and size composition was estimated using visual census methods at three different spatial scales: a) entire region, b) inside the main reef area and c) along a human impact coastal gradient. Multivariate numerical analyses revealed habitat type as the main factor inducing spatial variability of fish community composition, while the level of human impact appears to play the main role in fish assemblage composition changes along the coast. A trend of decreasing fish size toward the east supports the theory of more severe human impact due to overfishing and higher urban pollution in that direction. This is the first detailed study along the northwest coast of Cuba that focuses on fish community structure and the natural and human-induced variations at different spatial scales for the entire NW shelf. This research also provides input for a more comprehensive understanding of coastal marine fish communities' status in the Gulf of Mexico basin.
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery. © 2018, Dijkstra et al.
Transient cardio-respiratory responses to visually induced tilt illusions
NASA Technical Reports Server (NTRS)
Wood, S. J.; Ramsdell, C. D.; Mullen, T. J.; Oman, C. M.; Harm, D. L.; Paloski, W. H.
2000-01-01
Although the orthostatic cardio-respiratory response is primarily mediated by the baroreflex, studies have shown that vestibular cues also contribute in both humans and animals. We have demonstrated a visually mediated response to illusory tilt in some human subjects. Blood pressure, heart and respiration rate, and lung volume were monitored in 16 supine human subjects during two types of visual stimulation, and compared with responses to real passive whole body tilt from supine to head 80 degrees upright. Visual tilt stimuli consisted of either a static scene from an overhead mirror or constant velocity scene motion along different body axes generated by an ultra-wide dome projection system. Visual vertical cues were initially aligned with the longitudinal body axis. Subjective tilt and self-motion were reported verbally. Although significant changes in cardio-respiratory parameters to illusory tilts could not be demonstrated for the entire group, several subjects showed significant transient decreases in mean blood pressure resembling their initial response to passive head-up tilt. Changes in pulse pressure and a slight elevation in heart rate were noted. These transient responses are consistent with the hypothesis that visual-vestibular input contributes to the initial cardiovascular adjustment to a change in posture in humans. On average the static scene elicited perceived tilt without rotation. Dome scene pitch and yaw elicited perceived tilt and rotation, and dome roll motion elicited perceived rotation without tilt. A significant correlation between the magnitude of physiological and subjective reports could not be demonstrated.
Brain processing of visual information during fast eye movements maintains motor performance.
Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis
2013-01-01
Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.
Human performance in the modern cockpit
NASA Technical Reports Server (NTRS)
Dismukes, R. K.; Cohen, M. M.
1992-01-01
This panel was organized by the Aerospace Human Factors Committee to illustrate behavioral research on the perceptual, cognitive, and group processes that determine crew effectiveness in modern cockpits. Crew reactions to the introduction of highly automated systems in the cockpit will be reported on. Automation can improve operational capabilities and efficiency and can reduce some types of human error, but may also introduce entirely new opportunities for error. The problem solving and decision making strategies used by crews led by captains with various personality profiles will be discussed. Also presented will be computational approaches to modeling the cognitive demands of cockpit operations and the cognitive capabilities and limitations of crew members. Factors contributing to aircrew deviations from standard operating procedures and misuse of checklists, often leading to violations, incidents, or accidents, will be examined. The mechanisms of visual perception pilots use in aircraft control and the implications of these mechanisms for effective design of visual displays will be discussed.
Visual Prediction Error Spreads Across Object Features in Human Visual Cortex
Summerfield, Christopher; Egner, Tobias
2016-01-01
Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? 
By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936
NASA Astrophysics Data System (ADS)
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude
2018-02-01
Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with a high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate numerical aperture (NA) objectives and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and to visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, anterior, middle and posterior stroma, and endothelial cells with nuclei. Dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. The cellular-level details in the images obtained, together with the relatively large field-of-view (FOV) and contactless imaging, make this device a promising candidate for becoming a new tool in ophthalmological diagnostics.
Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard
2015-01-01
Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
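The two per-segment statistics the interface reports, nucleotide composition frequencies and GC skew, are simple to compute. The sketch below is an illustration of those standard definitions (GC skew conventionally being (G − C)/(G + C)), not the described software.

```python
from collections import Counter

def composition(seq):
    """Relative frequency of each nucleotide in a sequence segment."""
    seq = seq.upper()
    counts = Counter(seq)
    total = len(seq)
    return {base: counts.get(base, 0) / total for base in "ACGT"}

def gc_skew(seq):
    """(G - C) / (G + C) for the segment; 0.0 when the segment has no G or C."""
    seq = seq.upper()
    g, c = seq.count("G"), seq.count("C")
    return (g - c) / (g + c) if g + c else 0.0

segment = "ATGGCGTAGGCT"
freqs = composition(segment)   # freqs["G"] is 5/12 for this segment
skew = gc_skew(segment)        # (5 - 2) / (5 + 2) = 3/7
```

Sliding these functions over windows of a chromosome yields the composition and skew profiles in which features such as replication origins and repeat-rich regions become visually detectable.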
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
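The clustering of category representations described above can be sketched with toy data: each category is represented by a cortical response pattern, patterns are compared by correlation distance (1 − Pearson r), and single-linkage clustering at a cutoff groups similar categories. This is an illustrative sketch with hypothetical response vectors, not the study's actual analysis pipeline.

```python
import math

def corr_distance(u, v):
    """1 - Pearson correlation between two response patterns."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return 1.0 - cov / (su * sv)

def cluster(patterns, cutoff):
    """Single-linkage clusters: connected components of the graph whose
    edges are pattern pairs with distance below the cutoff."""
    labels = list(range(len(patterns)))

    def find(i):
        while labels[i] != i:
            i = labels[i]
        return i

    for i in range(len(patterns)):
        for j in range(i + 1, len(patterns)):
            if corr_distance(patterns[i], patterns[j]) < cutoff:
                labels[find(i)] = find(j)
    return [find(i) for i in range(len(patterns))]

# Hypothetical response patterns: two "biological" categories correlate with
# each other, two "scene" categories likewise, and the groups anti-correlate.
patterns = [
    [1.0, 2.0, 3.0, 4.0],   # biological A
    [1.1, 2.1, 2.9, 4.2],   # biological B
    [4.0, 3.0, 2.0, 1.0],   # scene A
    [3.9, 3.2, 1.8, 1.1],   # scene B
]
labels = cluster(patterns, cutoff=0.5)
# biological categories share a label, scenes share another
```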
Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.
2016-01-01
Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions, including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-05-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical cues are critical in orienting infants' visual attention towards a peripheral region of space that is congruent with the number's relative position on a left-to-right oriented representational continuum. This finding provides the first direct evidence that, in humans, the association between numbers and oriented spatial codes occurs before the acquisition of symbols or exposure to formal education, suggesting that the number line is not merely a product of human invention. © 2015 John Wiley & Sons Ltd.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimulus features and redirects foveal attention to new objects, it can also take over functions typical of central vision. Here I review data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though afferent projections from central and peripheral retinal regions are known not to develop simultaneously during early postnatal life, central vision is commonly used as a general model of the development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz
2017-01-01
To calculate root canal volume and surface area in micro-CT images, image segmentation by selecting threshold values is required; thresholds can be determined visually or automatically. Visual determination is influenced by the operator's visual acuity, while the automatic method is performed entirely by computer algorithms. The aims of this study were to compare visual and automatic segmentation and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods for root canal volume measurements (p=0.93) or root canal surface area measurements (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
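The automatic thresholding step can be sketched in outline. The manufacturer's "Automatic Threshold Tool" is proprietary and its algorithm is not described in the abstract, so this minimal Python sketch assumes a classic isodata-style iterative threshold and a hypothetical voxel size; the volume is simply the count of above-threshold voxels times the volume of one voxel:

```python
def isodata_threshold(intensities, tol=0.5):
    """Iterative (isodata) automatic threshold: start at the global mean,
    then repeatedly move the threshold to the midpoint of the means of the
    two intensity classes it induces, until it stabilizes."""
    t = sum(intensities) / len(intensities)
    while True:
        low = [v for v in intensities if v <= t]
        high = [v for v in intensities if v > t]
        if not low or not high:
            return t
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def canal_volume(intensities, threshold, voxel_volume_mm3):
    """Volume = number of above-threshold voxels x volume of one voxel."""
    return sum(1 for v in intensities if v > threshold) * voxel_volume_mm3

# Toy scan: background voxels near 30, canal voxels near 200.
voxels = [28, 30, 32, 29, 31, 198, 202, 200, 199, 201]
t = isodata_threshold(voxels)        # lands between the two classes
vol = canal_volume(voxels, t, 8e-6)  # assuming (20 um)^3 voxels, in mm^3
```

Surface area would be computed analogously from the boundary faces of the binary mask; the principle at stake in the study is only that the threshold feeding this calculation is operator-independent.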
Virtual reality: a reality for future military pilotage?
NASA Astrophysics Data System (ADS)
McIntire, John P.; Martinsen, Gary L.; Marasco, Peter L.; Havig, Paul R.
2009-05-01
Virtual reality (VR) systems provide exciting new ways to interact with information and with the world. The visual VR environment can be synthetic (computer generated) or an indirect view of the real world through sensors and displays. Given the potential opportunities of a VR system, the question arises of what benefits or detriments a military pilot might incur by operating in such an environment. Immersive and compelling VR displays could be accomplished with an HMD (e.g., imagery on the visor), large-area collimated displays, or by putting the imagery on an opaque canopy. But what issues arise when, instead of viewing the world directly, a pilot views a "virtual" image of the world? Is 20/20 visual acuity in a VR system good enough? Delivering this acuity over the entire visual field would require over 43 megapixels (MP) of display surface for an HMD, or about 150 MP for an immersive CAVE system, either of which presents a serious challenge with current technology. Additionally, the same number of sensor pixels would be required to drive the displays at this resolution (along with formidable network architectures to relay the information), or massive computer clusters would be needed to create an entirely computer-generated virtual reality at this resolution. Can we presently implement such a system? What other visual requirements or engineering issues should be considered? With the evolving technology, there are many technological issues and human factors considerations that need to be addressed before a pilot is placed within a virtual cockpit.
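The display-resolution arithmetic can be reproduced with a back-of-the-envelope calculation: 20/20 acuity corresponds to resolving about 1 arcmin per pixel, i.e. roughly 60 pixels per degree in each direction. The fields of view below are illustrative assumptions chosen only to show how figures of the quoted magnitude arise, not values taken from the paper:

```python
def megapixels(h_fov_deg, v_fov_deg, arcmin_per_pixel=1.0):
    """Pixels needed to deliver a given angular resolution over a field of
    view. 20/20 acuity resolves about 1 arcmin per pixel, i.e. ~60 pixels
    per degree in each direction."""
    px_h = h_fov_deg * 60.0 / arcmin_per_pixel
    px_v = v_fov_deg * 60.0 / arcmin_per_pixel
    return px_h * px_v / 1e6

# Illustrative fields of view (assumed, not taken from the paper):
hmd = megapixels(120, 100)    # ~full binocular field on an HMD
cave = megapixels(360, 115)   # wrap-around CAVE display surface
```

With these assumed fields of view, the HMD needs about 43 MP and the wrap-around CAVE about 149 MP, consistent in magnitude with the "over 43 MP" and "about 150 MP" figures quoted above.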
Mapping human preictal and ictal haemodynamic networks using simultaneous intracranial EEG-fMRI
Chaudhary, Umair J.; Centeno, Maria; Thornton, Rachel C.; Rodionov, Roman; Vulliemoz, Serge; McEvoy, Andrew W.; Diehl, Beate; Walker, Matthew C.; Duncan, John S.; Carmichael, David W.; Lemieux, Louis
2016-01-01
Accurately characterising the brain networks involved in seizure activity may have important implications for our understanding of epilepsy. Intracranial EEG-fMRI can be used to capture focal epileptic events in humans with exquisite electrophysiological sensitivity, and allows brain structures involved in this phenomenon to be identified across the entire brain. We investigated ictal BOLD networks using simultaneous intracranial EEG-fMRI (icEEG-fMRI) in a 30-year-old male undergoing invasive presurgical evaluation, with bilateral depth electrodes implanted in the amygdalae and hippocampi, for refractory temporal lobe epilepsy. One spontaneous focal electrographic seizure was recorded. The aims of the data analysis were firstly to map BOLD changes related to the ictal activity identified on icEEG and secondly to compare different fMRI modelling approaches. Visual inspection of the icEEG showed an onset dominated by beta activity involving the right amygdala and hippocampus lasting 6.4 s (ictal onset phase), followed by bilateral gamma activity lasting 14.8 s (late ictal phase). The fMRI data were analysed in SPM8 using two modelling approaches: the first based purely on the visually identified phases of the seizure, the second on quantification of the EEG spectral dynamics. For the visual approach, the two ictal phases were modelled as ‘ON’ blocks convolved with the haemodynamic response function; in addition, the BOLD changes during the 30 s preceding the onset were modelled using a flexible basis set. For the quantitative fMRI modelling approach, two models were evaluated: one consisting of the variations in beta- and gamma-band power, thereby adding a quantitative element to the visually derived models, and another based on principal component analysis of the entire spectrogram, in an attempt to reduce the bias associated with the visual appreciation of the icEEG.
BOLD changes related to the visually defined ictal onset phase were revealed in the medial and lateral right temporal lobe. For the late ictal phase, the BOLD changes were remote from the SOZ and in deep brain areas (precuneus, posterior cingulate and others). The two quantitative models revealed BOLD changes involving the right hippocampus, amygdala and fusiform gyrus and in remote deep brain structures and the default mode network-related areas. In conclusion, icEEG-fMRI allowed us to reveal BOLD changes within and beyond the SOZ linked to very localised ictal fluctuations in beta and gamma activity measured in the amygdala and hippocampus. Furthermore, the BOLD changes within the SOZ structures were better captured by the quantitative models, highlighting the interest in considering seizure-related EEG fluctuations across the entire spectrum. PMID:27114897
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
van Eijk, Ruben P A; van der Zwan, Albert; Bleys, Ronald L A W; Regli, Luca; Esposito, Giuseppe
2015-12-01
Postmortem CT angiography is a common procedure used to visualize the entire human vasculature. For visualization of a specific organ's vascular anatomy, casting is the preferred method. Because of the permanent and damaging nature of casting, the organ cannot be further used as an experimental model after angiography. Therefore, there is a need for a minimally traumatic method to visualize organ-specific vascular anatomy. The purpose of this study was to develop and evaluate a contrast enhancement technique that is capable of visualizing the intracranial vascular anatomy while preserving the anatomic integrity in cadaver heads. Seven human heads were used in this study. Heads were prepared by cannulating the vertebral and internal carotid arteries. Contrast agent was injected as a mixture of tap water, polyethylene glycol 600, and an iodinated contrast agent. Postmortem imaging was executed on a 64-MDCT scanner. Primary image review and 3D reconstruction were performed on a CT workstation. Clear visualization of the major cerebral arteries and smaller intracranial branches was achieved. Adequate visualization was obtained for both the anterior and posterior intracranial circulation. The minimally traumatic angiography method preserved the vascular integrity of the cadaver heads. A novel application of postmortem CT angiography is presented here. The technique can be used for radiologic evaluation of the intracranial circulation in cadaver heads. After CT angiography, the specimen can be used for further experimental or laboratory testing and teaching purposes.
Attention modulates spatial priority maps in the human occipital, parietal and frontal cortices
Sprague, Thomas C.; Serences, John T.
2014-01-01
Computational theories propose that attention modulates the topographical landscape of spatial ‘priority’ maps in regions of visual cortex so that the location of an important object is associated with higher activation levels. While single-unit recording studies have demonstrated attention-related increases in the gain of neural responses and changes in the size of spatial receptive fields, the net effect of these modulations on the topography of region-level priority maps has not been investigated. Here, we used fMRI and a multivariate encoding model to reconstruct spatial representations of attended and ignored stimuli using activation patterns across entire visual areas. These reconstructed spatial representations reveal the influence of attention on the amplitude and size of stimulus representations within putative priority maps across the visual hierarchy. Our results suggest that attention increases the amplitude of stimulus representations in these spatial maps, particularly in higher visual areas, but does not substantively change their size. PMID:24212672
Solano-Román, Antonio; Alfaro-Arias, Verónica; Cruz-Castillo, Carlos; Orozco-Solano, Allan
2018-03-15
VizGVar was designed to meet the research community's growing need for improved genomic and proteomic data viewers that benefit from better information visualization. We implemented a new information architecture and applied user-centered design principles to provide an improved way of visualizing genetic information and protein data related to human disease. VizGVar connects the entire Ensembl database of protein motifs, domains, genes and exons with annotated SNPs and somatic variations from PharmGKB and COSMIC. VizGVar precisely represents genetic variations and their respective locations by colored curves that designate different types of variation. The structured hierarchy of biological data is reflected in aggregated patterns across different levels, integrating several layers of information at once. VizGVar provides a new interactive, web-based JavaScript visualization of somatic mutations and protein variation, enabling fast and easy discovery of clinically relevant variation patterns. VizGVar is accessible at http://vizport.io/vizgvar; http://vizport.io/vizgvar/doc/. asolano@broadinstitute.org or allan.orozcosolano@ucr.ac.cr.
Perisaccadic Receptive Field Expansion in the Lateral Intraparietal Area.
Wang, Xiaolan; Fung, C C Alan; Guan, Shaobo; Wu, Si; Goldberg, Michael E; Zhang, Mingsha
2016-04-20
Humans and monkeys have access to an accurate representation of visual space despite a constantly moving eye. One mechanism by which the brain accomplishes this is by remapping visual receptive fields around the time of a saccade. In this process, a neuron can be excited by a probe stimulus in the current receptive field and, simultaneously, by a probe stimulus in the location that will be brought into the neuron's receptive field by the saccade (the future receptive field), even before the saccade begins. Here we show that perisaccadic neuronal excitability is not limited to the current and future receptive fields but encompasses the entire region of visual space across which the current receptive field will be swept by the saccade. A computational model shows that this receptive field expansion is consistent with the propagation of a wave of activity across the cerebral cortex as saccade planning and remapping proceed. Copyright © 2016 Elsevier Inc. All rights reserved.
Beta oscillations define discrete perceptual cycles in the somatosensory domain.
Baumgarten, Thomas J; Schnitzler, Alfons; Lange, Joachim
2015-09-29
Whether seeing a movie, listening to a song, or feeling a breeze on the skin, we coherently experience these stimuli as continuous, seamless percepts. However, there are rare perceptual phenomena that argue against continuous perception but, instead, suggest discrete processing of sensory input. Empirical evidence supporting such a discrete mechanism, however, remains scarce and comes entirely from the visual domain. Here, we demonstrate compelling evidence for discrete perceptual sampling in the somatosensory domain. Using magnetoencephalography (MEG) and a tactile temporal discrimination task in humans, we find that oscillatory alpha- and low beta-band (8-20 Hz) cycles in primary somatosensory cortex represent neurophysiological correlates of discrete perceptual cycles. Our results agree with several theoretical concepts of discrete perceptual sampling and empirical evidence of perceptual cycles in the visual domain. Critically, these results show that discrete perceptual cycles are not domain-specific, and thus restricted to the visual domain, but extend to the somatosensory domain.
Verticality perception during and after galvanic vestibular stimulation.
Volkening, Katharina; Bergmann, Jeannine; Keller, Ingo; Wuehr, Max; Müller, Friedemann; Jahn, Klaus
2014-10-03
The human brain constructs verticality perception by integrating vestibular, somatosensory, and visual information. Here we investigated whether galvanic vestibular stimulation (GVS) has an effect on verticality perception both during and after application, by assessing the subjective verticals (visual, haptic and postural) in healthy subjects at those times. During stimulation the subjective visual vertical and the subjective haptic vertical shifted towards the anode, whereas this shift was reversed towards the cathode in all modalities once stimulation was turned off. Overall, the effects were strongest for the haptic modality. Additional investigation of the time course of GVS-induced changes in the haptic vertical revealed that anodal shifts persisted for the entire 20-min stimulation interval in the majority of subjects. Aftereffects exhibited different types of decay, with a preponderance for an exponential decay. The existence of such reverse effects after stimulation could have implications for GVS-based therapy. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Vividness of Visual Imagery Depends on the Neural Overlap with Perception in Visual Areas.
Dijkstra, Nadine; Bosch, Sander E; van Gerven, Marcel A J
2017-02-01
Research into the neural correlates of individual differences in imagery vividness points to an important role of the early visual cortex. However, vividness also fluctuates greatly within individuals, so that looking only at differences between people necessarily obscures the picture. In this study, we show that variation in the moment-to-moment experienced vividness of visual imagery, within human subjects, depends on the activity of a large network of brain areas, including frontal, parietal, and visual areas. Furthermore, using a novel multivariate analysis technique, we show that the neural overlap between imagery and perception in the entire visual system correlates with experienced imagery vividness. This shows that the neural basis of imagery vividness is much more complicated than studies of individual differences seemed to suggest. Visual imagery is the ability to visualize objects that are not in our direct line of sight: something that is important for memory, spatial reasoning, and many other tasks. It is known that the better people are at visual imagery, the better they can perform these tasks. However, the neural correlates of moment-to-moment variation in visual imagery remain unclear. In this study, we show that the more the neural response during imagery resembles the neural response during perception, the more vivid or perception-like the imagery experience is. Copyright © 2017 the authors 0270-6474/17/371367-07$15.00/0.
Feature-Selective Attentional Modulations in Human Frontoparietal Cortex.
Ester, Edward F; Sutterer, David W; Serences, John T; Awh, Edward
2016-08-03
Control over visual selection has long been framed in terms of a dichotomy between "source" and "site," where top-down feedback signals originating in frontoparietal cortical areas modulate or bias sensory processing in posterior visual areas. This distinction is motivated in part by observations that frontoparietal cortical areas encode task-level variables (e.g., what stimulus is currently relevant or what motor outputs are appropriate), while posterior sensory areas encode continuous or analog feature representations. Here, we present evidence that challenges this distinction. We used fMRI, a roving searchlight analysis, and an inverted encoding model to examine representations of an elementary feature property (orientation) across the entire human cortical sheet while participants attended either the orientation or luminance of a peripheral grating. Orientation-selective representations were present in a multitude of visual, parietal, and prefrontal cortical areas, including portions of the medial occipital cortex, the lateral parietal cortex, and the superior precentral sulcus (thought to contain the human homolog of the macaque frontal eye fields). Additionally, representations in many, but not all, of these regions were stronger when participants were instructed to attend orientation relative to luminance. Collectively, these findings challenge models that posit a strict segregation between sources and sites of attentional control on the basis of representational properties by demonstrating that simple feature values are encoded by cortical regions throughout the visual processing hierarchy, and that representations in many of these areas are modulated by attention. Influential models of visual attention posit a distinction between top-down control and bottom-up sensory processing networks. 
These models are motivated in part by demonstrations showing that frontoparietal cortical areas associated with top-down control represent abstract or categorical stimulus information, while visual areas encode parametric feature information. Here, we show that multivariate activity in human visual, parietal, and frontal cortical areas encodes representations of a simple feature property (orientation). Moreover, representations in several (though not all) of these areas were modulated by feature-based attention in a similar fashion. These results provide an important challenge to models that posit dissociable top-down control and sensory processing networks on the basis of representational properties. Copyright © 2016 the authors 0270-6474/16/368188-12$15.00/0.
An interactive framework for acquiring vision models of 3-D objects from 2-D images.
Motai, Yuichi; Kak, Avinash
2004-02-01
This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of the confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We show results on both polygonal objects and objects containing curved features.
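The epipolar-line assistance described above rests on the epipolar constraint: given the fundamental matrix F of a stereo pair, a correct correspondence (x, x') satisfies x'ᵀFx = 0, so x' must lie on the line l' = Fx, and its distance from that line can flag likely marking errors. A minimal sketch, using a hypothetical F for a rectified stereo pair (not the paper's actual setup):

```python
def epipolar_line(F, x):
    """l' = F x: the epipolar line in image 2 for point x in image 1
    (homogeneous coordinates; l' = (a, b, c) is the line a*u + b*v + c = 0)."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def point_line_distance(l, x):
    """Perpendicular distance of homogeneous image point x from line l."""
    a, b, c = l
    u, v, w = x
    return abs(a * u / w + b * v / w + c) / (a * a + b * b) ** 0.5

# Hypothetical fundamental matrix of a rectified stereo pair: a match for
# a point must then lie on the same image row.
F = [[0, 0, 0],
     [0, 0, -1],
     [0, 1, 0]]
x = [100.0, 50.0, 1.0]         # point marked in image 1
l2 = epipolar_line(F, x)       # the line v = 50 in image 2
good = [130.0, 50.0, 1.0]      # candidate on the epipolar line: accept
bad = [130.0, 58.0, 1.0]       # candidate 8 px off the line: flag for review
```

In an interactive tool of the kind the paper describes, a correspondence whose distance exceeds a small pixel tolerance would be rejected or highlighted for the user to re-mark.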
Cross-orientation suppression in human visual cortex
Heeger, David J.
2011-01-01
Cross-orientation suppression was measured in human primary visual cortex (V1) to test the normalization model. Subjects viewed vertical target gratings (of varying contrasts) with or without a superimposed horizontal mask grating (fixed contrast). We used functional magnetic resonance imaging (fMRI) to measure the activity in each of several hypothetical channels (corresponding to subpopulations of neurons) with different orientation tunings and fit these orientation-selective responses with the normalization model. For the V1 channel maximally tuned to the target orientation, responses increased with target contrast but were suppressed when the horizontal mask was added, evident as a shift in the contrast gain of this channel's responses. For the channel maximally tuned to the mask orientation, a constant baseline response was evoked for all target contrasts when the mask was absent; responses decreased with increasing target contrast when the mask was present. The normalization model provided a good fit to the contrast-response functions with and without the mask. In a control experiment, the target and mask presentations were temporally interleaved, and we found no shift in contrast gain, i.e., no evidence for suppression. We conclude that the normalization model can explain cross-orientation suppression in human visual cortex. The approach adopted here can be applied broadly to infer, simultaneously, the responses of several subpopulations of neurons in the human brain that span particular stimulus or feature spaces, and characterize their interactions. In addition, it allows us to investigate how stimuli are represented by the inferred activity of entire neural populations. PMID:21775720
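The normalization model invoked here has a standard divisive form: the mask contrast adds to the denominator, so a superimposed mask shifts the contrast gain of the target channel rather than simply scaling its response. A minimal sketch; the exponent, semisaturation constant, and contrast values are illustrative assumptions, not the paper's fitted parameters:

```python
def normalization_response(c_target, c_mask=0.0, n=2.0, sigma=0.1, rmax=1.0):
    """Divisive normalization: the channel tuned to the target responds
    R = rmax * c_t**n / (c_t**n + c_m**n + sigma**n); the mask contrast
    enters only the denominator, suppressing the response divisively."""
    return rmax * c_target**n / (c_target**n + c_mask**n + sigma**n)

contrasts = [0.05, 0.1, 0.2, 0.4, 0.8]
no_mask = [normalization_response(c) for c in contrasts]
with_mask = [normalization_response(c, c_mask=0.4) for c in contrasts]
# The mask lowers every response and shifts the contrast-response curve
# rightward (a contrast-gain change), while high target contrasts still
# approach rmax.
```

This is exactly the signature reported above: with the mask present, the target channel's contrast-response function shifts in gain rather than saturating at a lower ceiling.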
Activity in human visual and parietal cortex reveals object-based attention in working memory.
Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph
2015-02-25
Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and that attentional selection in WM thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.
DEVELOPMENT AND APPLICATIONS OF A STANDARD VISUAL INDEX
A standard visual index, appropriate for characterizing visibility through uniform hazes, is defined in terms of either of the traditional metrics: visual range or extinction coefficient. This index was designed to be linear with respect to perceived visual changes over its entire...
Human-scale interaction for virtual model displays: a clear case for real tools
NASA Astrophysics Data System (ADS)
Williams, George C.; McDowall, Ian E.; Bolas, Mark T.
1998-04-01
We describe a hand-held user interface for interacting with virtual environments displayed on a Virtual Model Display. The tool, constructed entirely of transparent materials, is see-through. We render a graphical counterpart of the tool on the display and map it one-to-one with the real tool. This feature, combined with a capability for touch-sensitive, discrete input, results in a useful spatial input device that is visually versatile. We discuss the tool's design and interaction techniques it supports. Briefly, we look at the human factors issues and engineering challenges presented by this tool and, in general, by the class of hand-held user interfaces that are see-through.
Stereoscopic visual fatigue assessment and modeling
NASA Astrophysics Data System (ADS)
Wang, Danli; Wang, Tingting; Gong, Yue
2014-03-01
Evaluation of stereoscopic visual fatigue is one of the focuses of user experience research. It is measured by either subjective or objective methods. Objective measures are preferred for their capability to quantify the degree of human visual fatigue without being affected by individual variation. However, little research has been conducted on the integration of objective indicators, or on the sensitivity of each objective indicator in reflecting subjective fatigue. This paper proposes a simple, effective method to evaluate visual fatigue more objectively. The stereoscopic viewing process is divided into a series of sessions, after each of which viewers rate their visual fatigue with subjective scores (SS) on a five-grade scale, followed by tests of the punctum maximum accommodation (PMA) and visual reaction time (VRT). Throughout the entire viewing process, their eye movements are recorded by an infrared camera. The pupil size (PS) and percentage of eyelid closure over the pupil over time (PERCLOS) are extracted from the algorithmically processed videos. Based on this method, an experiment with 14 subjects was conducted to assess visual fatigue induced by 3D images on a polarized 3D display. The experiment consisted of 10 sessions (5 min per session), each containing the same 75 images displayed in random order. The results show that PMA, VRT and PERCLOS are the most efficient indicators of subjective visual fatigue, and finally a predictive model is derived by stepwise multiple regression.
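Of the indicators above, PERCLOS has a conventional definition that is easy to make concrete: the proportion of time the eyelid covers at least about 80% of the pupil. The paper's exact video-processing algorithm is not given, so this is a minimal sketch over hypothetical per-frame closure estimates:

```python
def perclos(closure_fractions, threshold=0.8):
    """PERCLOS: the proportion of video frames in which eyelid closure over
    the pupil meets or exceeds the threshold (conventionally 80%)."""
    closed = sum(1 for f in closure_fractions if f >= threshold)
    return closed / len(closure_fractions)

# Hypothetical per-frame eyelid-closure estimates from the eye video:
frames = [0.1, 0.2, 0.9, 0.85, 0.3, 0.95, 0.2, 0.1, 0.82, 0.4]
score = perclos(frames)  # 4 of 10 frames at or above 0.8
```

In a study like the one described, this score would be computed per session and entered, alongside PMA and VRT, as a candidate predictor in the stepwise regression against the subjective fatigue ratings.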
Kraft, Andrew W.; Mitra, Anish; Bauer, Adam Q.; Raichle, Marcus E.; Culver, Joseph P.; Lee, Jin-Moo
2017-01-01
Decades of work in experimental animals has established the importance of visual experience during critical periods for the development of normal sensory-evoked responses in the visual cortex. However, much less is known concerning the impact of early visual experience on the systems-level organization of spontaneous activity. Human resting-state fMRI has revealed that infraslow fluctuations in spontaneous activity are organized into stereotyped spatiotemporal patterns across the entire brain. Furthermore, the organization of spontaneous infraslow activity (ISA) is plastic in that it can be modulated by learning and experience, suggesting heightened sensitivity to change during critical periods. Here we used wide-field optical intrinsic signal imaging in mice to examine whole-cortex spontaneous ISA patterns. Using monocular or binocular visual deprivation, we examined the effects of critical period visual experience on the development of ISA correlation and latency patterns within and across cortical resting-state networks. Visual modification with monocular lid suturing reduced correlation between left and right cortices (homotopic correlation) within the visual network, but had little effect on internetwork correlation. In contrast, visual deprivation with binocular lid suturing resulted in increased visual homotopic correlation and increased anti-correlation between the visual network and several extravisual networks, suggesting cross-modal plasticity. These network-level changes were markedly attenuated in mice with genetic deletion of Arc, a gene known to be critical for activity-dependent synaptic plasticity. Taken together, our results suggest that critical period visual experience induces global changes in spontaneous ISA relationships, both within the visual network and across networks, through an Arc-dependent mechanism. PMID:29087327
Comparing capacity coefficient and dual task assessment of visual multitasking workload
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaha, Leslie M.
Capacity coefficient analysis offers a theoretically grounded alternative to subjective measures and dual-task assessment of cognitive workload. Workload capacity, or workload efficiency, is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system in a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual-task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-Attribute Task Battery.
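The integrated hazard functions described above can be sketched directly. This is an illustrative implementation (not the paper's code) of the standard OR capacity coefficient, C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -log(1 - F(t)) is estimated from the empirical response-time distribution; the response times below are invented:

```python
# Illustrative sketch of the OR capacity coefficient; all response times are
# invented toy data, not the Multi-Attribute Task Battery sample data.
import math

def integrated_hazard(rts, t):
    """Estimate H(t) = -log(1 - F(t)) from a sample of response times."""
    f = sum(1 for rt in rts if rt <= t) / len(rts)
    f = min(f, 1 - 1e-9)           # keep the log finite at the sample maximum
    return -math.log(1 - f)

def capacity_or(rt_dual, rt_a, rt_b, t):
    """C(t) > 1: super capacity; C(t) = 1: unlimited; C(t) < 1: limited."""
    denom = integrated_hazard(rt_a, t) + integrated_hazard(rt_b, t)
    return integrated_hazard(rt_dual, t) / denom if denom > 0 else float("nan")

# Toy response times (ms): single-task A, single-task B, and the dual task.
rt_a    = [420, 450, 480, 510, 540, 570, 600]
rt_b    = [430, 460, 490, 520, 550, 580, 610]
rt_dual = [400, 420, 440, 460, 480, 500, 520]

print(round(capacity_or(rt_dual, rt_a, rt_b, 500), 2))
```

In practice C(t) is evaluated across the whole range of observed response times, giving the functional workload assessment the abstract describes rather than a single number.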
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache-oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
Coarse-Scale Biases for Spirals and Orientation in Human Visual Cortex
Heeger, David J.
2013-01-01
Multivariate decoding analyses are widely applied to functional magnetic resonance imaging (fMRI) data, but there is controversy over their interpretation. Orientation decoding in primary visual cortex (V1) reflects coarse-scale biases, including an over-representation of radial orientations. But fMRI responses to clockwise and counter-clockwise spirals can also be decoded. Because these stimuli are matched for radial orientation, while differing in local orientation, it has been argued that fine-scale columnar selectivity for orientation contributes to orientation decoding. We measured fMRI responses in human V1 to both oriented gratings and spirals. Responses to oriented gratings exhibited a complex topography, including a radial bias that was most pronounced in the peripheral representation, and a near-vertical bias that was most pronounced near the foveal representation. Responses to clockwise and counter-clockwise spirals also exhibited coarse-scale organization, at the scale of entire visual quadrants. The preference of each voxel for clockwise or counter-clockwise spirals was predicted from the preferences of that voxel for orientation and spatial position (i.e., within the retinotopic map). Our results demonstrate a bias for local stimulus orientation that has a coarse spatial scale, is robust across stimulus classes (spirals and gratings), and suffices to explain decoding from fMRI responses in V1. PMID:24336733
Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.
2011-01-01
Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in an automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We present these advances as a key step toward realizing Top-Down MS on a proteomic scale. PMID:20848673
When May a Child Who Is Visually Impaired Recognize a Face?
ERIC Educational Resources Information Center
Markham, R.; Wyver, S.
1996-01-01
The ability of 16 school-age children with visual impairments and their sighted peers to recognize faces was compared. Although no intergroup differences were found in ability to identify entire faces, the visually impaired children were at a disadvantage when part of the face, especially the eyes, was not visible. Degree of visual acuity also…
ENGINES: exploring single nucleotide variation in entire human genomes.
Amigo, Jorge; Salas, Antonio; Phillips, Christopher
2011-04-19
Next generation ultra-sequencing technologies are starting to produce extensive quantities of data from entire human genome or exome sequences, and therefore new software is needed to present and analyse this vast amount of information. The 1000 Genomes project has recently released raw data for 629 complete genomes representing several human populations through their Phase I interim analysis and, although there are certain public tools available that allow exploration of these genomes, to date there is no tool that permits comprehensive population analysis of the variation catalogued by such data. We have developed a genetic variant site explorer able to retrieve data for Single Nucleotide Variants (SNVs), population by population, from entire genomes without compromising future scalability and agility. ENGINES (ENtire Genome INterface for Exploring SNVs) uses data from the 1000 Genomes Phase I to demonstrate its capacity to handle large amounts of genetic variation (>7.3 billion genotypes and 28 million SNVs), as well as deriving summary statistics of interest for medical and population genetics applications. The whole dataset is pre-processed and summarized into a data mart accessible through a web interface. The query system allows the combination and comparison of each available population sample, while searching by rs-number list, chromosome region, or genes of interest. Frequency and FST filters are available to further refine queries, while results can be visually compared with other large-scale Single Nucleotide Polymorphism (SNP) repositories such as HapMap or Perlegen. ENGINES is capable of accessing large-scale variation data repositories in a fast and comprehensive manner. It allows quick browsing of whole genome variation, while providing statistical information for each variant site such as allele frequency, heterozygosity or FST values for genetic differentiation.
Access to the data mart generating scripts and to the web interface is granted from http://spsmart.cesga.es/engines.php. © 2011 Amigo et al; licensee BioMed Central Ltd.
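The per-site statistics ENGINES reports (allele frequency, heterozygosity, FST) follow standard population-genetics formulas. A minimal sketch, assuming Wright's FST = (HT - HS) / HT computed from invented per-population allele frequencies at one biallelic SNV:

```python
# Hypothetical sketch of per-site statistics; the frequencies and sample
# sizes below are invented, not 1000 Genomes data.

def heterozygosity(p):
    """Expected heterozygosity for a biallelic site with allele frequency p."""
    return 2 * p * (1 - p)

def fst(pop_freqs, pop_sizes):
    """Wright's FST = (HT - HS) / HT, weighting subpopulations by sample size."""
    total = sum(pop_sizes)
    # Mean allele frequency over the pooled sample.
    p_bar = sum(p * n for p, n in zip(pop_freqs, pop_sizes)) / total
    h_t = heterozygosity(p_bar)
    h_s = sum(heterozygosity(p) * n for p, n in zip(pop_freqs, pop_sizes)) / total
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

# Toy alternate-allele frequencies for three population samples.
freqs = [0.10, 0.45, 0.80]
sizes = [120, 100, 90]
print(round(fst(freqs, sizes), 3))
```

Identical frequencies across populations give FST = 0; strongly diverged frequencies, as in the toy data, push it toward 1.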
Changes in brain morphology in albinism reflect reduced visual acuity.
Bridge, Holly; von dem Hagen, Elisabeth A H; Davies, George; Chambers, Claire; Gouws, Andre; Hoffmann, Michael; Morland, Antony B
2014-07-01
Albinism, in humans and many animal species, has a major impact on the visual system, leading to reduced acuity, lack of binocular function and nystagmus. In addition to the lack of a foveal pit, there is a disruption to the routing of the nerve fibers crossing at the optic chiasm, resulting in excessive crossing of fibers to the contralateral hemisphere. However, very little is known about the effect of this misrouting on the structure of the post-chiasmatic visual pathway, and the occipital lobes in particular. Whole-brain analyses of cortical thickness in a large cohort of subjects with albinism showed an increase in cortical thickness, relative to control subjects, particularly in posterior V1, corresponding to the foveal representation. Furthermore, mean cortical thickness across the entire V1 was significantly greater in these subjects compared to controls and negatively correlated with visual acuity in albinism. Additionally, the group with albinism showed decreased gyrification in the left ventral occipital lobe. While the increase in cortical thickness in V1, also found in congenitally blind subjects, has been interpreted to reflect a lack of pruning, the decreased gyrification in the ventral extrastriate cortex may reflect the reduced input to the foveal regions of the ventral visual stream. Copyright © 2012 Elsevier Ltd. All rights reserved.
Social vision: sustained perceptual enhancement of affective facial cues in social anxiety
McTeague, Lisa M.; Shumen, Joshua R.; Wieser, Matthias J.; Lang, Peter J.; Keil, Andreas
2010-01-01
Heightened perception of facial cues is at the core of many theories of social behavior and its disorders. In the present study, we continuously measured electrocortical dynamics in human visual cortex, as evoked by happy, neutral, fearful, and angry faces. Thirty-seven participants endorsing high versus low generalized social anxiety (upper and lower tertiles of 2,104 screened undergraduates) viewed naturalistic faces flickering at 17.5 Hz to evoke steady-state visual evoked potentials (ssVEPs), recorded from 129 scalp electrodes. Electrophysiological data were evaluated in the time-frequency domain after linear source space projection using the minimum norm method. Source estimation indicated an early visual cortical origin of the face-evoked ssVEP, which showed sustained amplitude enhancement for emotional expressions specifically in individuals with pervasive social anxiety. Participants in the low symptom group showed no such sensitivity, and a correlational analysis across the entire sample revealed a strong relationship between self-reported interpersonal anxiety/avoidance and enhanced visual cortical response amplitude for emotional, versus neutral expressions. This pattern was maintained across the 3500 ms viewing epoch, suggesting that temporally sustained, heightened perceptual bias towards affective facial cues is associated with generalized social anxiety. PMID:20832490
Haltere mechanosensory influence on tethered flight behavior in Drosophila.
Mureli, Shwetha; Fox, Jessica L
2015-08-01
In flies, mechanosensory information from modified hindwings known as halteres is combined with visual information for wing-steering behavior. Haltere input is necessary for free flight, making it difficult to study the effects of haltere ablation under natural flight conditions. We thus used tethered Drosophila melanogaster flies to examine the relationship between halteres and the visual system, using wide-field motion or moving figures as visual stimuli. Haltere input was altered by surgically decreasing its mass, or by removing it entirely. Haltere removal does not affect the flies' ability to flap or steer their wings, but it does increase the temporal frequency at which they modify their wingbeat amplitude. Reducing the haltere mass decreases the optomotor reflex response to wide-field motion, and removing the haltere entirely does not further decrease the response. Decreasing the mass does not attenuate the response to figure motion, but removing the entire haltere does attenuate the response. When flies are allowed to control a visual stimulus in closed-loop conditions, haltereless flies fixate figures with the same acuity as intact flies, but cannot stabilize a wide-field stimulus as accurately as intact flies can. These manipulations suggest that the haltere mass is influential in wide-field stabilization, but less so in figure tracking. In both figure and wide-field experiments, we observe responses to visual motion with and without halteres, indicating that during tethered flight, intact halteres are not strictly necessary for visually guided wing-steering responses. However, the haltere feedback loop may operate in a context-dependent way to modulate responses to visual motion. © 2015. Published by The Company of Biologists Ltd.
Visual Training and Reading Performance.
ERIC Educational Resources Information Center
ANAPOLLE, LOUIS
Visual training is defined as the field of ocular reeducation and rehabilitation of the various visual skills that are of paramount importance to school achievement, automobile driving, outdoor sports activities, and occupational pursuits. A history of orthoptics, the suggested name for the entire field of ocular reeducation, is given. Reading as…
First human-caused extinction of a cetacean species?
Turvey, Samuel T; Pitman, Robert L; Taylor, Barbara L; Barlow, Jay; Akamatsu, Tomonari; Barrett, Leigh A; Zhao, Xiujiang; Reeves, Randall R; Stewart, Brent S; Wang, Kexiong; Wei, Zhuo; Zhang, Xianfeng; Pusser, L T; Richlen, Michael; Brandon, John R; Wang, Ding
2007-10-22
The Yangtze River dolphin or baiji (Lipotes vexillifer), an obligate freshwater odontocete known only from the middle-lower Yangtze River system and neighbouring Qiantang River in eastern China, has long been recognized as one of the world's rarest and most threatened mammal species. The status of the baiji has not been investigated since the late 1990s, when the surviving population was estimated to be as low as 13 individuals. An intensive six-week multi-vessel visual and acoustic survey carried out in November-December 2006, covering the entire historical range of the baiji in the main Yangtze channel, failed to find any evidence that the species survives. We are forced to conclude that the baiji is now likely to be extinct, probably due to unsustainable by-catch in local fisheries. This represents the first global extinction of a large vertebrate for over 50 years, only the fourth disappearance of an entire mammal family since AD 1500, and the first cetacean species to be driven to extinction by human activity. Immediate and extreme measures may be necessary to prevent the extinction of other endangered cetaceans, including the sympatric Yangtze finless porpoise (Neophocaena phocaenoides asiaeorientalis).
Herculano-Houzel, Suzana; Watson, Charles; Paxinos, George
2013-01-01
How are neurons distributed along the cortical surface and across functional areas? Here we use the isotropic fractionator (Herculano-Houzel and Lent, 2005) to analyze the distribution of neurons across the entire isocortex of the mouse, divided into 18 functional areas defined anatomically. We find that the number of neurons underneath a surface area (the N/A ratio) varies 4.5-fold across functional areas and neuronal density varies 3.2-fold. The face area of S1 contains the most neurons, followed by motor cortex and the primary visual cortex. Remarkably, while the distribution of neurons across functional areas does not accompany the distribution of surface area, it mirrors closely the distribution of cortical volumes—with the exception of the visual areas, which hold more neurons than expected for their volume. Across the non-visual cortex, the volume of individual functional areas is a shared linear function of their number of neurons, while in the visual areas, neuronal densities are much higher than in all other areas. In contrast, the 18 functional areas cluster into three different zones according to the relationship between the N/A ratio and cortical thickness and neuronal density: these three clusters can be called visual, sensory, and, possibly, associative. These findings are remarkably similar to those in the human cerebral cortex (Ribeiro et al., 2013) and suggest that, like the human cerebral cortex, the mouse cerebral cortex comprises two zones that differ in how neurons form the cortical volume, and three zones that differ in how neurons are distributed underneath the cortical surface, possibly in relation to local differences in connectivity through the white matter. Our results suggest that beyond the developmental divide into visual and non-visual cortex, functional areas initially share a common distribution of neurons along the parenchyma that become delimited into functional areas according to the pattern of connectivity established later. 
PMID:24155697
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
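The core assumption, that readers combine a noisy percept with word frequency via Bayes' rule (P(word | input) ∝ P(input | word) P(word)), can be sketched in a few lines. The lexicon, frequency priors, and letter-confusion likelihood below are invented for illustration and are not Norris's implementation:

```python
# Minimal Bayesian word-recognition sketch; lexicon, frequencies and the
# p_match noise parameter are all hypothetical.

# Toy lexicon with (unnormalized) frequency-based priors.
lexicon = {"cat": 60, "car": 90, "cap": 30, "can": 120}

def likelihood(percept, word, p_match=0.8):
    """P(percept | word): each letter matches with prob p_match, else noise."""
    p = 1.0
    for a, b in zip(percept, word):
        p *= p_match if a == b else (1 - p_match) / 25  # 25 wrong letters
    return p

def posterior(percept):
    """P(word | percept) over the lexicon, via Bayes' rule."""
    total_freq = sum(lexicon.values())
    scores = {w: likelihood(percept, w) * f / total_freq
              for w, f in lexicon.items()}
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

post = posterior("cas")            # ambiguous final letter
best = max(post, key=post.get)
print(best, round(post[best], 3))
```

With the final letter equally unlike every candidate, the likelihoods tie and the frequency prior decides: the most frequent word wins, mirroring the frequency effects the model is designed to explain.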
Food's visually perceived fat content affects discrimination speed in an orthogonal spatial task.
Harrar, Vanessa; Toepel, Ulrike; Murray, Micah M; Spence, Charles
2011-10-01
Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.
The Role of Visuals in Verbal Learning--Studies in Televised Instruction, Report 3, Summary Report.
ERIC Educational Resources Information Center
GROPPER, GEORGE L.
The integration of words and pictures in the two studies reported in this volume was accomplished unconventionally. In one study, an entire topic, Archimedes' law, was covered in a self-contained, entirely pictorial lesson and also in a self-contained, entirely verbal lesson. Students acquired all the concepts and principles making up Archimedes'…
[3D visualization and analysis of vocal fold dynamics].
Bohr, C; Döllinger, M; Kniesburges, S; Traxdorf, M
2016-04-01
Visual investigation methods of the larynx mainly allow for the two-dimensional presentation of the three-dimensional structures of the vocal fold dynamics. The vertical component of the vocal fold dynamics is often neglected, yielding a loss of information. The latest studies show that the vertical dynamic components are in the range of the medio-lateral dynamics and play a significant role within the phonation process. This work presents a method for future 3D reconstruction and visualization of endoscopically recorded vocal fold dynamics. The setup contains a high-speed camera (HSC) and a laser projection system (LPS). The LPS projects a regular grid on the vocal fold surfaces and, in combination with the HSC, allows a three-dimensional reconstruction of the vocal fold surface. Hence, quantitative information on displacements and velocities can be provided. The applicability of the method is presented for one ex-vivo human larynx, one ex-vivo porcine larynx and one synthetic silicone larynx. The setup introduced allows the reconstruction of the entire visible vocal fold surfaces for each oscillation status. This enables a detailed analysis of the three-dimensional dynamics (i.e. displacements, velocities, accelerations) of the vocal folds. The next goal is the miniaturization of the LPS to allow clinical in-vivo analysis in humans. We anticipate new insights into dependencies between 3D dynamic behavior and the quality of the acoustic outcome for healthy and disordered phonation.
Hierarchical imaging of the human knee
NASA Astrophysics Data System (ADS)
Schulz, Georg; Götz, Christian; Deyhle, Hans; Müller-Gerbl, Magdalena; Zanette, Irene; Zdora, Marie-Christine; Khimchenko, Anna; Thalmann, Peter; Rack, Alexander; Müller, Bert
2016-10-01
Among the clinically relevant imaging techniques, computed tomography (CT) reaches the best spatial resolution. Sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based μCT has become the gold standard. The aim of the present study is the hierarchical investigation of a human knee post mortem using hard X-ray μCT. After the visualization of the entire knee using a clinical CT with a spatial resolution in the sub-millimeter range, a hierarchical imaging study was performed using a laboratory μCT system nanotom m. Due to the size of the whole knee, the pixel length could not be reduced below 65 μm. These first two data sets were directly compared after a rigid registration using a cross-correlation algorithm. The μCT data set allowed an investigation of the trabecular structures of the bones. A further reduction of the pixel length down to 25 μm was achieved by removing the skin and soft tissues and measuring the tibia and the femur separately. True micrometer resolution was achieved after extracting cylinders several millimeters in diameter from the two bones. The high-resolution scans revealed the mineralized cartilage zone, including the tide mark line, as well as individual calcified chondrocytes. The visualization of soft tissues, including cartilage, was achieved by X-ray grating interferometry (XGI) at ESRF and Diamond Light Source. Whereas the high-energy measurements at ESRF allowed the simultaneous visualization of soft and hard tissues, the low-energy results from Diamond Light Source made individual chondrocytes within the cartilage visible.
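The rigid registration by cross-correlation mentioned above reduces, in its simplest form, to finding the displacement that maximizes the correlation between two intensity profiles. A one-dimensional sketch (the profiles are invented toy data; the actual study registered 3D volumes):

```python
# Illustrative 1-D analogue of cross-correlation registration; not the
# authors' pipeline, and the profiles are invented.

def cross_correlate(a, b, max_shift):
    """Return the shift s that maximizes the mean product of a[i] and b[i+s]."""
    best_shift, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(a[i], b[i + s]) for i in range(len(a))
                 if 0 <= i + s < len(b)]
        score = sum(x * y for x, y in pairs) / len(pairs)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# A toy "clinical CT" intensity profile and the same profile delayed by
# 3 samples, standing in for the coarse and fine volumes being registered.
profile = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
shifted = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1, 0, 0]
print(cross_correlate(profile, shifted, max_shift=5))
```

In 3D the same idea is applied over three translation components (plus rotations for a fully rigid transform), typically with normalized correlation computed efficiently via the FFT.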
ERIC Educational Resources Information Center
Howley, Sarah A.; Prasad, Sarah E.; Pender, Niall P.; Murphy, Kieran C.
2012-01-01
22q11.2 Deletion Syndrome (22q11DS) is a common microdeletion disorder associated with mild to moderate intellectual disability and specific neurocognitive deficits, particularly in visual-motor and attentional abilities. Currently there is evidence that the visual-motor profile of 22q11DS is not entirely mediated by intellectual disability and…
Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi
2013-01-01
Lateralization is mostly analyzed for single traits, but seldom for two or more traits while performing a given task (e.g. object manipulation). We examined lateralization in eye use and in body motion that co-occur during avoidance behaviour of the common chameleon, Chamaeleo chameleon. A chameleon facing a moving threat smoothly repositions its body on the side of its perch distal to the threat, to minimize its visual exposure. We previously demonstrated that during the response (i) eye use and body motion were, each, lateralized at the tested group level (N = 26), (ii) in body motion, we observed two similar-sized sub-groups, one exhibiting a greater reduction in body exposure to threat approaching from the left and one – to threat approaching from the right (left- and right-biased subgroups), (iii) the left-biased sub-group exhibited weak lateralization of body exposure under binocular threat viewing and none under monocular viewing while the right-biased sub-group exhibited strong lateralization under both monocular and binocular threat viewing. In avoidance, how is eye use related to body motion at the entire group and at the sub-group levels? We demonstrate that (i) in the left-biased sub-group, eye use is not lateralized, (ii) in the right-biased sub-group, eye use is lateralized under binocular, but not monocular viewing of the threat, (iii) the dominance of the right-biased sub-group determines the lateralization of the entire group tested. We conclude that in chameleons, patterns of lateralization of visual function and body motion are inter-related at a subtle level. Presently, the patterns cannot be compared with humans' or related to the unique visual system of chameleons, with highly independent eye movements, complete optic nerve decussation and relatively few inter-hemispheric commissures. We present a model to explain the possible inter-hemispheric differences in dominance in chameleons' visual control of body motion during avoidance. PMID:23967099
Lustig, Avichai; Ketter-Katz, Hadas; Katzir, Gadi
2013-01-01
Motor effects from visually induced disorientation in man.
DOT National Transportation Integrated Search
1969-11-01
The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane with respect to the ground.
Boström, Jan; Elger, Christian E.; Mormann, Florian
2016-01-01
Recording extracellularly from neurons in the brains of animals in vivo is among the most established experimental techniques in neuroscience, and has recently become feasible in humans. Many interesting scientific questions can be addressed only when extracellular recordings last several hours, and when individual neurons are tracked throughout the entire recording. Such questions concern, for example, neuronal mechanisms of learning and memory consolidation, and the generation of epileptic seizures. Several difficulties have so far limited the use of extracellular multi-hour recordings in neuroscience: Datasets become huge, and data are necessarily noisy in clinical recording environments. No methods for spike sorting of such recordings have been available. Spike sorting refers to the process of identifying the contributions of several neurons to the signal recorded by one electrode. To overcome these difficulties, we developed Combinato: a complete data-analysis framework for spike sorting in noisy recordings lasting twelve hours or more. Our framework includes software for artifact rejection, automatic spike sorting, manual optimization, and efficient visualization of results. Our completely automatic framework excels at two tasks: It outperforms existing methods when tested on simulated and real data, and it enables researchers to analyze multi-hour recordings. We evaluated our methods on both short and multi-hour simulated datasets. To evaluate the performance of our methods in an actual neuroscientific experiment, we used data from neurosurgical patients, recorded in order to identify visually responsive neurons in the medial temporal lobe. These neurons responded to the semantic content, rather than to visual features, of a given stimulus. To test our methods with multi-hour recordings, we made use of neurons in the human medial temporal lobe that respond selectively to the same stimulus in the evening and next morning. PMID:27930664
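The detection-plus-clustering core of a spike-sorting pipeline can be sketched in a few lines of numpy. The MAD-based threshold and the toy k-means step below are generic stand-ins for illustration, not Combinato's actual algorithms.

```python
import numpy as np

def detect_spikes(signal, thresh_mult=4.0, window=32):
    """Threshold-based spike detection: flag upward crossings of a
    robust noise estimate (median absolute deviation) and cut a
    short waveform window starting at each crossing."""
    noise = np.median(np.abs(signal)) / 0.6745   # MAD-based sigma estimate
    thresh = thresh_mult * noise
    crossings = np.flatnonzero((signal[1:] > thresh) & (signal[:-1] <= thresh))
    waveforms = np.array([signal[c:c + window] for c in crossings
                          if c + window <= len(signal)])
    return waveforms, crossings

def kmeans_sort(waveforms, k=2, iters=50, seed=0):
    """Toy k-means clustering of spike waveforms (a stand-in for a
    real sorting algorithm; assumes clusters never become empty)."""
    rng = np.random.default_rng(seed)
    centers = waveforms[rng.choice(len(waveforms), k, replace=False)]
    for _ in range(iters):
        d = ((waveforms[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        centers = np.array([waveforms[labels == j].mean(0) for j in range(k)])
    return labels
```

In practice the clustering stage would operate on extracted features (e.g., wavelet coefficients) rather than raw waveforms, and would need to handle drifting units across hours.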
Dynamic elasticity measurement for prosthetic socket design.
Kim, Yujin; Kim, Junghoon; Son, Hyeryon; Choi, Youngjin
2017-07-01
The paper proposes a novel apparatus to measure the dynamic elasticity of the human limb in order to aid the design and fabrication of personalized prosthetic sockets. To measure the dynamic elasticity, a desired force, generated as an exponential chirp signal whose frequency increases over time while its amplitude is held constant, is applied to the limb; the resulting skin deformation is recorded to obtain the frequency response of the limb's elasticity. The device is referred to as a Dynamic Elasticity Measurement Apparatus (DEMA). It has three core components: a linear motor to provide the desired force, a load cell to implement force feedback control, and a potentiometer to record the skin deformation. After measuring force and deformation and calculating the dynamic elasticity of the limb, the result is visualized as a 3D color-map model of the limb so that the entire dynamic elasticity can be seen at a glance across locations and frequencies. For the visualization, the dynamic elasticities measured at specific locations and frequencies are embedded, via the color map, into a 3D limb model acquired with a 3D scanner. To demonstrate effectiveness, the visualized dynamic elasticities are presented as the outcome of the proposed system, although we have not yet had the opportunity to apply the system to amputees. Ultimately, the proposed system is expected to be useful for designing and fabricating personalized prosthetic sockets that relieve the pain caused by wearing conventional sockets.
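The constant-amplitude exponential chirp described above can be generated directly; the sweep range, duration, and sampling rate below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def exponential_chirp(f0, f1, duration, fs, amplitude=1.0):
    """Constant-amplitude exponential chirp sweeping from f0 to f1 Hz
    over `duration` seconds, sampled at `fs` Hz. Instantaneous
    frequency is f0 * k**t with k = (f1/f0)**(1/duration)."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 / f0) ** (1.0 / duration)                 # exponential rate
    phase = 2 * np.pi * f0 * (k ** t - 1) / np.log(k)  # integral of f(t)
    return amplitude * np.sin(phase)

# Illustrative parameters: 0.5 Hz to 20 Hz over 10 s at 1 kHz sampling.
force = exponential_chirp(0.5, 20.0, 10.0, 1000)
```

The amplitude stays fixed while the oscillation rate grows, so each frequency band receives the same drive level, which is what makes the recorded deformation interpretable as a frequency response.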
Gage, Julia C; Rodriguez, Ana Cecilia; Schiffman, Mark; Adadevoh, Sydney; Larraondo, Manuel J Alvarez; Chumworathayi, Bandit; Lejarza, Sandra Vargas; Araya, Luis Villegas; Garcia, Francisco; Budihas, Scott R; Long, Rodney; Katki, Hormuzd A; Herrero, Rolando; Burk, Robert D; Jeronimo, Jose
2009-05-01
To estimate the efficacy of a visual triage of human papillomavirus (HPV)-positive women to either immediate cryotherapy or referral if not treatable (e.g., invasive cancer, large precancers). We evaluated visual triage among HPV-positive women aged 25 to 55 years from the 10,000-woman Guanacaste Cohort Study (n = 552). Twelve Peruvian midwives and 5 international gynecologists assessed treatability by cryotherapy using digitized high-resolution cervical images taken at enrollment. The reference standard of treatability was determined by 2 lead gynecologists from the entire 7-year follow-up of the women. Women diagnosed with histologic cervical intraepithelial neoplasia grade 2 or worse or 5-year persistence of carcinogenic HPV infection were defined as needing treatment. Midwives and gynecologists judged 30.8% and 41.2% of women not treatable by cryotherapy, respectively (P < 0.01). Among 149 women needing treatment, midwives and gynecologists correctly identified 57.5% and 63.8% (P = 0.07 for difference) of 71 women judged not treatable by the lead gynecologists and 77.6% and 59.7% (P < 0.01 for difference) of 78 women judged treatable by cryotherapy. The proportion of women judged not treatable by a reviewer varied widely and ranged from 18.6% to 61.1%. Interrater agreement was poor with mean pairwise overall agreement of 71.4% and 66.3% and kappas of 0.33 and 0.30 for midwives and gynecologists, respectively. In future "screen-and-treat" cervical cancer prevention programs using HPV testing and cryotherapy, practitioners will visually triage HPV-positive women. The suboptimal performance of visual triage suggests that screen-and-treat programs using cryotherapy might be insufficient for treating precancerous lesions. Improved, low-technology triage methods and/or improved safe and low-technology treatment options are needed.
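The agreement statistics reported above, pairwise percent agreement and kappa, can be computed for a pair of raters as follows; the rating vectors in the example are hypothetical.

```python
import numpy as np

def percent_agreement(a, b):
    """Fraction of cases on which two raters give the same judgment."""
    a, b = np.asarray(a), np.asarray(b)
    return np.mean(a == b)

def cohens_kappa(a, b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where
    p_e is the chance agreement implied by each rater's marginal
    rates of using each category."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)
    cats = np.union1d(a, b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in cats)
    return (p_o - p_e) / (1 - p_e)
```

Averaging these over all rater pairs gives the "mean pairwise" figures quoted in the abstract; kappa corrects raw agreement for what two raters would agree on by chance alone.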
Structural texture similarity metrics for image analysis and retrieval.
Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L
2013-07-01
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
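The flavor of a statistics-based similarity metric can be sketched as below: simple image gradients stand in for the steerable filter subbands, and the per-statistic comparison term borrows the SSIM form. This illustrates the idea of comparing statistics rather than pixels; it is not the authors' metric.

```python
import numpy as np

def subband_stats(img):
    """Means and variances of simple oriented-filter responses.
    Horizontal/vertical finite differences stand in for a steerable
    filter decomposition."""
    gx = np.diff(img.astype(float), axis=1)
    gy = np.diff(img.astype(float), axis=0)
    return np.array([gx.mean(), gx.var(), gy.mean(), gy.var(),
                     img.mean(), img.var()])

def texture_similarity(img_a, img_b, eps=1e-8):
    """SSIM-like per-statistic comparison: each term equals 1 when
    the statistics match and falls toward 0 as they diverge. Two
    textures with identical statistics score 1 even if their pixels
    differ point by point."""
    sa, sb = subband_stats(img_a), subband_stats(img_b)
    terms = (2 * sa * sb + eps) / (sa ** 2 + sb ** 2 + eps)
    return terms.mean()
```

Because only statistics are compared, two independent draws of the same texture process score near 1, which is exactly the tolerance to point-by-point deviation the abstract describes.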
Circadian rhythms in healthy aging: effects downstream from the pacemaker
NASA Technical Reports Server (NTRS)
Monk, T. H.; Kupfer, D. J.
2000-01-01
Using both previously published findings and entirely new data, we present evidence in support of the argument that the circadian dysfunction of advancing age in the healthy human is primarily one of failing to transduce the circadian signal from the circadian timing system (CTS) to rhythms "downstream" from the pacemaker rather than one of failing to generate the circadian signal itself. Two downstream rhythms are considered: subjective alertness and objective performance. For subjective alertness, we show that in both normal nychthemeral (24 h routine, sleeping at night) and unmasking (36 h of constant wakeful bed rest) conditions, advancing age, especially in men, leads to flattening of subjective alertness rhythms, even when circadian temperature rhythms are relatively robust. For objective performance, an unmasking experiment involving manual dexterity, visual search, and visual vigilance tasks was used to demonstrate that the relationship between temperature and performance is strong in the young, but not in older subjects (and especially not in older men).
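Flattening of a downstream rhythm is commonly quantified as a reduction in fitted circadian amplitude. A standard cosinor fit (a conventional method, not necessarily the authors' analysis) estimates mesor and amplitude by linear least squares:

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Cosinor regression: fit y ~ mesor + A*cos(w t) + B*sin(w t)
    with w = 2*pi/period by linear least squares. The circadian
    amplitude is sqrt(A**2 + B**2); a flattened rhythm shows a
    smaller amplitude for the same mesor."""
    w = 2 * np.pi * np.asarray(t_hours, float) / period
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    mesor, A, B = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)[0]
    return mesor, np.hypot(A, B)
```

Comparing fitted amplitudes of alertness versus temperature rhythms, as in the study, separates a weakened pacemaker signal from a failure to transduce it downstream.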
Pulse-encoded ultrasound imaging of the vitreous with an annular array.
Silverman, Ronald H; Ketterling, Jeffrey A; Mamou, Jonathan; Lloyd, Harriet O; Filoux, Erwan; Coleman, D Jackson
2012-01-01
The vitreous body is nearly transparent both optically and ultrasonically. Conventional 10- to 12-MHz diagnostic ultrasound can detect vitreous inhomogeneities at high gain settings, but has limited resolution and sensitivity, especially outside the fixed focal zone near the retina. To improve visualization of faint intravitreal fluid/gel interfaces, the authors fabricated a spherically curved 20-MHz five-element annular array ultrasound transducer, implemented a synthetic-focusing algorithm to extend the depth-of-field, and used a pulse-encoding strategy to increase sensitivity. The authors evaluated a human subject with a recent posterior vitreous detachment and compared the annular array with conventional 10-MHz ultrasound and spectral-domain optical coherence tomography. With synthetic focusing and chirp pulse-encoding, the array allowed visualization of the formed and fluid components of the vitreous with improved sensitivity and resolution compared with the conventional B-scan. Although optical coherence tomography allowed assessment of the posterior vitreoretinal interface, the ultrasound array allowed evaluation of the entire vitreous body. Copyright 2012, SLACK Incorporated.
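The sensitivity gain from chirp pulse-encoding comes from matched-filter compression: a long coded pulse is correlated with the received echo, concentrating its energy into a sharp peak at the reflector's delay. A toy numpy sketch with illustrative parameters (not the authors' 20-MHz annular-array system):

```python
import numpy as np

# Illustrative parameters: 100 MHz sampling, a 4 microsecond linear
# chirp sweeping 15-25 MHz around a 20 MHz center frequency.
fs = 100e6
n = 400                                    # 4 us of transmit samples
t = np.arange(n) / fs
f0, f1 = 15e6, 25e6
rate = (f1 - f0) / (2 * t[-1])             # half the linear sweep rate
chirp = np.sin(2 * np.pi * (f0 * t + rate * t ** 2))

# Received echo: the transmit chirp returned by a reflector at sample 600.
delay = 600
echo = np.zeros(2048)
echo[delay:delay + n] = chirp

# Matched filter: cross-correlate the echo with the known chirp. The
# long, low-amplitude code compresses into a sharp peak at the delay.
compressed = np.correlate(echo, chirp, mode="full")
peak = int(compressed.argmax()) - (n - 1)  # recovered delay in samples
```

The transmit pulse can be long (high total energy, good sensitivity to faint interfaces) while the compressed peak stays short (good axial resolution), which is the trade-off the pulse-encoding strategy exploits.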
Germier, Thomas; Audibert, Sylvain; Kocanova, Silvia; Lane, David; Bystricky, Kerstin
2018-06-01
Spatio-temporal organization of the cell nucleus adapts to and regulates genomic processes. Microscopy approaches that enable direct monitoring of specific chromatin sites in single cells and in real time are needed to better understand the dynamics involved. In this chapter, we describe the principle and development of ANCHOR, a novel tool for DNA labelling in eukaryotic cells. Protocols for use of ANCHOR to visualize a single genomic locus in eukaryotic cells are presented. We describe an approach for live cell imaging of a DNA locus during the entire cell cycle in human breast cancer cells. Copyright © 2018 Elsevier Inc. All rights reserved.
Vital Affordances, Occupying Niches: An Ecological Approach to Disability and Performance
ERIC Educational Resources Information Center
Dokumaci, Arseli
2017-01-01
This article proposes a new conceptual approach to disability and performance through a contribution that comes entirely from outside the disciplines; a re-theorisation of Gibson's [1979. "The Ecological Approach to Visual Perception". Hillsdale: Lawrence Erlbaum Associates] theory of affordances. Drawing on three visual ethnographies…
Basinwide Estimation of Habitat and Fish Populations in Streams
C. Andrew Dolloff; David G. Hankin; Gordon H. Reeves
1993-01-01
Basinwide visual estimation techniques (BVET) are statistically reliable and cost effective for estimating habitat and fish populations across entire watersheds. Survey teams visit habitats in every reach of the study area to record visual observations. At preselected intervals, teams also record actual measurements. These observations and measurements are used to...
UAV visual signature suppression via adaptive materials
NASA Astrophysics Data System (ADS)
Barrett, Ron; Melkert, Joris
2005-05-01
Visual signature suppression (VSS) methods for several classes of aircraft from WWII on are examined and historically summarized. This study shows that for some classes of uninhabited aerial vehicles (UAVs), primary mission threats do not stem from infrared or radar signatures, but from the amount that an aircraft visually stands out against the sky. The paper shows that such visual mismatch can often jeopardize mission success and/or induce the destruction of the entire aircraft. A psycho-physioptical study was conducted to establish the definition and benchmarks of a Visual Cross Section (VCS) for airborne objects. This study was centered on combining the effects of size, shape, color and luminosity or effective illuminance (EI) of a given aircraft to arrive at a VCS. A series of tests were conducted with a 6.6 ft (2 m) UAV which was fitted with optically adaptive electroluminescent sheets at altitudes of up to 1000 ft (300 m). It was shown that with proper tailoring of the color and luminosity, the VCS of the aircraft dropped from more than 4,200 cm2 to less than 1.8 cm2 at 100 m (the observed lower limit of the 20/20 human eye in this study). In layperson's terms, this indicated that the UAV essentially "disappeared". This study concludes with an assessment of the weight and volume impact of such a Visual Suppression System (VSS) on the UAV, showing that VCS levels on this class of UAV can be suppressed to below 1.8 cm2 for aircraft gross weight penalties of only 9.8%.
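The paper's VCS combines size, shape, color, and luminosity, but the exact combination rule is not given in the abstract. The sketch below is therefore a purely hypothetical model (projected area weighted by absolute Weber contrast against the sky), included only to illustrate why matching the aircraft's luminance to the sky collapses the cross section.

```python
def toy_vcs(area_cm2, target_lum, sky_lum):
    """Hypothetical visual-cross-section model: projected area
    weighted by the absolute Weber contrast of the target against
    the sky. This combination rule is an illustrative assumption,
    not the paper's actual VCS definition."""
    contrast = abs(target_lum - sky_lum) / sky_lum
    return area_cm2 * contrast

# A luminance-matched surface nearly vanishes; a dark one does not
# (illustrative luminance values, arbitrary units).
matched = toy_vcs(4200.0, target_lum=99.0, sky_lum=100.0)
dark = toy_vcs(4200.0, target_lum=20.0, sky_lum=100.0)
```

Even in this crude model, a 1% luminance mismatch shrinks a 4,200 cm2 silhouette to an effective tens of cm2, while a dark airframe retains most of its area, which is the qualitative effect the electroluminescent sheets exploit.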
Beyond perceptual expertise: revisiting the neural substrates of expert object recognition
Harel, Assaf; Kravitz, Dwight; Baker, Chris I.
2013-01-01
Real-world expertise provides a valuable opportunity to understand how experience shapes human behavior and neural function. In the visual domain, the study of expert object recognition, such as in car enthusiasts or bird watchers, has produced a large, growing, and often-controversial literature. Here, we synthesize this literature, focusing primarily on results from functional brain imaging, and propose an interactive framework that incorporates the impact of high-level factors, such as attention and conceptual knowledge, in supporting expertise. This framework contrasts with the perceptual view of object expertise that has concentrated largely on stimulus-driven processing in visual cortex. One prominent version of this perceptual account has almost exclusively focused on the relation of expertise to face processing and, in terms of the neural substrates, has centered on face-selective cortical regions such as the Fusiform Face Area (FFA). We discuss the limitations of this face-centric approach as well as the more general perceptual view, and highlight that expert related activity is: (i) found throughout visual cortex, not just FFA, with a strong relationship between neural response and behavioral expertise even in the earliest stages of visual processing, (ii) found outside visual cortex in areas such as parietal and prefrontal cortices, and (iii) modulated by the attentional engagement of the observer suggesting that it is neither automatic nor driven solely by stimulus properties. These findings strongly support a framework in which object expertise emerges from extensive interactions within and between the visual system and other cognitive systems, resulting in widespread, distributed patterns of expertise-related activity across the entire cortex. PMID:24409134
Three-dimensional holographic display of ultrasound computed tomograms
NASA Astrophysics Data System (ADS)
Andre, Michael P.; Janee, Helmar S.; Ysrael, Mariana Z.; Hodler, Jeurg; Olson, Linda K.; Leopold, George R.; Schulz, Raymond
1997-05-01
Breast ultrasound is a valuable adjunct to mammography but is limited by a very small field of view, particularly with high-resolution transducers necessary for breast diagnosis. We have been developing an ultrasound system based on a diffraction tomography method that provides slices through the breast on a large 20-cm diameter circular field of view. Eight to fifteen images are typically produced in sequential coronal planes from the nipple to the chest wall with either 0.25 or 0.5 mm pixels. As a means to simplify the interpretation of this large set of images, we report experience with 3D life-sized displays of the entire breast of human volunteers using a digital holographic technique. The compound 3D holographic images are produced from the digital image matrix, recorded on 14 × 17 inch transparency and projected on a special white-light viewbox. Holographic visualization of the entire breast has proved to be the preferred method for 3D display of ultrasound computed tomography images. It provides a unique perspective on breast anatomy and may prove useful for biopsy guidance and surgical planning.
Motor Effects from Visually Induced Disorientation in Man.
ERIC Educational Resources Information Center
Brecher, M. Herbert; Brecher, Gerhard A.
The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane with respect to the ground. A simple method of measuring disorientation was devised. In this method…
Magnetic resonance imaging of optic nerve
Gala, Foram
2015-01-01
Optic nerves are the second pair of cranial nerves and are unique as they represent an extension of the central nervous system. Apart from clinical and ophthalmoscopic evaluation, imaging, especially magnetic resonance imaging (MRI), plays an important role in the complete evaluation of optic nerve and the entire visual pathway. In this pictorial essay, the authors describe segmental anatomy of the optic nerve and review the imaging findings of various conditions affecting the optic nerves. MRI allows excellent depiction of the intricate anatomy of optic nerves due to its excellent soft tissue contrast without exposure to ionizing radiation, better delineation of the entire visual pathway, and accurate evaluation of associated intracranial pathologies. PMID:26752822
Nguyen, Peter L.; Davidson, Bennett; Akkina, Sanjeev; Guzman, Grace; Setty, Suman; Kajdacsy-Balla, Andre; Walsh, Michael J.
2015-01-01
High-definition Fourier Transform Infrared (FT-IR) spectroscopic imaging is an emerging approach to obtain detailed images that have associated biochemical information. FT-IR imaging of tissue is based on the principle that different regions of the mid-infrared are absorbed by different chemical bonds (e.g., C=O, C-H, N-H) within cells or tissue that can then be related to the presence and composition of biomolecules (e.g., lipids, DNA, glycogen, protein, collagen). In an FT-IR image, every pixel within the image contains an entire infrared (IR) spectrum that can give information on the biochemical status of the cells, which can then be exploited for cell-type or disease-type classification. In this paper, we show: how to obtain IR images from human tissues using an FT-IR system, how to modify existing instrumentation to allow for high-definition imaging capabilities, and how to visualize FT-IR images. We then present some applications of FT-IR for pathology using the liver and kidney as examples. FT-IR imaging holds exciting applications in providing a novel route to obtain biochemical information from cells and tissue in an entirely label-free, non-perturbing manner, giving new insight into biomolecular changes as part of disease processes. Additionally, this biochemical information can potentially allow for objective and automated analysis of certain aspects of disease diagnosis. PMID:25650759
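The per-pixel classification idea can be sketched as a toy band-ratio rule over a hyperspectral cube. The band positions used here (protein amide I near 1650 cm-1, lipid CH2 stretch near 2850 cm-1) are standard mid-IR assignments applied illustratively; this is not the authors' classifier.

```python
import numpy as np

def classify_pixels(cube, wavenumbers, band_a=1650.0, band_b=2850.0, thresh=1.0):
    """Toy per-pixel classification of an FT-IR image cube of shape
    (rows, cols, n_wavenumbers): compare absorbance at two bands
    (illustrative positions: ~1650 cm-1 amide I vs ~2850 cm-1 lipid
    CH2) and threshold the ratio to label 'protein-rich' pixels."""
    ia = np.argmin(np.abs(wavenumbers - band_a))   # nearest sampled band
    ib = np.argmin(np.abs(wavenumbers - band_b))
    ratio = cube[..., ia] / (cube[..., ib] + 1e-12)
    return ratio > thresh                          # boolean tissue map
```

Real pipelines use many bands and trained classifiers rather than a single ratio, but the principle is the same: each pixel's spectrum is reduced to chemically meaningful features and then labeled.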
NASA Astrophysics Data System (ADS)
Delacour, Jacques; Fournier, Laurent; Menu, Jean-Pierre
2005-02-01
To provide optimum comfort and safety, information must be as clearly visible to the driver as possible in all lighting conditions, by day and by night. It is therefore becoming essential to predict what the driver will see in a vehicle under various scene and observation conditions, so as to optimize the lighting, the ergonomics of the interfaces, and the choice of surrounding materials, which can be sources of reflection. These predictions, and the design choices that depend on them, require simulation techniques capable of modeling all light phenomena globally and simultaneously: ambient lighting, display technologies, and interior lighting, taking into account the multiple reflections of this light inside the vehicle. This was the object of a major development effort by the company OPTIS, resulting in the SPEOS Visual Ergonomics solution. A unique human vision model was developed in collaboration with worldwide specialists in visual perception to transform spectral luminance information into perceived visual information. This model, based on physiological factors, takes into account the response of the eye to light levels, color, contrast, and ambient lighting, as well as to rapid changes in surrounding luminosity, in accordance with the response of the retina. This unique tool, and the information it makes accessible, enable ergonomists and designers of on-board systems to improve global visibility conditions and, in so doing, the driver's overall perception of the environment.
Is orbital volume associated with eyeball and visual cortex volume in humans?
Pearce, Eiluned; Bridge, Holly
2013-01-01
Background In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766
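The scaling analyses above boil down to linear fits between component volumes. A minimal numpy sketch on synthetic data (illustrative volume values, not the study's measurements):

```python
import numpy as np

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept plus Pearson r,
    the basic ingredients of a volume-scaling analysis."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, intercept, r

# Synthetic orbit vs eyeball volumes (cm^3) with a positive trend;
# n = 88 matches the study's eye/orbit sample size, the volumes and
# the 0.25 scaling factor are invented for illustration.
rng = np.random.default_rng(0)
orbit = rng.uniform(24, 32, 88)
eye = 0.25 * orbit + rng.normal(0, 0.3, 88)
slope, intercept, r = linear_fit(orbit, eye)
```

The study additionally controls for overall brain volume; that extension amounts to multiple regression (adding brain volume as a covariate) rather than the simple fit shown here.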
Conway, Bevil R.; Kanwisher, Nancy G.
2016-01-01
The existence of color-processing regions in the human ventral visual pathway (VVP) has long been known from patient and imaging studies, but their location in the cortex relative to other regions, their selectivity for color compared with other properties (shape and object category), and their relationship to color-processing regions found in nonhuman primates remain unclear. We addressed these questions by scanning 13 subjects with fMRI while they viewed two versions of movie clips (colored, achromatic) of five different object classes (faces, scenes, bodies, objects, scrambled objects). We identified regions in each subject that were selective for color, faces, places, and object shape, and measured responses within these regions to the 10 conditions in independently acquired data. We report two key findings. First, the three previously reported color-biased regions (located within a band running posterior–anterior along the VVP, present in most of our subjects) were sandwiched between face-selective cortex and place-selective cortex, forming parallel bands of face, color, and place selectivity that tracked the fusiform gyrus/collateral sulcus. Second, the posterior color-biased regions showed little or no selectivity for object shape or for particular stimulus categories and showed no interaction of color preference with stimulus category, suggesting that they code color independently of shape or stimulus category; moreover, the shape-biased lateral occipital region showed no significant color bias. These observations mirror results in macaque inferior temporal cortex (Lafer-Sousa and Conway, 2013), and taken together, these results suggest a homology in which the entire tripartite face/color/place system of primates migrated onto the ventral surface in humans over the course of evolution. SIGNIFICANCE STATEMENT Here we report that color-biased cortex is sandwiched between face-selective and place-selective cortex on the bottom surface of the brain in humans. 
This face/color/place organization mirrors that seen on the lateral surface of the temporal lobe in macaques, suggesting that the entire tripartite system is homologous between species. This result validates the use of macaques as a model for human vision, making possible more powerful investigations into the connectivity, precise neural codes, and development of this part of the brain. In addition, we find substantial segregation of color from shape selectivity in posterior regions, as observed in macaques, indicating a considerable dissociation of the processing of shape and color in both species. PMID:26843649
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
Gender-specific contribution of a visual cognition network to reading abilities.
Huestegge, Lynn; Heim, Stefan; Zettelmeyer, Elena; Lange-Küttner, Christiane
2012-02-01
Based on the assumption that boys are more likely to tackle reading based on the visual modality, we assessed reading skills, visual short-term memory (VSTM), visual long-term memory for details (VLTM-D), and general non-verbal cognitive ability in primary school children. Reading was within the normal range in both accuracy and understanding. There was no reading performance gap in favour of girls; on the contrary, in this sample boys read better. An entire array of visual, non-verbal processes was associated directly or indirectly with reading in boys, whereas this pattern was not observed for the girls. ©2011 The British Psychological Society.
The Role of Clarity and Blur in Guiding Visual Attention in Photographs
ERIC Educational Resources Information Center
Enns, James T.; MacDonald, Sarah C.
2013-01-01
Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…
ERIC Educational Resources Information Center
Korat, Ofra; Levin, Iris; Atishkin, Shifra; Turgeman, Merav
2014-01-01
We investigated the effects of three facilitators: adults' support, dynamic visual vocabulary support and static visual vocabulary support on vocabulary acquisition in the context of e-book reading. Participants were 144 Israeli Hebrew-speaking preschoolers (aged 4-6) from middle SES neighborhoods. The entire sample read the e-book without a…
A streaming birefringence study of the flow at the junction of the aorta and the renal arteries
NASA Astrophysics Data System (ADS)
Rankin, G. W.; Sabbah, H. N.; Stein, P. D.
1989-11-01
Streaming birefringence with an organic dye (Milling Yellow) was used to investigate the flow near the junction of the renal arteries and the descending aorta in a model of human vessels. The dye concentration was adjusted to give fluid rheological properties typical of blood. Steady and pulsatile flow were investigated at branch-to-trunk flow ratios of 0.050 to 0.350. The flow ratio ranges over which flow separation and simple secondary flows were identified near the renal ostia during systole are reported. Streaming birefringence has the advantage of allowing visualization of the entire flow field; moreover, the fluid itself, rather than suspended particles, is observed. An important disadvantage, however, is that three-dimensional flows make interpretation difficult.
Research on metallic material defect detection based on bionic sensing of human visual properties
NASA Astrophysics Data System (ADS)
Zhang, Pei Jiang; Cheng, Tao
2018-05-01
Because the human visual system can quickly lock onto regions of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model that simulates the imaging features and attention mechanism of human vision to detect defects in metallic materials in the mechanical field. First, biologically salient low-level visual features are extracted, and expert defect markings are used as the intermediate features of simulated visual perception. An SVM is then trained on the high-level features of metal-material defects. Weighting the contribution of each stage yields a defect detection model that simulates human visual characteristics.
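The final weighting step described above can be sketched as a linear combination of per-stage feature scores. This is a minimal stand-in: the feature names, weights, and decision threshold below are illustrative assumptions, not the paper's actual parameters.

```python
# Hedged sketch of the final weighting step: combine per-stage feature
# scores into one defect score. Feature names, weights, and threshold
# are illustrative assumptions, not the paper's actual parameters.

def defect_score(features, weights):
    """Weighted sum of per-region feature scores."""
    return sum(weights[name] * value for name, value in features.items())

# Three stages from the abstract: low-level saliency, experience-mark
# match, and the SVM's high-level output (values made up).
region = {"low_level_saliency": 0.8,
          "experience_mark": 0.6,
          "svm_margin": 1.2}
weights = {"low_level_saliency": 0.3,
           "experience_mark": 0.3,
           "svm_margin": 0.4}

score = defect_score(region, weights)
is_defect = score > 0.5  # assumed decision threshold
```

In practice the weights would be fit during training rather than set by hand.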
Optical coherence tomography visualizes neurons in human entorhinal cortex
Magnain, Caroline; Augustinack, Jean C.; Konukoglu, Ender; Frosch, Matthew P.; Sakadžić, Sava; Varjabedian, Ani; Garcia, Nathalie; Wedeen, Van J.; Boas, David A.; Fischl, Bruce
2015-01-01
Abstract. The cytoarchitecture of the human brain is of great interest in diverse fields: neuroanatomy, neurology, neuroscience, and neuropathology. Traditional histology is a method that has been historically used to assess cell and fiber content in the ex vivo human brain. However, this technique suffers from significant distortions. We used a previously demonstrated optical coherence microscopy technique to image individual neurons in several square millimeters of en-face tissue blocks from layer II of the human entorhinal cortex, over 50 μm in depth. The same slices were then sectioned and stained for Nissl substance. We registered the optical coherence tomography (OCT) images with the corresponding Nissl stained slices using a nonlinear transformation. The neurons were then segmented in both images and we quantified the overlap. We show that OCT images contain information about neurons that is comparable to what can be obtained from Nissl staining, and thus can be used to assess the cytoarchitecture of the ex vivo human brain with minimal distortion. With the future integration of a vibratome into the OCT imaging rig, this technique can be scaled up to obtain undistorted volumetric data of centimeter cube tissue blocks in the near term, and entire human hemispheres in the future. PMID:25741528
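Overlap between the OCT and Nissl neuron segmentations, as quantified above, is commonly expressed as a Dice coefficient. The sketch below is a minimal illustration with toy binary masks, not the authors' registration and segmentation pipeline.

```python
# Hedged sketch: Dice overlap between two binary segmentation masks,
# a standard measure for comparing OCT and Nissl neuron segmentations.
# The toy masks below are illustrative, not study data.

def dice(mask_a, mask_b):
    """Dice coefficient for two same-sized binary masks (nested lists)."""
    a = [v for row in mask_a for v in row]
    b = [v for row in mask_b for v in row]
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2 * inter / total if total else 1.0

oct_mask   = [[1, 1, 0],
              [0, 1, 0]]
nissl_mask = [[1, 0, 0],
              [0, 1, 1]]
overlap = dice(oct_mask, nissl_mask)  # 2*2 / (3+3) ≈ 0.667
```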
Impaired spontaneous anthropomorphizing despite intact perception and social knowledge
Heberlein, Andrea S.; Adolphs, Ralph
2004-01-01
Humans spontaneously imbue the world with social meaning: we see not only emotions and intentional behaviors in humans and other animals, but also anger in the movements of thunderstorms and willful sabotage in crashing computers. Converging evidence supports a role for the amygdala, a collection of nuclei in the temporal lobe, in processing emotionally and socially relevant information. Here, we report that a patient with bilateral amygdala damage described a film of animated shapes (normally seen as full of social content) in entirely asocial, geometric terms, despite otherwise normal visual perception. Control tasks showed that the impairment did not result from a global inability to describe social stimuli or a bias in language use, nor was a similar impairment observed in eight comparison subjects with damage to orbitofrontal cortex. This finding extends the role of the amygdala to the social attributions we make even to stimuli that are not explicitly social and, in so doing, suggests that the human capacity for anthropomorphizing draws on some of the same neural systems as do basic emotional responses. PMID:15123799
Clarissa Spoken Dialogue System for Procedure Reading and Navigation
NASA Technical Reports Server (NTRS)
Hieronymus, James; Dowding, John
2004-01-01
Speech is the most natural modality humans use to communicate with other people, agents, and complex systems. A spoken dialogue system must be robust to noise and able to mimic human conversational behavior, such as correcting misunderstandings, answering simple questions about the task, and understanding most well-formed inquiries or commands. The system aims to understand the meaning of the human utterance; if it does not, it discards the utterance as being meant for someone else. The first operational system is Clarissa, a conversational procedure reader and navigator, which will be used in a System Development Test Objective (SDTO) on the International Space Station (ISS) during Expedition 10. In the present environment one astronaut reads the procedure on a Manual Procedure Viewer (MPV) or paper, and has to stop to read or turn pages, shifting focus from the task. Clarissa is designed to read and navigate ISS procedures entirely with speech, while the astronaut's eyes and hands are engaged in performing the task. The system also provides an MPV-like graphical interface so the procedure can be read visually. A demo of the system will be given.
Good expert knowledge, small scope.
Mayer, Horst
2014-01-01
During many years of occupational stress research, mostly within the German governmental program for "Humanization of Work Life", remarkable deficits concerning visual work were observed, the most striking being the lack of cooperation between the different experts. For this article, firm arguments and ideas for solutions had to be found. A pilot study in 21 enterprises was carried out (1,602 employees with different visual work tasks). A test set of screening parameters (visual acuity, refraction, phoria, binocular cooperation and efficiency, accommodation range and color vision) was measured. The glasses and/or contact lenses worn were registered and the visual tasks analyzed. For work at visual display units (VDUs), eye movements were recorded and standardized questionnaires administered (health, stress, visual work situation). Because of the heterogeneity of the sample only simple statistics were applied: within groups performing different visual work, the complaints, symptoms, hassles and uplifts were clustered (SAS software) and correlated with the results of the visual tests. Later a special project in 8 companies (676 employees) was carried out; the results were published in [14]. Discomfort and asthenopic symptoms could be seen as an interaction of the combination of tasks and working conditions with clusters of individual functional patterns, frequently originating in postural compromises. Three main causes of stress were identified: 1. demands inadequate in intensity, resolution, amount and/or time structure; 2. prevention of elementary perceptive needs; 3. exclusive use of partial capacities of the visual organ. Symptoms were also correlated with heteronomy. Other findings: the influence of the adaptation/accommodation ratio; the distracting role of attractors, especially in multitasking jobs; and the influence of high luminance differences.
Dry eyes were very common; they could be attributed to a high screen position, low light, monotonous tasks and office climate. For some parameters a diurnal rhythm could be identified. Special programs for ageing employees (the right glasses; retinal problems and signs of destabilization of vision) were found nowhere. In all enterprises, the ergophthalmological and visual-ergonomic knowledge of the occupational physicians was poor, visual ergonomists were not available, and there was only very poor cooperation with ophthalmologists and optometrists, the former of whom additionally had little knowledge of modern work.
The Elementary Operations of Human Vision Are Not Reducible to Template-Matching
Neri, Peter
2015-01-01
It is generally acknowledged that biological vision presents nonlinear characteristics, yet linear filtering accounts of visual processing are ubiquitous. The template-matching operation implemented by the linear-nonlinear cascade (linear filter followed by static nonlinearity) is the most widely adopted computational tool in systems neuroscience. This simple model achieves remarkable explanatory power while retaining analytical tractability, potentially extending its reach to a wide range of systems and levels in sensory processing. The extent of its applicability to human behaviour, however, remains unclear. Because sensory stimuli possess multiple attributes (e.g. position, orientation, size), the issue of applicability may be asked by considering each attribute one at a time in relation to a family of linear-nonlinear models, or by considering all attributes collectively in relation to a specified implementation of the linear-nonlinear cascade. We demonstrate that human visual processing can operate under conditions that are indistinguishable from linear-nonlinear transduction with respect to substantially different stimulus attributes of a uniquely specified target signal with associated behavioural task. However, no specific implementation of a linear-nonlinear cascade is able to account for the entire collection of results across attributes; a satisfactory account at this level requires the introduction of a small gain-control circuit, resulting in a model that no longer belongs to the linear-nonlinear family. Our results inform and constrain efforts at obtaining and interpreting comprehensive characterizations of the human sensory process by demonstrating its inescapably nonlinear nature, even under conditions that have been painstakingly fine-tuned to facilitate template-matching behaviour and to produce results that, at some level of inspection, do conform to linear filtering predictions. 
They also suggest that compliance with linear transduction may be the targeted outcome of carefully crafted nonlinear circuits, rather than default behaviour exhibited by basic components. PMID:26556758
A hierarchical, retinotopic proto-organization of the primate visual system at birth
Arcaro, Michael J; Livingstone, Margaret S
2017-01-01
The adult primate visual system comprises a series of hierarchically organized areas. Each cortical area contains a topographic map of visual space, with different areas extracting different kinds of information from the retinal input. Here we asked to what extent the newborn visual system resembles the adult organization. We find that hierarchical, topographic organization is present at birth and therefore constitutes a proto-organization for the entire primate visual system. Even within inferior temporal cortex, this proto-organization was already present, prior to the emergence of category selectivity (e.g., faces or scenes). We propose that this topographic organization provides the scaffolding for the subsequent development of visual cortex that commences at the onset of visual experience. DOI: http://dx.doi.org/10.7554/eLife.26196.001 PMID:28671063
Gage, Julia C.; Rodriguez, Ana Cecilia; Schiffman, Mark; Adadevoh, Sydney; Alvarez Larraondo, Manuel J.; Chumworathayi, Bandit; Lejarza, Sandra Vargas; Araya, Luis Villegas; Garcia, Francisco; Budihas, Scott R.; Long, Rodney; Katki, Hormuzd A.; Herrero, Rolando; Burk, Robert D.; Jeronimo, Jose
2010-01-01
Objectives To estimate the efficacy of a visual triage of human papillomavirus (HPV)-positive women to either immediate cryotherapy or referral if not treatable (eg, invasive cancer, large precancers). Methods We evaluated visual triage in HPV-positive women aged 25 to 55 years from the 10,000-woman Guanacaste Cohort Study (n = 552). Twelve Peruvian midwives and 5 international gynecologists assessed treatability by cryotherapy using digitized high-resolution cervical images taken at enrollment. The reference standard of treatability was determined by 2 lead gynecologists from the entire 7-year follow-up of the women. Women diagnosed with histologic cervical intraepithelial neoplasia grade 2 or worse or 5-year persistence of carcinogenic HPV infection were defined as needing treatment. Results Midwives and gynecologists judged 30.8% and 41.2% of women not treatable by cryotherapy, respectively (P < 0.01). Among 149 women needing treatment, midwives and gynecologists correctly identified 57.5% and 63.8% (P = 0.07 for difference) of 71 women judged not treatable by the lead gynecologists and 77.6% and 59.7% (P < 0.01 for difference) of 78 women judged treatable by cryotherapy. The proportion of women judged not treatable by a reviewer varied widely, ranging from 18.6% to 61.1%. Interrater agreement was poor, with mean pairwise overall agreement of 71.4% and 66.3% and κ values of 0.33 and 0.30 for midwives and gynecologists, respectively. Conclusions In future "screen-and-treat" cervical cancer prevention programs using HPV testing and cryotherapy, practitioners will visually triage HPV-positive women. The suboptimal performance of visual triage suggests that screen-and-treat programs using cryotherapy might be insufficient for treating precancerous lesions. Improved, low-technology triage methods and/or improved safe and low-technology treatment options are needed. PMID:19509579
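The κ values reported above are Cohen's kappa for pairwise rater agreement. A minimal computation from a 2 × 2 contingency table looks like this; the counts below are illustrative, not the study's data.

```python
# Hedged sketch: Cohen's kappa for two raters from a contingency table.
# Counts are illustrative, not the study's data.

def cohens_kappa(table):
    """Cohen's kappa for a square contingency table of two raters."""
    n = sum(sum(row) for row in table)
    po = sum(table[i][i] for i in range(len(table))) / n  # observed agreement
    row_marg = [sum(row) for row in table]
    col_marg = [sum(col) for col in zip(*table)]
    pe = sum(r * c for r, c in zip(row_marg, col_marg)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Rows: rater 1 (treatable / not treatable); columns: rater 2.
table = [[40, 10],
         [15, 35]]
kappa = cohens_kappa(table)
```

Kappa corrects the raw agreement rate for agreement expected by chance, which is why it can be low (0.30-0.33 here) even when overall agreement looks moderate.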
50 CFR 679.30 - General CDQ regulations.
Code of Federal Regulations, 2011 CFR
2011-10-01
... visual representation of the qualified applicant's entire organizational structure, including all... narrative description of how the CDQ group intends to harvest and process its CDQ allocations, including a...
Vertical visual features have a strong influence on cuttlefish camouflage.
Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T
2013-04-01
Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.
Hamamé, Carlos M; Cosmelli, Diego; Henriquez, Rodrigo; Aboitiz, Francisco
2011-04-26
Humans and other animals change the way they perceive the world due to experience. This process has been labeled as perceptual learning, and implies that adult nervous systems can adaptively modify the way in which they process sensory stimulation. However, the mechanisms by which the brain modifies this capacity have not been sufficiently analyzed. We studied the neural mechanisms of human perceptual learning by combining electroencephalographic (EEG) recordings of brain activity and the assessment of psychophysical performance during training in a visual search task. All participants improved their perceptual performance as reflected by an increase in sensitivity (d') and a decrease in reaction time. The EEG signal was acquired throughout the entire experiment revealing amplitude increments, specific and unspecific to the trained stimulus, in event-related potential (ERP) components N2pc and P3 respectively. P3 unspecific modification can be related to context or task-based learning, while N2pc may be reflecting a more specific attentional-related boosting of target detection. Moreover, bell and U-shaped profiles of oscillatory brain activity in gamma (30-60 Hz) and alpha (8-14 Hz) frequency bands may suggest the existence of two phases for learning acquisition, which can be understood as distinctive optimization mechanisms in stimulus processing. We conclude that there are reorganizations in several neural processes that contribute differently to perceptual learning in a visual search task. We propose an integrative model of neural activity reorganization, whereby perceptual learning takes place as a two-stage phenomenon including perceptual, attentional and contextual processes.
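The sensitivity measure d′ used above is the z-transformed hit rate minus the z-transformed false-alarm rate. Python's `statistics.NormalDist` provides the inverse normal CDF, so a minimal computation (with illustrative rates, not the study's data) is:

```python
# Hedged sketch: signal-detection sensitivity d' from hit and
# false-alarm rates. Rates are illustrative, not study data.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = Z(hit rate) - Z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Example: a session with 84% hits and 16% false alarms.
sensitivity = d_prime(0.84, 0.16)
```

An increase in d′ across sessions, as reported here, indicates genuinely better discrimination of the trained stimulus rather than a shift in response bias.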
Graph properties of synchronized cortical networks during visual working memory maintenance.
Palva, Satu; Monto, Simo; Palva, J Matias
2010-02-15
Oscillatory synchronization facilitates communication in neuronal networks and is intimately associated with human cognition. Neuronal activity in the human brain can be non-invasively imaged with magneto- (MEG) and electroencephalography (EEG), but the large-scale structure of synchronized cortical networks supporting cognitive processing has remained uncharacterized. We combined simultaneous MEG and EEG (MEEG) recordings with minimum-norm-estimate-based inverse modeling to investigate the structure of oscillatory phase synchronized networks that were active during visual working memory (VWM) maintenance. Inter-areal phase-synchrony was quantified as a function of time and frequency by single-trial phase-difference estimates of cortical patches covering the entire cortical surfaces. The resulting networks were characterized with a number of network metrics that were then compared between delta/theta- (3-6 Hz), alpha- (7-13 Hz), beta- (16-25 Hz), and gamma- (30-80 Hz) frequency bands. We found several salient differences between frequency bands. Alpha- and beta-band networks were more clustered and small-world like but had smaller global efficiency than the networks in the delta/theta and gamma bands. Alpha- and beta-band networks also had truncated-power-law degree distributions and high k-core numbers. The data converge on showing that during the VWM-retention period, human cortical alpha- and beta-band networks have a memory-load dependent, scale-free small-world structure with densely connected core-like structures. These data further show that synchronized dynamic networks underlying a specific cognitive state can exhibit distinct frequency-dependent network structures that could support distinct functional roles. Copyright 2009 Elsevier Inc. All rights reserved.
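The clustering and global-efficiency metrics contrasted across frequency bands above can be illustrated on a small unweighted graph. This pure-Python sketch (adjacency as a dict of sets, toy four-node network) is a minimal stand-in for the study's large-scale network analysis.

```python
# Hedged sketch: two graph metrics from the study, computed on a toy
# undirected network (a triangle plus one pendant node).
from collections import deque
from itertools import combinations

def clustering_coefficient(adj):
    """Mean local clustering coefficient of an undirected graph."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs (BFS)."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                total += 1 / dist[dst] if dst in dist else 0.0
    return total / pairs

adj = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
cc = clustering_coefficient(adj)
eff = global_efficiency(adj)
```

High clustering with comparatively low global efficiency, as found for the alpha- and beta-band networks, is the signature of a small-world topology.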
Proteogenomics Dashboard for the Human Proteome Project.
Tabas-Madrid, Daniel; Alves-Cruzeiro, Joao; Segura, Victor; Guruceaga, Elizabeth; Vialas, Vital; Prieto, Gorka; García, Carlos; Corrales, Fernando J; Albar, Juan Pablo; Pascual-Montano, Alberto
2015-09-04
dasHPPboard is a novel proteomics-based dashboard that collects and reports the experiments produced by the Spanish Human Proteome Project consortium (SpHPP) and aims to help HPP to map the entire human proteome. We have followed the strategy of analogous genomics projects like the Encyclopedia of DNA Elements (ENCODE), which provides a vast amount of data on human cell lines experiments. The dashboard includes results of shotgun and selected reaction monitoring proteomics experiments, post-translational modifications information, as well as proteogenomics studies. We have also processed the transcriptomics data from the ENCODE and Human Body Map (HBM) projects for the identification of specific gene expression patterns in different cell lines and tissues, taking special interest in those genes having little proteomic evidence available (missing proteins). Peptide databases have been built using single nucleotide variants and novel junctions derived from RNA-Seq data that can be used in search engines for sample-specific protein identifications on the same cell lines or tissues. The dasHPPboard has been designed as a tool that can be used to share and visualize a combination of proteomic and transcriptomic data, providing at the same time easy access to resources for proteogenomics analyses. The dasHPPboard can be freely accessed at: http://sphppdashboard.cnb.csic.es.
The development of interactive online learning tools for the study of anatomy.
O'Byrne, Patrick J; Patry, Anne; Carnegie, Jacqueline A
2008-01-01
The study of human anatomy is a core component of health science programs. However, large student enrolments and the content-packed curricula associated with these programs have made it difficult for students to have regular access to cadaver laboratories. Adobe Flash MX was used with cadaver digital photographs and textbook-derived illustrations to develop interactive anatomy images that were made available to undergraduate health science students enrolled in first-year combined anatomy and physiology (ANP) courses at the University of Ottawa. Colour coding was used to direct student attention, facilitate name-structure association, improve visualization of structure contours, assist students in the construction of anatomical pathways, and reinforce functional or anatomical groupings. The ability of two-dimensional media to support the visualization of three-dimensional structure was extended by developing the fade-through image (students use a sliding bar to move through tissues) as well as the rotating image, in which entire organs such as the skull were photographed at eight angles of rotation. Finally, students were provided with interactive exercises that they could attempt repeatedly to obtain immediate feedback regarding their learning progress. Survey data revealed that the learning and self-testing tools were used widely and that students found them relevant and supportive of their self-learning. Interestingly, student summative examination outcomes did not differ between those students who had access to the online tools and a corresponding student group from the previous academic year who did not. Interactive learning tools can be tailored to meet program-specific learning objectives as a cost-effective means of facilitating the study of human anatomy. Virtual interactive anatomy exercises provide learning opportunities for students outside the lecture room that are of special value to visual and kinesthetic learners.
NASA Astrophysics Data System (ADS)
Dong, Jing; Gora, Michalina J.; Beaulieu-Ouellet, Emilie; Queneherve, Lucille H.; Grant, Catriona N.; Rosenberg, Mireille; Nishioka, Norman S.; Fasano, Alessio; Tearney, Guillermo J.
2017-02-01
Celiac disease (CD) affects around 1% of the global population and can cause serious long-term symptoms including malnutrition, fatigue, and diarrhea, amongst others. Despite this, it is often left undiagnosed. Currently, a tissue diagnosis of CD is made by random endoscopic biopsy of the duodenum to confirm the existence of microscopic morphologic alterations in the intestinal mucosa. However, duodenal endoscopic biopsy is problematic because the morphological changes can be focal and endoscopic biopsy is plagued by sampling error. Additionally, tissue artifacts can also be an issue because cuts in the transverse plane can make duodenal villi appear artifactually shortened and can bias the assessment of intraepithelial inflammation. Moreover, endoscopic biopsy is costly and poorly tolerated, as the patient needs to be sedated to perform the procedure. Our lab has previously developed technology termed tethered capsule OCT endomicroscopy (TCE) to overcome these diagnostic limitations of endoscopy. TCE involves swallowing an optomechanically engineered pill that generates 3D images of the GI tract as it traverses the lumen of the organ via peristalsis, assisted by gravity. In several patients we have demonstrated TCE imaging of duodenal villi; however, the current TCE device design is not optimal for CD diagnosis because the villi compress when in contact with the smooth capsule wall. In this work, we present methods for structuring the outer surface of the capsule to improve the visualization of villus height and crypt depth. Preliminary results in humans suggest that the new TCE capsule enables better visualization of villous architecture, making it possible to comprehensively scan the entire duodenum to obtain a more accurate tissue diagnosis of CD.
Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L
2004-02-09
Array comparative genomic hybridization (CGH) is a technique which detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome have made high resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments to displays of high resolution chromosome profiles. Data is imported from a simple tab delimited text file obtained from standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data is displayed, users have the option of hiding or flagging DNA segments based on user defined criteria, and retrieving annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool used to view and analyze array CGH data. The software gives users the ability to view the data in an overall genomic view as well as magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
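The per-clone averaging step described above (mean ratio and standard deviation across replicate spots) can be sketched as a simple grouping operation. This is a hedged illustration of the idea, not SeeGH's actual file format or code; the clone names and ratios below are made up.

```python
# Hedged sketch: group replicate spot log ratios by clone and compute
# mean and standard deviation, as SeeGH does before display.
# Clone names and values are illustrative, not real data.
from statistics import mean, stdev

def summarize_replicates(spots):
    """Summarize (clone, ratio) pairs as {clone: (mean, stdev)}."""
    by_clone = {}
    for clone, ratio in spots:
        by_clone.setdefault(clone, []).append(ratio)
    return {clone: (mean(r), stdev(r) if len(r) > 1 else 0.0)
            for clone, r in by_clone.items()}

spots = [("cloneA", 0.10), ("cloneA", 0.14), ("cloneA", 0.12),
         ("cloneB", -0.52), ("cloneB", -0.48)]
summary = summarize_replicates(spots)
```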
NASA Astrophysics Data System (ADS)
Babbar-Sebens, M.; Mukhopadhyay, S.
2014-12-01
Web 2.0 technologies are useful resources for reaching out to larger stakeholder communities and involve them in policy making and planning efforts. While these technologies have been used in the past to support education and communication endeavors, we have developed a novel, web-based, interactive planning tool that involves the community in using science-based methods for the design of potential runoff management strategies on their landscape. The tool, Watershed REstoration using Spatio-Temporal Optimization of Resources (WRESTORE), uses a democratic voting process coupled with visualization interfaces, computational simulation and optimization models, and user modeling techniques to support a human-centered design approach. The tool can be used to engage diverse watershed stakeholders and landowners via the internet, thereby improving opportunities for outreach and collaborations. Users are able to (a) design multiple types of conservation practices at their field-scale catchment and at the entire watershed scale, (b) examine impacts and limitations of their decisions on their neighboring catchments and on the entire watershed, (c) compare alternatives via a cost-benefit analysis, (d) vote on their "favorite" designs based on their preferences and constraints, and (e) propose their "favorite" alternatives to policy makers and other stakeholders. In this presentation, we will demonstrate the effectiveness of WRESTORE for designing alternatives of conservation practices to reduce peak flows in a Midwestern watershed, present results on multiple approaches for engaging with larger communities, and discuss potential for future developments.
Molloy, Carly S; Wilson-Ching, Michelle; Doyle, Lex W; Anderson, Vicki A; Anderson, Peter J
2014-04-01
Contemporary data on visual memory and learning in survivors born extremely preterm (EP; <28 weeks gestation) or with extremely low birth weight (ELBW; <1,000 g) are lacking. Geographically determined cohort study of 298 consecutive EP/ELBW survivors born in 1991 and 1992, and 262 randomly selected normal-birth-weight controls. Visual learning and memory data were available for 221 (74.2%) EP/ELBW subjects and 159 (60.7%) controls. EP/ELBW adolescents exhibited significantly poorer performance across visual memory and learning variables compared with controls. Visual learning and delayed visual memory were particularly problematic and remained so after controlling for visual-motor integration and visual perception and excluding adolescents with neurosensory disability, and/or IQ <70. Male EP/ELBW adolescents or those treated with corticosteroids had poorer outcomes. EP/ELBW adolescents have poorer visual memory and learning outcomes compared with controls, which cannot be entirely explained by poor visual perceptual or visual constructional skills or intellectual impairment.
Noma, Kazuhiro; Shirakawa, Yasuhiro; Kanaya, Nobuhiko; Okada, Tsuyoshi; Maeda, Naoaki; Ninomiya, Takayuki; Tanabe, Shunsuke; Sakurama, Kazufumi; Fujiwara, Toshiyoshi
2018-03-01
Evaluation of the blood supply to gastric conduits is critically important to avoid complications after esophagectomy. We began visual evaluation of blood flow using indocyanine green (ICG) fluorescent imaging in July 2015, to reduce reconstructive complications. In this study, we aimed to statistically verify the efficacy of blood flow evaluation using our simplified ICG method. A total of 285 consecutive patients who underwent esophagectomy and gastric conduit reconstruction were reviewed and divided into 2 groups: before and after introduction of ICG evaluation. The entire cohort and 68 patient pairs after propensity score matching (PS-M) were evaluated for clinical outcomes and the effect of visualized evaluation on reducing the risk of complication. The leakage rate in the ICG group was significantly lower than in the non-ICG group for each severity grade, both in the entire cohort (285 subjects) and after PS-M; the rates of other major complications, including recurrent laryngeal nerve palsy and pneumonia, were not different. The duration of postoperative ICU stay was approximately 1 day shorter in the ICG group than in the non-ICG group in the entire cohort, and approximately 2 days shorter after PS-M. Visualized evaluation of blood flow with ICG methods significantly reduced the rate of anastomotic complications of all Clavien-Dindo (CD) grades. Odds ratios for ICG evaluation decreased with CD grade (0.3419 for CD ≥ 1; 0.241 for CD ≥ 2; and 0.2153 for CD ≥ 3). Objective evaluation of blood supply to the reconstructed conduit using ICG fluorescent imaging reduces the risk and degree of anastomotic complication. Copyright © 2017 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
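The odds ratios reported per Clavien-Dindo grade compare leakage odds with and without ICG evaluation. A minimal 2 × 2 odds-ratio computation looks like this; the counts are illustrative, not the study's data.

```python
# Hedged sketch: odds ratio from a 2x2 outcome table.
# Counts are illustrative, not the study's data.

def odds_ratio(exposed_events, exposed_nonevents,
               control_events, control_nonevents):
    """OR = (a/b) / (c/d) for a 2x2 outcome table."""
    return ((exposed_events / exposed_nonevents)
            / (control_events / control_nonevents))

# Leaks / no-leaks with ICG vs. without ICG (made-up counts).
or_icg = odds_ratio(5, 95, 20, 100)
```

An odds ratio well below 1, as in the study's 0.22-0.34 range, indicates that ICG evaluation is associated with substantially lower odds of anastomotic leakage.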
Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram
2006-04-01
The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures; VBS). Medical schools are combining these virtual training systems and classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (using axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, that make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.
Visual pattern image sequence coding
NASA Technical Reports Server (NTRS)
Silsbee, Peter; Bovik, Alan C.; Chen, Dapang
1990-01-01
The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size, and run 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along the additional, temporal dimension, to achieve unprecedented image sequence coding performance.
Global versus local adaptation in fly motion-sensitive neurons
Neri, Peter; Laughlin, Simon B
2005-01-01
Flies, like humans, experience a well-known consequence of adaptation to visual motion, the waterfall illusion. Direction-selective neurons in the fly lobula plate permit a detailed analysis of the mechanisms responsible for motion adaptation and their function. Most of these neurons are spatially non-opponent: they sum responses to motion in the preferred direction across their entire receptive field, and adaptation depresses responses by subtraction and by reducing contrast gain. When we adapted a small area of the receptive field to motion in its anti-preferred direction, we discovered that directional gain at unadapted regions was enhanced. This novel phenomenon shows that neuronal responses to the direction of stimulation in one area of the receptive field are dynamically adjusted to the history of stimulation both within and outside that area. PMID:16191636
O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C
2016-11-09
Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.
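The two conventional diffusion-tensor metrics contrasted with kurtosis here, mean diffusivity and fractional anisotropy, are standard functions of the diffusion tensor's eigenvalues. A minimal sketch of those textbook formulas (not code from the study):

```python
import math

def mean_diffusivity(l1, l2, l3):
    """Mean diffusivity: the average of the three tensor eigenvalues."""
    return (l1 + l2 + l3) / 3

def fractional_anisotropy(l1, l2, l3):
    """FA = sqrt(3/2) * ||lambda - mean|| / ||lambda||, ranging from 0 to 1."""
    md = mean_diffusivity(l1, l2, l3)
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den)

print(fractional_anisotropy(1.0, 1.0, 1.0))  # 0.0: isotropic diffusion
print(round(fractional_anisotropy(1.7, 0.3, 0.3), 3))  # strongly anisotropic
```

Mean kurtosis, the metric that did differ between hemifield representations, quantifies the non-Gaussianity of diffusion and is computed from the separate kurtosis tensor rather than from these eigenvalues.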
Bang, Seungmin; Park, Jeong Youp; Jeong, Seok; Kim, Young Ho; Shim, Han Bo; Kim, Tae Song; Lee, Don Haeng; Song, Si Young
2009-02-01
We developed a capsule endoscope (CE), "MiRo," with the novel transmission technology of electric-field propagation. The technology uses the human body as a conductive medium for data transmission. Specifications of the prototype include the ability to receive real-time images; size, 10.8 x 24 mm; weight, 3.3 g; field of view, 150 degrees; resolution, 320 x 320 pixels; and transmission speed, 2 frames per second. To evaluate the clinical safety and diagnostic feasibility of the prototype MiRo, we conducted a multicenter clinical trial. All volunteers underwent baseline examinations, including EGD and electrocardiography for the screening of GI obstructive and cardiovascular diseases, before the trial. In the first 10 cases, 24-hour Holter monitoring was also performed. To evaluate the diagnostic feasibility, the transmission rate of the captured images, the inspection rate of the entire small bowel, and the quality of transmitted images (graded as outstanding, excellent, good/average, below average, and poor) were analyzed. Of the 49 healthy volunteers, 45 were included in the trial, and 4 were excluded because of baseline abnormalities. No adverse effects were noted. All CEs were expelled within 2 days, and the entire small bowel could be explored in all cases. The transmission rates of the captured image in the stomach, small bowel, and colon were 99.5%, 99.6%, and 97.2%, respectively. The mean total duration of image transmission was 9 hours, 51 minutes, and the mean transit time of the entire small bowel was 4 hours, 33 minutes. Image quality was graded as good or better in 41 cases (91.1%). Details of the villi and vascular structures of the entire small bowel were clearly visualized in 31 cases (68.9%). MiRo is safe and effective for exploring the entire small bowel, with good image quality and real-time feasibility. This novel transmission technology may have applications beyond the field of capsule endoscopy.
Contributions of visual and embodied expertise to body perception.
Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D
2012-01-01
Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.
Reina, Miguel A; Lirk, Philipp; Puigdellívol-Sánchez, Anna; Mavar, Marija; Prats-Galino, Alberto
2016-03-01
The ligamentum flavum (LF) forms the anatomic basis for the loss-of-resistance technique essential to the performance of epidural anesthesia. However, the LF presents considerable interindividual variability, including the possibility of midline gaps, which may influence the performance of epidural anesthesia. We devised a method to digitally reconstruct the anatomy of the LF from magnetic resonance images in order to clarify its exact limits and edges and its varying thickness depending on the area examined, while avoiding destructive methods such as dissection. Anatomic cadaveric cross sections enabled us to visually check the definition of the edges along the entire LF and compare them using 3D image reconstruction methods. Reconstruction was performed on images obtained from 7 patients. Images from 1 patient were used as a basis for the 3D spinal anatomy tool. In parallel, axial cuts, 2 to 3 cm thick, were performed in lumbar spines of 4 frozen cadavers. This technique allowed us to identify the entire ligament and its exact limits, while avoiding alterations resulting from cutting processes or from preparation methods. The LF extended between the laminas of adjacent vertebrae at all vertebral levels of the patients examined, but midline gaps were regularly encountered. These anatomical variants were reproduced in a 3D portable document format. The major anatomical features of the LF were reproduced in the 3D model. Details of its structure and variations of thickness in successive sagittal and axial slices could be visualized. Gaps within the LF previously studied in cadavers have been identified in our interactive 3D model, which may help to understand their nature, as well as possible implications for epidural techniques.
Dashboard visualizations: Supporting real-time throughput decision-making.
Franklin, Amy; Gantela, Swaroop; Shifarraw, Salsawit; Johnson, Todd R; Robinson, David J; King, Brent R; Mehta, Amit M; Maddow, Charles L; Hoot, Nathan R; Nguyen, Vickie; Rubio, Adriana; Zhang, Jiajie; Okafor, Nnaemeka G
2017-07-01
Providing timely and effective care in the emergency department (ED) requires the management of individual patients as well as the flow and demands of the entire department. Strategic changes to work processes, such as adding a flow coordination nurse or a physician in triage, have demonstrated improvements in throughput times. However, such global strategic changes do not address the real-time, often opportunistic workflow decisions of individual clinicians in the ED. We believe that real-time representation of the status of the entire emergency department and each patient within it through information visualizations will better support clinical decision-making in-the-moment and provide for rapid intervention to improve ED flow. This notion is based on previous work where we found that clinicians' workflow decisions were often based on an in-the-moment local perspective, rather than a global perspective. Here, we discuss the challenges of designing and implementing visualizations for ED through a discussion of the development of our prototype Throughput Dashboard and the potential it holds for supporting real-time decision-making. Copyright © 2017. Published by Elsevier Inc.
First trimester size charts of embryonic brain structures.
Gijtenbeek, M; Bogers, H; Groenenberg, I A L; Exalto, N; Willemsen, S P; Steegers, E A P; Eilers, P H C; Steegers-Theunissen, R P M
2014-02-01
Can reliable size charts of human embryonic brain structures be created from three-dimensional ultrasound (3D-US) visualizations? Reliable size charts of human embryonic brain structures can be created from high-quality images. Previous studies on the visualization of both the cavities and the walls of the brain compartments were performed using 2D-US, 3D-US or invasive intrauterine sonography. However, the walls of the diencephalon, mesencephalon and telencephalon have not been measured non-invasively before. Improvements in transvaginal ultrasound techniques over the last decade allow better visualization and offer the tools to measure these human embryonic brain structures with precision. This study is embedded in a prospective periconceptional cohort study. A total of 141 pregnancies were included before the sixth week of gestation and were monitored until delivery to assess complications and adverse outcomes. For the analysis of embryonic growth, 596 3D-US scans encompassing the entire embryo were obtained from 106 singleton non-malformed live birth pregnancies between 7(+0) and 12(+6) weeks' gestational age (GA). Using 4D View (3D software), the measured embryonic brain structures comprised thickness of the diencephalon, mesencephalon and telencephalon, and the total diameter of the diencephalon and mesencephalon. Of 596 3D scans, 161 (27%) high-quality scans of 79 pregnancies were eligible for analysis. The reliability of all embryonic brain structure measurements, based on the intra-class correlation coefficients (ICCs) (all above 0.98), was excellent. Bland-Altman plots showed moderate agreement for measurements of the telencephalon, but for all other measurements the agreement was good. Size charts were constructed according to crown-rump length (CRL). The percentage of high-quality scans suitable for analysis of these brain structures was low (27%).
The size charts of human embryonic brain structures can be used to study normal and abnormal brain development in the future. Also, the effects of periconceptional maternal exposures, such as folic acid supplement use and smoking, on human embryonic brain development can be a topic of future research. This study was supported by the Department of Obstetrics and Gynaecology of the Erasmus University Medical Center. M.G. was supported by an additional grant from the Sophia Foundation for Medical Research (SSWO grant number 644). No competing interests are declared.
Retinal projections in the electric catfish (Malapterurus electricus).
Ebbesson, S O; O'Donnel, D
1980-01-01
The poorly developed visual system of the electric catfish was studied with silver-degeneration methods. Retinal projections were entirely contralateral, terminating in the hypothalamic optic nucleus, the lateral geniculate nucleus, the dorsomedial optic nucleus, the pretectal nuclei including the cortical nucleus, and the optic tectum. The small size and lack of differentiation of the visual system in the electric catfish suggest a relatively small role for this sensory system in this species.
Assessing GPS Constellation Resiliency in an Urban Canyon Environment
2015-03-26
Taipei, Taiwan as his area of interest. His GPS constellation is modeled in the Satellite Toolkit (STK), where augmentation satellites can be added and... interaction. SEAS also provides a visual display of the simulation, which is useful for verification and debugging portions of the analysis. Furthermore... entire system. Interpreting the model is aided by the visual display of the agents moving in the region of interest. Furthermore, SEAS collects
3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.
Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S
2015-10-20
Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
How cortical neurons help us see: visual recognition in the human brain
Blumberg, Julie; Kreiman, Gabriel
2010-01-01
Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161
Myofiber Architecture of the Human Atria as Revealed by Submillimeter Diffusion Tensor Imaging.
Pashakhanloo, Farhad; Herzka, Daniel A; Ashikaga, Hiroshi; Mori, Susumu; Gai, Neville; Bluemke, David A; Trayanova, Natalia A; McVeigh, Elliot R
2016-04-01
Accurate knowledge of the human atrial fibrous structure is paramount in understanding the mechanisms of atrial electric function in health and disease. Thus far, such knowledge has been acquired from destructive sectioning, and there is a paucity of data about atrial fiber architecture variability in the human population. In this study, we have developed a customized 3-dimensional diffusion tensor magnetic resonance imaging sequence on a clinical scanner that makes it possible to image an entire intact human heart specimen ex vivo at submillimeter resolution. The data from 8 human atrial specimens obtained with this technique present complete maps of the fibrous organization of the human atria. The findings demonstrate that the main features of atrial anatomy are mostly preserved across subjects although the exact location and orientation of atrial bundles vary. Using the full tractography data, we were able to cluster, visualize, and characterize the distinct major bundles in the human atria. Furthermore, quantitative characterization of the fiber angles across the atrial wall revealed that the transmural fiber angle distribution is heterogeneous throughout different regions of the atria. The application of submillimeter diffusion tensor magnetic resonance imaging provides an unprecedented level of information on both human atrial structure, as well as its intersubject variability. The high resolution and fidelity of this data could enhance our understanding of structural contributions to atrial rhythm and pump disorders and lead to improvements in their targeted treatment. © 2016 American Heart Association, Inc.
Madden, David J.
2007-01-01
Older adults are often slower and less accurate than are younger adults in performing visual-search tasks, suggesting an age-related decline in attentional functioning. Age-related decline in attention, however, is not entirely pervasive. Visual search that is based on the observer’s expectations (i.e., top-down attention) is relatively preserved as a function of adult age. Neuroimaging research suggests that age-related decline occurs in the structure and function of brain regions mediating the visual sensory input, whereas activation of regions in the frontal and parietal lobes is often greater for older adults than for younger adults. This increased activation may represent an age-related increase in the role of top-down attention during visual tasks. To obtain a more complete account of age-related decline and preservation of visual attention, current research is beginning to explore the relation of neuroimaging measures of brain structure and function to behavioral measures of visual attention. PMID:18080001
Availability Issues in Wireless Visual Sensor Networks
Costa, Daniel G.; Silva, Ivanovitch; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo
2014-01-01
Wireless visual sensor networks have been considered for a large set of monitoring applications related to surveillance, tracking and multipurpose visual monitoring. When sensors are deployed over a monitored field, permanent faults may happen during the network lifetime, reducing the monitoring quality or rendering parts of the network, or the entire network, unavailable. Unlike in scalar sensor networks, camera-enabled sensors collect information following a directional sensing model, which changes the notions of vicinity and redundancy. Moreover, visual source nodes may have different relevancies for the applications, according to the monitoring requirements and cameras' poses. In this paper we discuss the most relevant availability issues related to wireless visual sensor networks, addressing availability evaluation and enhancement. Such discussions are valuable for the design, deployment and management of wireless visual sensor networks. PMID:24526301
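The directional sensing model mentioned above means a camera node covers a sector defined by its position, heading, field-of-view angle, and range, rather than the disk assumed for scalar sensors. A minimal point-in-sector coverage check under that model (parameter names and values are illustrative, not from the paper):

```python
import math

def covers(cam_x, cam_y, heading_deg, fov_deg, sensing_range, px, py):
    """True if point (px, py) lies inside the camera's sensing sector."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > sensing_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest angular difference between bearing and heading, in [0, 180].
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= fov_deg / 2

# Camera at the origin, facing east (0 deg), 60-degree FoV, range 10.
print(covers(0, 0, 0, 60, 10, 5, 1))   # True: nearly straight ahead
print(covers(0, 0, 0, 60, 10, 0, 5))   # False: 90 degrees off-axis
print(covers(0, 0, 0, 60, 10, 20, 0))  # False: beyond sensing range
```

Under this model, two nearby cameras are redundant only if their sectors overlap, which is why the paper notes that vicinity and redundancy differ from the scalar case.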
Global facilitation of attended features is obligatory and restricts divided attention.
Andersen, Søren K; Hillyard, Steven A; Müller, Matthias M
2013-11-13
In many common situations such as driving an automobile it is advantageous to attend concurrently to events at different locations (e.g., the car in front, the pedestrian to the side). While spatial attention can be divided effectively between separate locations, studies investigating attention to nonspatial features have often reported a "global effect", whereby items having the attended feature may be preferentially processed throughout the entire visual field. These findings suggest that spatial and feature-based attention may at times act in direct opposition: spatially divided foci of attention cannot be truly independent if feature attention is spatially global and thereby affects all foci equally. In two experiments, human observers attended concurrently to one of two overlapping fields of dots of different colors presented in both the left and right visual fields. When the same color or two different colors were attended on the two sides, deviant targets were detected accurately, and visual-cortical potentials elicited by attended dots were enhanced. However, when the attended color on one side matched the ignored color on the opposite side, attentional modulation of cortical potentials was abolished. This loss of feature selectivity could be attributed to enhanced processing of unattended items that shared the color of the attended items in the opposite field. Thus, while it is possible to attend to two different colors at the same time, this ability is fundamentally constrained by spatially global feature enhancement in early visual-cortical areas, which is obligatory and persists even when it explicitly conflicts with task demands.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eric A. Wernert; William R. Sherman; Patrick O'Leary
Immersive visualization makes use of the medium of virtual reality (VR): it is a subset of virtual reality focused on the application of VR technologies to scientific and information visualization. As the name implies, there is a particular focus on the physically immersive aspect of VR that more fully engages the perceptual and kinesthetic capabilities of the scientist with the goal of producing greater insight. The immersive visualization community is uniquely positioned to address the analysis needs of the wide spectrum of domain scientists who are becoming increasingly overwhelmed by data. The outputs of computational science simulations and high-resolution sensors are creating a data deluge. Data is coming in faster than it can be analyzed, and there are countless opportunities for discovery that are missed as the data speeds by. By more fully utilizing the scientist's visual and other sensory systems, and by offering a more natural user interface with which to interact with computer-generated representations, immersive visualization offers great promise in taming this data torrent. However, increasing the adoption of immersive visualization in scientific research communities can only happen by simultaneously lowering the engagement threshold while raising the measurable benefits of adoption. Scientists' time spent immersed with their data will thus be rewarded with higher productivity, deeper insight, and improved creativity. Immersive visualization ties together technologies and methodologies from a variety of related but frequently disjoint areas, including hardware, software and human-computer interaction (HCI) disciplines. In many ways, hardware is a solved problem. There are well-established technologies including large walk-in systems such as the CAVE™ and head-based systems such as the Wide-5™.
The advent of new consumer-level technologies now enables an entirely new generation of immersive displays, with smaller footprints and costs, widening the potential consumer base. While one would be hard-pressed to call software a solved problem, we now understand considerably more about best practices for designing and developing sustainable, scalable software systems, and we have useful software examples that illuminate the way to even better implementations. As with any research endeavour, HCI will always be exploring new topics in interface design, but we now have a sizable knowledge base of the strengths and weaknesses of the human perceptual systems and we know how to design effective interfaces for immersive systems. So, in a research landscape with a clear need for better visualization and analysis tools, a methodology in immersive visualization that has been shown to effectively address some of those needs, and vastly improved supporting technologies and knowledge of hardware, software, and HCI, why hasn't immersive visualization 'caught on' more with scientists? What can we do as a community of immersive visualization researchers and practitioners to facilitate greater adoption by scientific communities so as to make the transition from 'the promise of virtual reality' to 'the reality of virtual reality'?
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
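Comparisons of this kind between brain measurements and network layers are commonly made with representational similarity analysis: each system is summarized by a dissimilarity matrix over stimulus conditions, and the matrices' upper triangles are then correlated. A minimal sketch under that assumption (the abstract does not spell out the method, and the matrices below are illustrative):

```python
def upper_triangle(m):
    """Flatten the strictly upper-triangular entries of a square matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative 3-condition dissimilarity matrices for a brain region and a DNN layer.
rdm_brain = [[0.0, 0.8, 0.9],
             [0.8, 0.0, 0.2],
             [0.9, 0.2, 0.0]]
rdm_dnn   = [[0.0, 0.7, 0.8],
             [0.7, 0.0, 0.3],
             [0.8, 0.3, 0.0]]
print(round(pearson(upper_triangle(rdm_brain), upper_triangle(rdm_dnn)), 3))
```

Because the dissimilarity matrices abstract away the measurement units, the same comparison can be run against MEG time points, fMRI voxels, or DNN layer activations.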
Capturing specific abilities as a window into human individuality: the example of face recognition.
Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken
2012-01-01
Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT); and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.
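The item response theory analyses mentioned above model the probability of a correct answer as a logistic function of respondent ability and item parameters. A minimal two-parameter-logistic sketch (generic formula; the parameter values are illustrative, not estimates from these tests):

```python
import math

def p_correct(theta, a, b):
    """2PL IRT: P(correct | ability theta, discrimination a, difficulty b)."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# An item of average difficulty (b=0) answered by an average respondent
# (theta=0) is a coin flip regardless of discrimination.
print(p_correct(0.0, 1.5, 0.0))            # 0.5
print(round(p_correct(2.0, 1.5, 0.0), 3))  # high ability -> high probability
```

Fitting a and b per item is what lets IRT place scores from different tests, such as the CFMT and AAMT, on comparable ability scales.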
Multiple Transmitter Receptors in Regions and Layers of the Human Cerebral Cortex
Zilles, Karl; Palomero-Gallagher, Nicola
2017-01-01
We measured the densities (fmol/mg protein) of 15 different receptors of various transmitter systems in the supragranular, granular and infragranular strata of 44 areas of visual, somatosensory, auditory and multimodal association systems of the human cerebral cortex. Receptor densities were obtained after labeling of the receptors using quantitative in vitro receptor autoradiography in human postmortem brains. The mean density of each receptor type over all cortical layers and of each of the three major strata varies between cortical regions. In a single cortical area, the multi-receptor fingerprints of its strata (i.e., polar plots, each visualizing the densities of multiple different receptor types in supragranular, granular or infragranular layers of the same cortical area) differ in shape and size indicating regional and laminar specific balances between the receptors. Furthermore, the three strata are clearly segregated into well definable clusters by their receptor fingerprints. Fingerprints of different cortical areas systematically vary between functional networks, and with the hierarchical levels within sensory systems. Primary sensory areas are clearly separated from all other cortical areas particularly by their very high muscarinic M2 and nicotinic α4β2 receptor densities, and to a lesser degree also by noradrenergic α2 and serotonergic 5-HT2 receptors. Early visual areas of the dorsal and ventral streams are segregated by their multi-receptor fingerprints. The results are discussed on the background of functional segregation, cortical hierarchies, microstructural types, and the horizontal (layers) and vertical (columns) organization in the cerebral cortex. We conclude that a cortical column is composed of segments, which can be assigned to the cortical strata. The segments differ by their patterns of multi-receptor balances, indicating different layer-specific signal processing mechanisms. 
Additionally, the differences between the strata- and area-specific fingerprints of the 44 areas reflect the segregation of the cerebral cortex into functionally and topographically definable groups of cortical areas (visual, auditory, somatosensory, limbic, motor), and reveal their hierarchical position (from primary and unimodal (early) sensory to higher sensory and finally to multimodal association areas). Highlights: Densities of transmitter receptors vary between areas of the human cerebral cortex. Multi-receptor fingerprints segregate cortical layers. The densities of all examined receptor types together reach their highest values in the supragranular stratum of all areas. The lowest values are found in the infragranular stratum. Multi-receptor fingerprints of entire areas and their layers segregate functional systems. Cortical types (primary sensory, motor, multimodal association) differ in their receptor fingerprints. PMID:28970785
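A multi-receptor fingerprint of this kind can be compared numerically once the densities are treated as a vector per stratum. The sketch below (plain Python; the receptor names and density values are invented for illustration, not taken from the study) normalizes two fingerprints to unit length and measures how much their shapes differ:

```python
import math

# Hypothetical mean receptor densities (fmol/mg protein) for two strata of
# one cortical area; values are illustrative only.
fingerprints = {
    "V1_supragranular": {"M2": 420.0, "a4b2": 310.0, "5HT2": 180.0, "a2": 90.0},
    "V1_infragranular": {"M2": 260.0, "a4b2": 150.0, "5HT2": 120.0, "a2": 60.0},
}

def normalize(fp):
    """Scale a fingerprint to unit length so shape, not absolute density, is compared."""
    norm = math.sqrt(sum(v * v for v in fp.values()))
    return {k: v / norm for k, v in fp.items()}

def fingerprint_distance(fp_a, fp_b):
    """Euclidean distance between two normalized fingerprints (same receptor keys)."""
    a, b = normalize(fp_a), normalize(fp_b)
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

d = fingerprint_distance(fingerprints["V1_supragranular"],
                         fingerprints["V1_infragranular"])
```

Distances of this kind are one simple way strata could be clustered by fingerprint shape, as the abstract describes.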
Zang, Xuelian; Geyer, Thomas; Assumpção, Leonardo; Müller, Hermann J; Shi, Zhuanghua
2016-01-01
Selective attention determines the effectiveness of implicit contextual learning (e.g., Jiang and Leung, 2005). Visual foreground-background segmentation, on the other hand, is a key process in the guidance of attention (Wolfe, 2003). In the present study, we examined the impact of foreground-background segmentation on contextual cueing of visual search in three experiments. A visual search display, consisting of distractor 'L's and a target 'T', was overlaid on a task-neutral cuboid on the same depth plane (Experiment 1), on stereoscopically separated depth planes (Experiment 2), or spread over the entire display on the same depth plane (Experiment 3). Half of the search displays contained repeated target-distractor arrangements, whereas the other half was always newly generated. The task-neutral cuboid was constant during an initial training session, but was either rotated by 90° or entirely removed in the subsequent test sessions. We found that the gains resulting from repeated presentation of display arrangements during training (i.e., contextual-cueing effects) were diminished when the cuboid was changed or removed in Experiment 1, but remained intact in Experiments 2 and 3 when the cuboid was placed in a different depth plane, or when the items were randomly spread over the whole display but not on the edges of the cuboid. These findings suggest that foreground-background segmentation occurs prior to contextual learning, and only objects/arrangements that are grouped as foreground are learned over the course of repeated visual search.
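The contextual-cueing effect measured here is conventionally quantified as the search-time benefit for repeated over novel displays. A minimal sketch, with invented reaction times:

```python
# Invented mean search RTs (ms) per training epoch for repeated vs. novel displays.
rt = {"repeated": [980, 910, 870, 850], "novel": [1000, 980, 975, 970]}

def cueing_effect(rt_novel, rt_repeated):
    """Contextual-cueing effect per epoch: novel minus repeated RT.
    Positive values mean repeated contexts are searched faster."""
    return [n - r for n, r in zip(rt_novel, rt_repeated)]

effect = cueing_effect(rt["novel"], rt["repeated"])  # grows as learning builds up
```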
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
Mapping visual cortex in monkeys and humans using surface-based atlases
NASA Technical Reports Server (NTRS)
Van Essen, D. C.; Lewis, J. W.; Drury, H. A.; Hadjikhani, N.; Tootell, R. B.; Bakircioglu, M.; Miller, M. I.
2001-01-01
We have used surface-based atlases of the cerebral cortex to analyze the functional organization of visual cortex in humans and macaque monkeys. The macaque atlas contains multiple partitioning schemes for visual cortex, including a probabilistic atlas of visual areas derived from a recent architectonic study, plus summary schemes that reflect a combination of physiological and anatomical evidence. The human atlas includes a probabilistic map of eight topographically organized visual areas recently mapped using functional MRI. To facilitate comparisons between species, we used surface-based warping to bring functional and geographic landmarks on the macaque map into register with corresponding landmarks on the human map. The results suggest that extrastriate visual cortex outside the known topographically organized areas is dramatically expanded in human compared to macaque cortex, particularly in the parietal lobe.
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is triggered mainly by visual perception, so the main goal was to build a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to published results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. We then analyzed the visual performance of the pilot model; comparison with the human pilots' visual performance revealed important potential for improvement.
Computing Systems | High-Performance Computing | NREL
Researchers investigate, build, and test models of complex phenomena or entire integrated systems that cannot be directly observed or manipulated in the lab, or that would be too expensive or time-consuming to study. Models and visualizations
10 CFR 36.67 - Entering and leaving the radiation room.
Code of Federal Regulations, 2011 CFR
2011-01-01
... radiation room of a panoramic irradiator after an irradiation, the irradiator operator shall use a survey... irradiation, the irradiator operator shall: (1) Visually inspect the entire radiation room to verify that no...
10 CFR 36.67 - Entering and leaving the radiation room.
Code of Federal Regulations, 2010 CFR
2010-01-01
... radiation room of a panoramic irradiator after an irradiation, the irradiator operator shall use a survey... irradiation, the irradiator operator shall: (1) Visually inspect the entire radiation room to verify that no...
10 CFR 36.67 - Entering and leaving the radiation room.
Code of Federal Regulations, 2014 CFR
2014-01-01
... radiation room of a panoramic irradiator after an irradiation, the irradiator operator shall use a survey... irradiation, the irradiator operator shall: (1) Visually inspect the entire radiation room to verify that no...
10 CFR 36.67 - Entering and leaving the radiation room.
Code of Federal Regulations, 2012 CFR
2012-01-01
... radiation room of a panoramic irradiator after an irradiation, the irradiator operator shall use a survey... irradiation, the irradiator operator shall: (1) Visually inspect the entire radiation room to verify that no...
10 CFR 36.67 - Entering and leaving the radiation room.
Code of Federal Regulations, 2013 CFR
2013-01-01
... radiation room of a panoramic irradiator after an irradiation, the irradiator operator shall use a survey... irradiation, the irradiator operator shall: (1) Visually inspect the entire radiation room to verify that no...
Morphology and accommodative function of the vitreous zonule in human and monkey eyes.
Lütjen-Drecoll, Elke; Kaufman, Paul L; Wasielewski, Rainer; Ting-Li, Lin; Croft, Mary Ann
2010-03-01
To explore the attachments of the posterior zonule and vitreous in relation to accommodation and presbyopia in monkeys and humans. Novel scanning electron microscopy (SEM) and ultrasound biomicroscopy (UBM) techniques were used to visualize the anterior, intermediate, and posterior vitreous zonule and their connections to the ciliary body, vitreous membrane, lens capsule, and ora serrata, and to characterize their age-related changes and correlate them with loss of accommodative forward movement of the ciliary body. alpha-Chymotrypsin was used focally to lyse the vitreous zonule and determine the effect on movement of the accommodative apparatus in monkeys. The vitreous attached to the peripheral lens capsule and the ora serrata directly. The pars plana zonule and the posterior tines of the anterior zonule were separated from the vitreous membrane except for strategically placed attachments, collectively termed the vitreous zonule, that may modulate and smooth the forward and backward movements of the entire system. Age-dependent changes in these relationships correlated significantly with loss of accommodative amplitude. Lysis of the intermediate vitreous zonule partially restored accommodative movement. The vitreous zonule system may help to smoothly translate to the lens the driving forces of accommodation and disaccommodation generated by the ciliary muscle, while maintaining visual focus and protecting the lens capsule and ora serrata from acute tractional forces. Stiffening of the vitreous zonular system may contribute to age-related loss of accommodation and offer a therapeutic target for presbyopia.
Visualization study of flow in axial flow inducer.
NASA Technical Reports Server (NTRS)
Lakshminarayana, B.
1972-01-01
A visualization study of the flow through a three-foot-diameter model of a four-bladed inducer, operated in air at a flow coefficient of 0.065, is reported in this paper. The flow near the blade surfaces, inside the rotating passages, and downstream and upstream of the inducer is visualized by means of smoke, tufts, ammonia-filament, and lampblack techniques. The flow is found to be highly three-dimensional, with appreciable radial velocity throughout the entire passage. The secondary flows observed near the hub and annulus walls agree with qualitative predictions obtained from inviscid secondary-flow theory.
Visual skills in airport-security screening.
McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R
2004-05-01
An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.
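Sensitivity in such a detection task is typically expressed as d', the difference of the z-transformed hit and false-alarm rates. A small helper under the standard equal-variance signal-detection model (the example rates are illustrative, not from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical screener: 90% hits, 20% false alarms.
dp = d_prime(0.9, 0.2)
```

Tracking d' across practice sessions separates genuine sensitivity gains from mere shifts in response bias.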
Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M
2015-09-01
The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.
Michalareas, Georgios; Vezoli, Julien; van Pelt, Stan; Schoffelen, Jan-Mathijs; Kennedy, Henry; Fries, Pascal
2016-01-01
Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral and dorsal stream visual areas are differentially affected by inter-areal influences in the alpha-beta band. PMID:26777277
Exploring MEDLINE Space with Random Indexing and Pathfinder Networks
Cohen, Trevor
2008-01-01
The integration of disparate research domains is a prerequisite for the success of the translational science initiative. MEDLINE abstracts contain content from a broad range of disciplines, presenting an opportunity for the development of methods able to integrate the knowledge they contain. Latent Semantic Analysis (LSA) and related methods learn human-like associations between terms from unannotated text. However, their computational and memory demands limit their ability to address a corpus of this size. Furthermore, visualization methods previously used in conjunction with LSA have limited ability to define the local structure of the associative networks LSA learns. This paper explores these issues by (1) processing the entire MEDLINE corpus using Random Indexing, a variant of LSA, and (2) exploring learned associations using Pathfinder Networks. Meaningful associations are inferred from MEDLINE, including a drug-disease association undetected by PubMed search. PMID:18999236
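Random Indexing itself is compact enough to sketch: each document is assigned a sparse ternary index vector, and a term's semantic vector is accumulated as the sum of the index vectors of the documents it occurs in. The toy corpus and dimensionality below are illustrative only:

```python
import random

random.seed(0)
DIM, NONZERO = 300, 10  # reduced dimensionality and sparsity of index vectors

def index_vector():
    """Sparse ternary random index vector: a few +1/-1 entries, rest zero."""
    v = [0.0] * DIM
    for pos in random.sample(range(DIM), NONZERO):
        v[pos] = random.choice((1.0, -1.0))
    return v

def train(docs):
    """Document-based random indexing: each term's context vector is the sum
    of the index vectors of the documents it occurs in."""
    doc_vecs = [index_vector() for _ in docs]
    terms = {}
    for dvec, doc in zip(doc_vecs, docs):
        for term in doc.split():
            tv = terms.setdefault(term, [0.0] * DIM)
            for i, x in enumerate(dvec):
                tv[i] += x
    return terms

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

docs = ["aspirin headache pain", "aspirin pain relief", "gene protein expression"]
terms = train(docs)
```

Terms that co-occur across the same documents end up with similar vectors, without ever building the full term-document matrix that makes LSA expensive at MEDLINE scale.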
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g., if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
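The error-mask step described above, comparing the real and rendered views after local smoothing, can be sketched in a few lines. A 3x3 box filter stands in here for the local Gaussians mentioned in the abstract; images are plain nested lists and the threshold is arbitrary:

```python
def blur3(img):
    """3x3 box smoothing as a cheap stand-in for local Gaussian filtering
    (pure Python; edge pixels are clamped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sum(vals) / 9.0
    return out

def error_mask(real, virtual, thresh=0.2):
    """Mark pixels where the smoothed real and rendered views disagree;
    such pixels are candidates for the next fixation."""
    ra, va = blur3(real), blur3(virtual)
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(row_r, row_v)]
            for row_r, row_v in zip(ra, va)]
```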
Theories of Visual Rhetoric: Looking at the Human Genome.
ERIC Educational Resources Information Center
Rosner, Mary
2001-01-01
Considers how visuals are constructions that are products of a writer's interpretation with its own "power-laden agenda." Reviews the current approach taken by composition scholars, surveys richer interdisciplinary work on visuals, and (by using visuals connected with the Human Genome Project) models an analysis of visuals as rhetoric.…
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms, called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx), that were developed specifically to answer these two questions.
Wolf, Michael A.; Waechter, David A.; Umbarger, C. John
1986-01-01
The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor, and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation.
Reproducibility of visual acuity assessment in normal and low visual acuity.
Becker, Ralph; Teichler, Gunnar; Gräf, Michael
2007-01-01
To assess the reproducibility of measurements of visual acuity in both the upper and lower range of visual acuity. The retroilluminated ETDRS 1 and ETDRS 2 charts (Precision Vision) were used for measurement of visual acuity. Both charts use the same letters. The sequence of the charts followed a pseudorandomized protocol. The examination distance was 4.0 m. When visual acuity was below 0.16 or 0.03, the examination distance was reduced to 1 m or 0.4 m, respectively, using an appropriate near correction. Visual acuity measurements obtained during the same session with both charts were compared. A total of 100 patients (age 8-90 years; median 60.5) with various eye disorders, including 39 with amblyopia due to strabismus, were tested in addition to 13 healthy volunteers (age 18-33 years; median 24). At least 3 out of 5 optotypes per line had to be correctly identified to pass this line. Wrong answers were monitored. The interpolated logMAR score was calculated. In the patients, the eye with the lower visual acuity was assessed, and for the healthy subjects the right eye. Differences between ETDRS 1 and ETDRS 2 acuity were compared. The mean logMAR values for ETDRS 1 and ETDRS 2 were -0.17 and -0.14 in the healthy eyes and 0.55 and 0.57 in the entire group. The absolute difference between ETDRS 1 and ETDRS 2 was (mean +/- standard deviation) 0.051 +/- 0.04 for the healthy eyes and 0.063 +/- 0.05 in the entire group. In the acuity range below 0.1 (logMAR > 1.0), the absolute difference (mean +/- standard deviation) between ETDRS 1 and ETDRS 2 of 0.072 +/- 0.04 did not significantly exceed the mean absolute difference in healthy eyes (p = 0.17). Regression analysis (|ETDRS 1 - ETDRS 2| vs. ETDRS 1) showed a slight increase of the difference between the two values with lower visual acuity (p = 0.0505; r = 0.18). Assuming correct measurement, the reproducibility of visual acuity measurements in the lower acuity range is not significantly worse than in normal eyes.
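Interpolated logMAR scores of the kind used here are commonly computed letter by letter, with each of the five letters per line worth 0.02 log units. A sketch under that common convention (the abstract does not spell out the exact scoring rule the study used, so this is an assumption):

```python
def etdrs_logmar(letters_correct, top_line_logmar=1.0):
    """Interpolated logMAR under the common letter-by-letter convention:
    each correct letter improves the score by 0.02 log units, so reading the
    full top line (5 letters) yields exactly the top line's logMAR value."""
    return (top_line_logmar + 0.1) - 0.02 * letters_correct

# Example: 40 letters read on a chart whose top line is logMAR 1.0.
score = etdrs_logmar(40)
```

Under this convention, reading all 55 letters of an 11-line chart gives logMAR 0.0 (decimal acuity 1.0).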
Cheetham, Marcus; Suter, Pascal; Jäncke, Lutz
2011-01-01
The uncanny valley hypothesis (Mori, 1970) predicts differential experience of negative and positive affect as a function of human likeness. Affective experience of humanlike robots and computer-generated characters (avatars) dominates “uncanny” research, but findings are inconsistent. Importantly, it is unknown how objects are actually perceived along the hypothesis’ dimension of human likeness (DOH), defined in terms of human physical similarity. To examine whether the DOH can also be defined in terms of effects of categorical perception (CP), stimuli from morph continua with controlled differences in physical human likeness between avatar and human faces as endpoints were presented. Two behavioral studies found a sharp category boundary along the DOH and enhanced visual discrimination (i.e., CP) of fine-grained differences between pairs of faces at the category boundary. Discrimination was better for face pairs presenting category change in the human-to-avatar than avatar-to-human direction along the DOH. To investigate brain representation of physical change and category change along the DOH, an event-related functional magnetic resonance imaging study used the same stimuli in a pair-repetition priming paradigm. Bilateral mid-fusiform areas and a different right mid-fusiform area were sensitive to physical change within the human and avatar categories, respectively, whereas entirely different regions were sensitive to the human-to-avatar (caudate head, putamen, thalamus, red nucleus) and avatar-to-human (hippocampus, amygdala, mid-insula) direction of category change. These findings show that Mori’s DOH definition does not reflect subjective perception of human likeness and suggest that future “uncanny” studies consider CP and the DOH’s category structure in guiding experience of non-human objects. PMID:22131970
Wilkinson, Krista M; Light, Janice
2011-12-01
Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.
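The finding that human figures were fixated longer than their size predicts can be expressed as a simple attention ratio: an element's share of gaze time divided by its share of image area. A sketch with invented numbers:

```python
def attention_ratio(fixation_ms, area_px, total_ms, total_px):
    """Share of total gaze time an element received, divided by its share of
    image area; values > 1 mean it drew more attention than its size predicts."""
    return (fixation_ms / total_ms) / (area_px / total_px)

# Hypothetical example: a human figure covering 5% of the image that
# received half of a 7-second viewing period.
human_ratio = attention_ratio(3500, 5000, 7000, 100000)
```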
Information visualization: Beyond traditional engineering
NASA Technical Reports Server (NTRS)
Thomas, James J.
1995-01-01
This presentation addresses a different aspect of the human-computer interface: specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics and CAD, and enables new approaches for engineering. IV must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.
Effect of ethanol on human sleep EEG using correlation dimension analysis.
Kobayashi, Toshio; Madokoro, Shigeki; Wada, Yuji; Misaki, Kiwamu; Nakagawa, Hiroki
2002-01-01
Our study was designed to investigate the influence of alcohol on sleep using the correlation dimension (D2) analysis. Polysomnography (PSG) was performed in 10 adult human males during a baseline night (BL-N) and an ethanol (0.8 g/kg body weight) night (Et-N). The mean D2 values during the Et-N and BL-N decreased significantly from wakefulness to stages 1, 2, and 3+4 of nonrapid eye movement (non-REM) sleep, and increased during REM sleep. The mean D2 of the sleep electroencephalogram (EEG) during stage 2 during the Et-N was significantly higher than during BL-N. In addition, the mean D2 values of the sleep EEG for the second, third and fourth sleep cycles during the Et-N were significantly higher than during the BL-N. These significant differences between BL-N and Et-N were not recognized by spectral and visual analyses. Our results suggest that D2 is a potentially useful parameter for quantitative analysis of the effect of ethanol on sleep EEGs throughout the entire night. Copyright 2002 S. Karger AG, Basel
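The correlation dimension D2 is estimated from the Grassberger-Procaccia correlation sum: the fraction of delay-embedded vector pairs closer than a radius r, with D2 taken as the slope of log C(r) against log r in the scaling region. A minimal pure-Python sketch on a synthetic signal (the embedding parameters and test signal are illustrative, not the study's EEG settings):

```python
import math

def correlation_sum(series, m, tau, r):
    """Grassberger-Procaccia correlation sum C(r): fraction of pairs of
    delay-embedded vectors (dimension m, delay tau) within Chebyshev distance r."""
    n = len(series) - (m - 1) * tau
    vecs = [[series[i + j * tau] for j in range(m)] for i in range(n)]
    close = pairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if max(abs(a - b) for a, b in zip(vecs[i], vecs[j])) < r:
                close += 1
    return close / pairs

# D2 is the slope of log C(r) vs. log r over a scaling region of r values.
series = [math.sin(0.3 * k) for k in range(300)]
c_small = correlation_sum(series, m=3, tau=5, r=0.2)
c_large = correlation_sum(series, m=3, tau=5, r=0.8)
```

In practice C(r) is evaluated at many radii and the slope fitted only where log C(r) is linear in log r; noisy, high-dimensional signals yield larger D2 than simple periodic ones.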
Döllinger, M; Rosanowski, F; Eysholdt, U; Lohscheller, J
2008-12-01
The understanding of normal and pathological vocal fold dynamics is the basis for a pathophysiologically motivated voice therapy. The vocal fold dynamics crucial for voice production occur at the medial part of the vocal fold, which is seen as the most critical region of mucosal wave propagation. Due to the limited size of the larynx, the possibilities of laryngeal imaging by endoscopic techniques are limited. This work describes an experimental set-up that enables quantification of the entire medial and superior vocal fold surface using excised human and in vivo canine larynges. The data obtained enable analysis of vocal fold deflections, velocities, and mucosal wave propagation. The reciprocal dependencies can be examined and different areas of vocal fold dynamics located. The vertical components obscured in clinical endoscopy can be visualized, and they are not negligible: in particular, it is shown that the vertical deflection, which cannot be observed by clinical examination, plays an important part in the dynamics and therefore cannot be omitted from therapeutic considerations. The theoretically assumed entrainment and influence of the two main vibration modes enabling normal phonation is confirmed.
Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot.
Greer, Joseph D; Morimoto, Tania K; Okamura, Allison M; Hawkes, Elliot W
2017-01-01
We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant-curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and the soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm when predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law, which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds.
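The approximately constant-curvature bending mentioned above admits a closed-form forward kinematics for a planar section: the tip position of an arc of given curvature and arc length. A sketch (the 42 cm length matches the robot described; the curvature value and planar simplification are assumptions for illustration, not the paper's full 3D model):

```python
import math

def constant_curvature_tip(kappa, length):
    """Planar constant-curvature arc: tip (x, z) of an arc with curvature
    kappa (1/m) and arc length (m), starting at the origin pointing along +z."""
    if abs(kappa) < 1e-12:
        return (0.0, length)   # straight configuration
    theta = kappa * length     # total bend angle
    r = 1.0 / kappa            # bend radius
    return (r * (1.0 - math.cos(theta)), r * math.sin(theta))

# Example: a 42 cm section bent into a quarter circle.
tip = constant_curvature_tip((math.pi / 2) / 0.42, 0.42)
```

For a quarter-circle bend the tip's x and z coordinates are equal (both the bend radius), a convenient sanity check on the model.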
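The constant-curvature bending assumption above is the standard modeling device for continuum robots. As a minimal illustration (a planar simplification, not the paper's full 3D kinematic model; the function name and parameter choices are mine), the tip pose of a single constant-curvature segment can be computed as:

```python
import math

def constant_curvature_tip(kappa, length):
    """Planar tip pose (x, y, heading) of a constant-curvature segment.

    kappa: curvature in 1/m (0 means a straight segment)
    length: arc length of the segment in m
    """
    if abs(kappa) < 1e-12:                   # straight segment: no bending
        return (length, 0.0, 0.0)
    theta = kappa * length                   # total bend angle in radians
    x = math.sin(theta) / kappa              # forward displacement
    y = (1.0 - math.cos(theta)) / kappa      # lateral displacement
    return (x, y, theta)
```

For example, a 0.42 m segment at zero curvature ends at (0.42, 0, 0); increasing tension on one sPAM increases the curvature and sweeps the tip sideways along an arc.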
SPRINT: ultrafast protein-protein interaction prediction of the entire human interactome.
Li, Yiwei; Ilie, Lucian
2017-11-15
Proteins usually perform their functions by interacting with other proteins, so predicting which proteins interact is a fundamental problem. Experimental methods are slow, expensive, and have a high error rate. Many computational methods have been proposed, among which sequence-based ones are very promising. However, so far no such method can effectively predict the entire human interactome: they require too much time or memory. We present SPRINT (Scoring PRotein INTeractions), a new sequence-based algorithm and tool for predicting protein-protein interactions. We comprehensively compare SPRINT with state-of-the-art programs on the seven most reliable human PPI datasets and show that it is more accurate while running orders of magnitude faster and using very little memory. SPRINT is the only sequence-based program that can effectively predict the entire human interactome: it requires between 15 and 100 min, depending on the dataset. Our goal is to transform the very challenging problem of predicting the entire human interactome into a routine task. The source code of SPRINT is freely available from https://github.com/lucian-ilie/SPRINT/ and the datasets and predicted PPIs from www.csd.uwo.ca/faculty/ilie/SPRINT/.
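SPRINT's actual scoring algorithm is not detailed in this abstract. As a loose illustration of the sequence-based idea only (shared subsequences between proteins hinting at interaction propensity; the function names and the k value are my own, not SPRINT's), a protein pair can be scored by counting shared k-mers:

```python
def kmer_set(seq, k=3):
    """All length-k substrings (k-mers) of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_kmer_score(seq_a, seq_b, k=3):
    """Crude pair score: number of distinct k-mers shared by both sequences."""
    return len(kmer_set(seq_a, k) & kmer_set(seq_b, k))
```

For example, `shared_kmer_score("MKVLA", "KVLAG")` is 2 (the shared 3-mers KVL and VLA). Real sequence-based predictors add substantial machinery on top of this idea, such as similarity-weighted matches and training against known interacting pairs.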
Preparation for the Implantation of an Intracortical Visual Prosthesis in a Human
2014-10-01
PRINCIPAL INVESTIGATOR: Philip R Troyk, PhD. GRANT NUMBER: W81XWH-12-1-0394. The funded work prepares an intracortical visual prosthesis (ICVP) for testing in a human; no human trial testing of the prosthesis will occur under the funded work. Preparatory tasks include…
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Ahmed, N; Zheng, Ziyi; Mueller, K
2012-12-01
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common-sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be enticed to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example of how the evaluation of visualization algorithms can be mapped into a fun and addictive activity, allowing this task to be accomplished in an extensive yet cost-effective way. Finally, we sketch out a framework that moves beyond the pure evaluation of existing visualization methods to the design of new ones.
Wolf, M.A.; Waechter, D.A.; Umbarger, C.J.
1982-04-16
The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation.
Wolf, M.A.; Waechter, D.A.; Umbarger, C.J.
1986-08-26
The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation. 10 figs.
The two-visual-systems hypothesis and the perspectival features of visual experience.
Foley, Robert T; Whitwell, Robert L; Goodale, Melvyn A
2015-09-01
Some critics of the two-visual-systems hypothesis (TVSH) argue that it is incompatible with the fundamentally egocentric nature of visual experience (what we call the 'perspectival account'). The TVSH proposes that the ventral stream, which delivers up our visual experience of the world, works in an allocentric frame of reference, whereas the dorsal stream, which mediates the visual control of action, uses egocentric frames of reference. Given that the TVSH is also committed to the claim that dorsal-stream processing does not contribute to the contents of visual experience, it has been argued that the TVSH cannot account for the egocentric features of our visual experience. This argument, however, rests on a misunderstanding about how the operations mediating action and the operations mediating perception are specified in the TVSH. In this article, we emphasize the importance of the 'outputs' of the two-systems to the specification of their respective operations. We argue that once this point is appreciated, it becomes evident that the TVSH is entirely compatible with a perspectival account of visual experience. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)
1998-01-01
When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.
Preparation for the Implantation of an Intracortical Visual Prosthesis in a Human
2013-10-01
PRINCIPAL INVESTIGATOR: Philip R Troyk, PhD. GRANT NUMBER: W81XWH-12-1-0394. The aim is to prepare an intracortical visual prosthesis (ICVP) for testing in a human; no human trial testing of the prosthesis will occur under the funded…
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
The relation between carbon monoxide emission and visual extinction in cloud L134
NASA Technical Reports Server (NTRS)
Tucker, K. D.; Dickman, R. L.; Encrenaz, P. J.; Kutner, M. L.
1976-01-01
Emission from the J = 1-0 transition of carbon monoxide has been mapped over an area of 40 by 55 arcmin in cloud L134, and visual extinctions over the entire cloud have been obtained by means of star counts. Line intensities of at least 2 K are observable down to an extinction level of about one magnitude. From observations of the J = 1-0 transition of the (C-13)O isotopic species at 18 locations in the cloud, a linear correlation is found between the local thermodynamic equilibrium (LTE) column densities of (C-13)O and magnitudes of visual extinction.
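The linear correlation reported between (C-13)O column density and visual extinction is the kind of relation recovered with an ordinary least-squares fit over the sampled positions. A generic sketch (the helper is a standard OLS formula; the data values in the test are illustrative, not from the paper):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope a and intercept b for y ≈ a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx                    # slope: column density per magnitude
    return a, mean_y - a * mean_x    # intercept from the means
```

In the paper's setting, `xs` would hold visual extinctions (magnitudes, from star counts) and `ys` the LTE column densities of (C-13)O at the 18 observed positions.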
Visual Culture, Art History and the Humanities
ERIC Educational Resources Information Center
Castaneda, Ivan
2009-01-01
This essay will discuss the need for the humanities to address visual culture studies as part of its interdisciplinary mission in today's university. Although mostly unnoticed in recent debates in the humanities over historical and theoretical frameworks, the relatively new field of visual culture has emerged as a corrective to a growing…
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Common Visual Preference for Curved Contours in Humans and Great Apes.
Munar, Enric; Gómez-Puerto, Gerardo; Call, Josep; Nadal, Marcos
2015-01-01
Among the visual preferences that guide many everyday activities and decisions, from consumer choices to social judgment, preference for curved over sharp-angled contours is commonly thought to have played an adaptive role throughout human evolution, favoring the avoidance of potentially harmful objects. However, because nonhuman primates also exhibit preferences for certain visual qualities, it is conceivable that humans' preference for curved contours is grounded on perceptual and cognitive mechanisms shared with extant nonhuman primate species. Here we aimed to determine whether nonhuman great apes and humans share a visual preference for curved over sharp-angled contours using a 2-alternative forced choice experimental paradigm under comparable conditions. Our results revealed that the human group and the great ape group indeed share a common preference for curved over sharp-angled contours, but that they differ in the manner and magnitude with which this preference is expressed behaviorally. These results suggest that humans' visual preference for curved objects evolved from earlier primate species' visual preferences, and that during this process it became stronger, but also more susceptible to the influence of higher cognitive processes and preference for other visual features.
McDonald, J Scott; Seymour, Kiley J; Schira, Mark M; Spehar, Branka; Clifford, Colin W G
2009-05-01
The responses of orientation-selective neurons in primate visual cortex can be profoundly affected by the presence and orientation of stimuli falling outside the classical receptive field. Our perception of the orientation of a line or grating also depends upon the context in which it is presented. For example, the perceived orientation of a grating embedded in a surround tends to be repelled from the predominant orientation of the surround. Here, we used fMRI to investigate the basis of orientation-specific surround effects in five functionally-defined regions of visual cortex: V1, V2, V3, V3A/LO1 and hV4. Test stimuli were luminance-modulated and isoluminant gratings that produced responses similar in magnitude. Less BOLD activation was evident in response to gratings with parallel versus orthogonal surrounds across all the regions of visual cortex investigated. When an isoluminant test grating was surrounded by a luminance-modulated inducer, the degree of orientation-specific contextual modulation was no larger for extrastriate areas than for V1, suggesting that the observed effects might originate entirely in V1. However, more orientation-specific modulation was evident in extrastriate cortex when both test and inducer were luminance-modulated gratings than when the test was isoluminant; this difference was significant in area V3. We suggest that the pattern of results in extrastriate cortex may reflect a refinement of the orientation-selectivity of surround suppression specific to the colour of the surround or, alternatively, processes underlying the segmentation of test and inducer by spatial phase or orientation when no colour cue is available.
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. 
We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
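The population-level claim above can be illustrated with a toy model (entirely my own construction, not the authors' analysis): a bank of 1D Gaussian receptive fields, where shifting RF centers toward an attended location increases the change in the population response produced by a small stimulus displacement there, i.e. raises fine spatial discriminability near the target:

```python
import math

def gaussian_rf(x, center, size=1.0):
    """Response of a 1D Gaussian receptive field to a stimulus at x."""
    return math.exp(-(x - center) ** 2 / (2 * size ** 2))

def discriminability(x, dx, centers, size=1.0):
    """Euclidean distance between population responses to x and x + dx."""
    r1 = [gaussian_rf(x, c, size) for c in centers]
    r2 = [gaussian_rf(x + dx, c, size) for c in centers]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

centers = [-2.0, -1.0, 0.0, 1.0, 2.0]     # evenly tiled RF centers
shifted = [c * 0.5 for c in centers]      # centers pulled toward a target at 0
# Position shifts alone raise discriminability near the attended point:
assert discriminability(0.0, 0.1, shifted) > discriminability(0.0, 0.1, centers)
```

Note that the gains and sizes are held fixed here; only the centers move, which is the point of the paper's manipulation analysis.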
DOT National Transportation Integrated Search
2002-08-01
The sharpest distant focus is only within a one-degree cone. Outside of a 10° cone, visual acuity drops 90%. Scan the entire horizon, not just the sky in front of your aircraft. You are 5 times more likely to have a midair collision with an ai...
The Functional Architecture of the Retina.
ERIC Educational Resources Information Center
Masland, Richard H.
1986-01-01
Examines research related to the retina's coding of visual input with emphasis on the organization of two kinds of ganglion cell receptive fields. Reviews current techniques for examining the shapes and arrangement in the retina of entire populations of nerve cells. (ML)
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
NASA Astrophysics Data System (ADS)
Hardin, D.; Graves, S.; Sever, T.; Irwin, D.
2005-05-01
In 2002 and 2003 NASA, the World Bank and the United States Agency for International Development (USAID) joined with the Central American Commission for Environment and Development (CCAD) to develop an advanced decision support system for Mesoamerica (named SERVIR). Mesoamerica - composed of the seven Central American countries and the five southernmost states of Mexico - makes up only a small fraction of the world's land surface. However, the region is home to approximately eight percent of the planet's biodiversity (14 biosphere reserves, 31 Ramsar sites, 8 world heritage sites, 589 protected areas) and 45 million people including more than 50 different ethnic groups. Mesoamerica's biological and cultural diversity are severely threatened by human impact and natural disasters including extensive deforestation, illegal logging, water pollution, slash and burn agriculture, earthquakes, hurricanes, drought, and volcanic eruption. NASA Marshall Space Flight Center (NASA/MSFC), together with the University of Alabama in Huntsville (UAH) and the SERVIR partners are developing state-of-the-art decision support tools for environmental monitoring as well as disaster prevention and mitigation in Mesoamerica. These partners are contributing expertise in space-based observation with information management technologies and intimate knowledge of local ecosystems to create a system that is being used by scientists, educators, and policy makers to monitor and forecast ecological changes, respond to natural disasters, and better understand both natural and human induced effects. The decision support and environmental monitoring data products are typically formatted as conventional two-dimensional, static and animated imagery. 
However, in addition to conventional data products and as a major portion of our research, we are employing commercial applications that generate three-dimensional interactive visualizations that allow data products to be viewed from multiple angles and at different scales. One of these is a 15 meter resolution mosaic of the entire Mesoamerican region. This paper gives an overview of the SERVIR project and its associated visualization methods.
Different Signal Enhancement Pathways of Attention and Consciousness Underlie Perception in Humans.
van Boxtel, Jeroen J A
2017-06-14
It is not yet known whether attention and consciousness operate through similar or largely different mechanisms. Visual processing mechanisms are routinely characterized by measuring contrast response functions (CRFs). In this report, behavioral CRFs were obtained in humans (both males and females) by measuring afterimage durations over the entire range of inducer stimulus contrasts to reveal visual mechanisms behind attention and consciousness. Deviations relative to the standard CRF, i.e., gain functions, describe the strength of signal enhancement, which were assessed for both changes due to attentional task and conscious perception. It was found that attention displayed a response-gain function, whereas consciousness displayed a contrast-gain function. Through model comparisons, which only included contrast-gain modulations, both contrast-gain and response-gain effects can be explained with a two-level normalization model, in which consciousness affects only the first level and attention affects only the second level. These results demonstrate that attention and consciousness can effectively show different gain functions because they operate through different signal enhancement mechanisms. SIGNIFICANCE STATEMENT The relationship between attention and consciousness is still debated. Mapping contrast response functions (CRFs) has allowed (neuro)scientists to gain important insights into the mechanistic underpinnings of visual processing. Here, the influence of both attention and consciousness on these functions were measured and they displayed a strong dissociation. First, attention lowered CRFs, whereas consciousness raised them. Second, attention manifests itself as a response-gain function, whereas consciousness manifests itself as a contrast-gain function. 
Extensive model comparisons show that these results are best explained by a two-level normalization model in which consciousness affects only the first level, whereas attention affects only the second level. These findings show dissociations both in the computational mechanisms behind attention and consciousness and in the perceptual consequences that they induce. Copyright © 2017 the authors.
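The contrast-gain versus response-gain distinction can be made concrete with the standard Naka-Rushton form of the CRF (a textbook parameterization; the parameter values here are illustrative defaults, not fitted values from the study):

```python
def crf(c, r_max=1.0, c50=0.3, n=2.0):
    """Naka-Rushton contrast response function: R(c) = Rmax * c^n / (c^n + c50^n)."""
    return r_max * c ** n / (c ** n + c50 ** n)

def contrast_gain(c, g, **kw):
    """Contrast gain: scales effective contrast, shifting the curve laterally."""
    return crf(c * g, **kw)

def response_gain(c, g, **kw):
    """Response gain: scales the output, changing the curve's asymptote."""
    return g * crf(c, **kw)
```

At high contrast the CRF saturates, so contrast gain barely changes the response there while response gain rescales it; at low contrast the pattern reverses. The study's conclusion maps consciousness onto the former profile and attention onto the latter.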
Extraction and analysis of signatures from the Gene Expression Omnibus by the crowd
Wang, Zichen; Monteiro, Caroline D.; Jagodnik, Kathleen M.; Fernandez, Nicolas F.; Gundersen, Gregory W.; Rouillard, Andrew D.; Jenkins, Sherry L.; Feldmann, Axel S.; Hu, Kevin S.; McDermott, Michael G.; Duan, Qiaonan; Clark, Neil R.; Jones, Matthew R.; Kou, Yan; Goff, Troy; Woodland, Holly; Amaral, Fabio M R.; Szeto, Gregory L.; Fuchs, Oliver; Schüssler-Fiorenza Rose, Sophia M.; Sharma, Shvetank; Schwartz, Uwe; Bausela, Xabier Bengoetxea; Szymkiewicz, Maciej; Maroulis, Vasileios; Salykin, Anton; Barra, Carolina M.; Kruth, Candice D.; Bongio, Nicholas J.; Mathur, Vaibhav; Todoric, Radmila D; Rubin, Udi E.; Malatras, Apostolos; Fulp, Carl T.; Galindo, John A.; Motiejunaite, Ruta; Jüschke, Christoph; Dishuck, Philip C.; Lahl, Katharina; Jafari, Mohieddin; Aibar, Sara; Zaravinos, Apostolos; Steenhuizen, Linda H.; Allison, Lindsey R.; Gamallo, Pablo; de Andres Segura, Fernando; Dae Devlin, Tyler; Pérez-García, Vicente; Ma'ayan, Avi
2016-01-01
Gene expression data are accumulating exponentially in public repositories. Reanalysis and integration of themed collections from these studies may provide new insights, but requires further human curation. Here we report a crowdsourcing project to annotate and reanalyse a large number of gene expression profiles from Gene Expression Omnibus (GEO). Through a massive open online course on Coursera, over 70 participants from over 25 countries identify and annotate 2,460 single-gene perturbation signatures, 839 disease versus normal signatures, and 906 drug perturbation signatures. All these signatures are unique and are manually validated for quality. Global analysis of these signatures confirms known associations and identifies novel associations between genes, diseases and drugs. The manually curated signatures are used as a training set to develop classifiers for extracting similar signatures from the entire GEO repository. We develop a web portal to serve these signatures for query, download and visualization. PMID:27667448
A melanosomal two-pore sodium channel regulates pigmentation
Bellono, Nicholas W.; Escobar, Iliana E.; Oancea, Elena
2016-01-01
Intracellular organelles mediate complex cellular functions that often require ion transport across their membranes. Melanosomes are organelles responsible for the synthesis of the major mammalian pigment melanin. Defects in melanin synthesis result in pigmentation defects, visual deficits, and increased susceptibility to skin and eye cancers. Although genes encoding putative melanosomal ion transporters have been identified as key regulators of melanin synthesis, melanosome ion transport and its contribution to pigmentation remain poorly understood. Here we identify two-pore channel 2 (TPC2) as the first reported melanosomal cation conductance by directly patch-clamping skin and eye melanosomes. TPC2 has been implicated in human pigmentation and melanoma, but the molecular mechanism mediating this function was entirely unknown. We demonstrate that the vesicular signaling lipid phosphatidylinositol bisphosphate PI(3,5)P2 modulates TPC2 activity to control melanosomal membrane potential, pH, and regulate pigmentation. PMID:27231233
Neuroscience thinks big (and collaboratively).
Kandel, Eric R; Markram, Henry; Matthews, Paul M; Yuste, Rafael; Koch, Christof
2013-09-01
Despite cash-strapped times for research, several ambitious collaborative neuroscience projects have attracted large amounts of funding and media attention. In Europe, the Human Brain Project aims to develop a large-scale computer simulation of the brain, whereas in the United States, the Brain Activity Map is working towards establishing a functional connectome of the entire brain, and the Allen Institute for Brain Science has embarked upon a 10-year project to understand the mouse visual cortex (the MindScope project). US President Barack Obama's announcement of the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies Initiative) in April 2013 highlights the political commitment to neuroscience and is expected to further foster interdisciplinary collaborations, accelerate the development of new technologies and thus fuel much needed medical advances. In this Viewpoint article, five prominent neuroscientists explain the aims of the projects and how they are addressing some of the questions (and criticisms) that have arisen.
Motivation and appraisal in perception of poorly specified speech.
Lidestam, Björn; Beskow, Jonas
2006-04-01
Normal-hearing students (n = 72) performed sentence, consonant, and word identification in either A (auditory), V (visual), or AV (audiovisual) modality. The auditory signal had unfavorable speech-to-noise ratios. Talker (human vs. synthetic), topic (no cue vs. cue-words), and emotion (no cue vs. facially displayed vs. cue-words) were varied within groups. After the first block, effects of modality, face, topic, and emotion on initial appraisal and motivation were assessed. After the entire session, effects of modality on longer-term appraisal and motivation were assessed. The results from both assessments showed that V identification was appraised more positively than A identification. Correlations were tentatively interpreted to suggest that evaluation of self-rated performance may depend on a subjective standard and be reflected in motivation (if below the subjective standard; AV group) or in appraisal (if above the subjective standard; A group). Suggestions for further research are presented.
Carbon Nanotube Anodes Being Evaluated for Lithium Ion Batteries
NASA Technical Reports Server (NTRS)
Raffaelle, Ryne P.; Gennett, Tom; VanderWal, Randy L.; Hepp, Aloysius F.
2001-01-01
The NASA Glenn Research Center is evaluating the use of carbon nanotubes as anode materials for thin-film lithium-ion (Li) batteries. The motivation for this work lies in the fact that, in contrast to carbon black, directed, structured nanotubes and nanofibers offer a superior intercalation medium for Li-ion batteries. Carbon lamellas in carbon blacks are circumferentially oriented and block much of the particle interior, rendering much of the matrix useless as intercalation material. Nanofibers, on the other hand, can be grown so as to provide 100-percent accessibility of the entire carbon structure to intercalation. These tubes can be visualized as "rolled-up" sheets of carbon hexagons (see the following figure); one tube is approximately 1/10,000th the diameter of a human hair. In addition, the high accessibility of the structure confers high mobility on ion-exchange processes, which is fundamental for the batteries to respond dynamically during intercalation.
Lindor, Ebony; Rinehart, Nicole; Fielding, Joanne
2018-05-22
Individuals with Autism Spectrum Disorder (ASD) often excel on visual search and crowding tasks; however, inconsistent findings suggest that this 'islet of ability' may not be characteristic of the entire spectrum. We examined whether performance on these tasks changed as a function of motor proficiency in children with varying levels of ASD symptomology. Children with high ASD symptomology outperformed all others on complex visual search tasks, but only if their motor skills were rated at, or above, age expectations. For the visual crowding task, children with high ASD symptomology and superior motor skills exhibited enhanced target discrimination, whereas those with high ASD symptomology but poor motor skills experienced deficits. These findings may resolve some of the discrepancies in the literature.
Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina
2017-05-01
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations, analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
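The decoding logic described above (train on stimulus-driven patterns, then classify attention-trial patterns) can be sketched with synthetic data. This is a minimal numpy-only sketch, not the authors' pipeline: the "templates", noise level, and nearest-correlation classifier are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_freqs, n_trials = 200, 4, 30

# Hypothetical stimulus-driven "templates": each frequency evokes a
# distinct multivoxel pattern (its tonotopic fingerprint).
templates = rng.normal(size=(n_freqs, n_voxels))

def simulate_trials(noise=2.0):
    """Simulate noisy single-trial BOLD patterns for each frequency."""
    X, y = [], []
    for f in range(n_freqs):
        for _ in range(n_trials):
            X.append(templates[f] + rng.normal(scale=noise, size=n_voxels))
            y.append(f)
    return np.array(X), np.array(y)

# Train on single-tone ("stimulus-driven") trials: one mean pattern per frequency.
X_train, y_train = simulate_trials()
centroids = np.array([X_train[y_train == f].mean(axis=0) for f in range(n_freqs)])

# Decode the attended frequency from new trial patterns by correlating
# each trial with every training centroid and taking the best match.
X_test, y_test = simulate_trials()
corr = np.corrcoef(np.vstack([X_test, centroids]))[:len(X_test), len(X_test):]
pred = corr.argmax(axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/n_freqs:.2f})")
```

Above-chance accuracy here only demonstrates the scheme; the real analysis operates on measured BOLD patterns rather than simulated ones.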
Capturing specific abilities as a window into human individuality: The example of face recognition
Wilmer, Jeremy B.; Germine, Laura; Chabris, Christopher F.; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken
2013-01-01
Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality. PMID:23428079
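The item response theory analyses mentioned above typically rest on a logistic item model. As a hedged illustration (the model family is standard IRT, but the parameter values below are made up, not taken from the CFMT data), here is the two-parameter logistic (2PL) model relating ability to the probability of a correct response:

```python
import numpy as np

# Illustrative 2PL item response model: probability of a correct answer
# given ability theta, item discrimination a, and item difficulty b.
# All parameter values here are invented for demonstration.
def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

abilities = np.array([-2.0, 0.0, 2.0])     # low, average, high ability
probs = p_correct(abilities, a=1.5, b=0.0)
print(probs)  # low-ability examinees rarely pass; high-ability usually do
```

Fitting `a` and `b` per item to large web-tested samples is what makes the item-level norming described in the abstract possible.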
NASA Technical Reports Server (NTRS)
Krauzlis, R. J.; Stone, L. S.
1999-01-01
The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
NASA Astrophysics Data System (ADS)
Saldamli, Belma; Herzen, Julia; Beckmann, Felix; Tübel, Jutta; Schauwecker, Johannes; Burgkart, Rainer; Jürgens, Philipp; Zeilhofer, Hans-Florian; Sader, Robert; Müller, Bert
2008-08-01
Recently the importance of the third dimension in cell biology has been better understood, resulting in a re-orientation towards three-dimensional (3D) cultivation. Yet adequate tools for their morphological characterization have to be established. Synchrotron radiation-based micro computed tomography (SRμCT) allows visualizing such biological systems with almost isotropic micrometer resolution, non-destructively. We have applied SRμCT for studying the internal morphology of human osteoblast-derived, scaffold-free 3D cultures, termed histoids. Primary human osteoblasts, isolated from femoral neck spongy bone, were grown as 2D culture in non-mineralizing osteogenic medium until a rather thick, multi-cellular membrane was formed. This delicate system was intentionally released to randomly fold itself. The folded cell cultures were grown to histoids of cubic milli- or centimeter size in various combinations of mineralizing and non-mineralizing osteogenic medium for a total period of minimum 56 weeks. The SRμCT-measurements were performed in the absorption contrast mode at the beamlines BW 2 and W 2 (HASYLAB at DESY, Hamburg, Germany), operated by the GKSS-Research Center. To investigate the entire volume of interest several scans were performed under identical conditions and registered to obtain one single dataset of each sample. The histoids grown under different conditions exhibit similar external morphology of globular or ovoid shape. The SRμCT-examination revealed the distinctly different morphological structures inside the histoids. One obtains details of the histoids that permit to identify and select the most promising slices for subsequent histological characterization.
Hypomorphic mutations in TRNT1 cause retinitis pigmentosa with erythrocytic microcytosis
DeLuca, Adam P.; Whitmore, S. Scott; Barnes, Jenna; Sharma, Tasneem P.; Westfall, Trudi A.; Scott, C. Anthony; Weed, Matthew C.; Wiley, Jill S.; Wiley, Luke A.; Johnston, Rebecca M.; Schnieders, Michael J.; Lentz, Steven R.; Tucker, Budd A.; Mullins, Robert F.; Scheetz, Todd E.; Stone, Edwin M.; Slusarski, Diane C.
2016-01-01
Retinitis pigmentosa (RP) is a highly heterogeneous group of disorders characterized by degeneration of the retinal photoreceptor cells and progressive loss of vision. While hundreds of mutations in more than 100 genes have been reported to cause RP, discovering the causative mutations in many patients remains a significant challenge. Exome sequencing in an individual affected with non-syndromic RP revealed two plausibly disease-causing variants in TRNT1, a gene encoding a nucleotidyltransferase critical for tRNA processing. A total of 727 additional unrelated individuals with molecularly uncharacterized RP were completely screened for TRNT1 coding sequence variants, and a second family was identified with two members who exhibited a phenotype that was remarkably similar to the index patient. Inactivating mutations in TRNT1 have been previously shown to cause a severe congenital syndrome of sideroblastic anemia, B-cell immunodeficiency, recurrent fevers and developmental delay (SIFD). Complete blood counts of all three of our patients revealed red blood cell microcytosis and anisocytosis with only mild anemia. Characterization of TRNT1 in patient-derived cell lines revealed reduced but detectable TRNT1 protein, consistent with partial function. Suppression of trnt1 expression in zebrafish recapitulated several features of the human SIFD syndrome, including anemia and sensory organ defects. When levels of trnt1 were titrated, visual dysfunction was found in the absence of other phenotypes. The visual defects in the trnt1-knockdown zebrafish were ameliorated by the addition of exogenous human TRNT1 RNA. Our findings indicate that hypomorphic TRNT1 mutations can cause a recessive disease that is almost entirely limited to the retina. PMID:26494905
ERIC Educational Resources Information Center
Benoit, Gerald
2002-01-01
Discusses data mining (DM) and knowledge discovery in databases (KDD), taking the view that KDD is the larger view of the entire process, with DM emphasizing the cleaning, warehousing, mining, and visualization of knowledge discovery in databases. Highlights include algorithms; users; the Internet; text mining; and information extraction.…
Saccade preparation signals in the human frontal and parietal cortices
Curtis, Clayton E.; Connolly, Jason D.
2009-01-01
Our ability to prepare an action in advance allows us to respond to our environment quickly, accurately, and flexibly. Here, we used event-related fMRI to measure human brain activity while subjects maintained an active state of preparedness. At the beginning of each trial, subjects were instructed to prepare a pro- or anti-saccade to a visual cue that was continually present during a long and variable preparation interval, but to defer the saccade’s execution until a go signal. The deferred saccade task eliminated the mnemonic component inherent in memory-guided saccade tasks and placed the emphasis entirely on advance motor preparation. During the delay while subjects were in an active state of motor preparedness, BOLD signal in the frontal cortex showed: 1) a sustained elevation throughout the preparation interval; 2) a linear increase with increasing delay length; 3) a bias for contra- rather than ipsiversive movements; 4) greater activity when the specific metrics of the planned saccade were known compared to when they were not; 5) increased activity when the saccade was directed towards an internal versus an external representation (i.e., anti-cue location). These findings support the hypothesis that both the human frontal and parietal cortices are involved in the spatial selection and preparation of saccades. PMID:18032565
Gallivan, Jason P.; Johnsrude, Ingrid S.; Randall Flanagan, J.
2016-01-01
Object-manipulation tasks (e.g., drinking from a cup) typically involve sequencing together a series of distinct motor acts (e.g., reaching toward, grasping, lifting, and transporting the cup) in order to accomplish some overarching goal (e.g., quenching thirst). Although several studies in humans have investigated the neural mechanisms supporting the planning of visually guided movements directed toward objects (such as reaching or pointing), only a handful have examined how manipulatory sequences of actions—those that occur after an object has been grasped—are planned and represented in the brain. Here, using event-related functional MRI and pattern decoding methods, we investigated the neural basis of real-object manipulation using a delayed-movement task in which participants first prepared and then executed different object-directed action sequences that varied either in their complexity or final spatial goals. Consistent with previous reports of preparatory brain activity in non-human primates, we found that activity patterns in several frontoparietal areas reliably predicted entire action sequences in advance of movement. Notably, we found that similar sequence-related information could also be decoded from pre-movement signals in object- and body-selective occipitotemporal cortex (OTC). These findings suggest that both frontoparietal and occipitotemporal circuits are engaged in transforming object-related information into complex, goal-directed movements. PMID:25576538
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition is a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable.
Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The structure of the human eye and the absorption spectra of its pigments limit our visual perception of light: we are most responsive to stimuli in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near-infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and that their response displays a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct isomerization of the visual chromophore by two-photon absorption. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings show that human visual perception of near-infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
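The quadratic power dependence is the standard diagnostic for a two-photon process: on a log-log plot of response versus laser power, a two-photon signal has slope ~2, while one-photon absorption gives slope ~1. A minimal sketch with simulated data (the power values and gain constant are invented):

```python
import numpy as np

# Simulated two-photon response: signal proportional to power squared.
power = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # laser power, arbitrary units
k = 0.05                                        # assumed gain constant
response = k * power**2                         # two-photon scaling

# Slope of the log-log fit distinguishes the process:
# ~1 => linear (one-photon), ~2 => quadratic (two-photon).
slope, intercept = np.polyfit(np.log(power), np.log(response), 1)
print(f"log-log slope: {slope:.2f}")
```

In the experiment, a measured slope near 2 is what indicates the nonlinear, two-photon mechanism.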
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
Visual Graphics for Human Rights, Social Justice, Democracy and the Public Good
ERIC Educational Resources Information Center
Nanackchand, Vedant; Berman, Kim
2012-01-01
The value of human rights in a democratic South Africa is constantly threatened and often waived for nefarious reasons. We contend that the use of visual graphics among incoming university visual art students provides a mode of engagement that helps to inculcate awareness of human rights, social responsibility, and the public good in South African…
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors rarely take dog-human differences in visual perception into consideration when designing their experiments. With an image-manipulation program we altered stationary images according to present knowledge of dog vision. Besides the effect of dogs' dichromatic vision, the software shows the effects of their lower visual acuity and brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after being altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with slower response times than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.
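Two of the transformations described (dichromatic colour and reduced acuity) can be sketched on a raw RGB array. This is a crude numpy-only illustration, not the authors' software: merging red and green approximates the missing red-green axis, and a mean filter stands in for lower acuity; the kernel size is an arbitrary assumption.

```python
import numpy as np

def to_dichromat(img):
    """Collapse long/medium wavelengths: dogs lack the red-green axis."""
    out = img.astype(float).copy()
    rg = (out[..., 0] + out[..., 1]) / 2.0   # merge R and G channels
    out[..., 0] = rg
    out[..., 1] = rg
    return out

def reduce_acuity(img, k=5):
    """Crude acuity loss: k x k mean filter per channel (k is assumed)."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

img = np.zeros((32, 32, 3))
img[8:24, 8:24, 0] = 255.0                   # a pure-red square on black
dog_view = reduce_acuity(to_dichromat(img))
print(dog_view[16, 16])                      # red and green now equal
```

A real simulation would also compress brightness discrimination and use a psychophysically calibrated blur, but the pipeline shape is the same: colour remap first, then spatial filtering.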
Al-Badawi, Amer Hamad; Abdelhakim, Mohamad Amr Salah Eddin; Macky, Tamer Ahmed; Mortada, Hassan Aly
2018-04-30
To study anatomical and visual outcomes of pars plana vitrectomy (PPV) with non-fovea-sparing (entire) internal limiting membrane (ILM) peeling in eyes with myopic foveoschisis (MF). Prospective interventional case series of eyes undergoing PPV with entire ILM peeling for symptomatic MF. Preoperative spectral domain optical coherence tomography (SD - OCT) epiretinal membrane, anomalous posterior vitreous detachment, vitreoschisis and postoperative changes in SD-OCT central foveal thickness (CFT), ellipsoid zone defect, foveal detachment (FD), macular hole (MH) diameter (if present) and best-corrected visual acuity (BCVA) in logarithm of the minimum angle of resolution (logMAR). This study included 21 eyes (21 patients) with mean age 60.4±13.1, 15 females (71.4%). All patients achieved complete postoperative reattachment by SD-OCT (no FD) 6 months post vitrectomy, with no iatrogenic intraoperative or postoperative MH, and with significant improvement in final BCVA from 1.6±0.30 to1.0±0.2 logMAR, and in CFT from 918.2±311.4 to182.3±33.1 µm. Patients were subdivided into subgroup A: 11 eyes without MH; and subgroup B: 10 eyes with MH, the latter had significant improvement in MH diameter (p=0.005). Preoperative BCVA was a significant risk factor for visual gain, while preoperative FD and CFT were significant for CFT change. Vitrectomy with non-fovea-sparing (entire) ILM peeling resulted in a significant functional and anatomical improvement in eyes with MF with/without MH with no reported complications. Results are comparable to fovea-sparing ILM peeling. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Hoffmann, M B; Kaule, F; Grzeschik, R; Behrens-Baumann, W; Wolynski, B
2011-07-01
Since its initial introduction in the mid-1990s, retinotopic mapping of the human visual cortex, based on functional magnetic resonance imaging (fMRI), has contributed greatly to our understanding of the human visual system. Multiple cortical visual field representations have been demonstrated and thus numerous visual areas identified. The organisation of specific areas has been detailed and the impact of pathophysiologies of the visual system on the cortical organisation uncovered. These results are based on investigations at a magnetic field strength of 3 Tesla or less. In a field-strength comparison between 3 and 7 Tesla, it was demonstrated that retinotopic mapping benefits from a magnetic field strength of 7 Tesla. Specifically, the visual areas can be mapped with high spatial resolution for a detailed analysis of the visual field maps. Applications of fMRI-based retinotopic mapping in ophthalmological research hold promise to further our understanding of plasticity in the human visual cortex. This is highlighted by pioneering studies in patients with macular dysfunction or misrouted optic nerves. © Georg Thieme Verlag KG Stuttgart · New York.
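The workhorse analysis behind such maps is phase-encoded retinotopy: a periodic stimulus (e.g., a rotating wedge) drives each voxel at the stimulus cycle frequency, and the phase of the voxel's response at that frequency estimates its preferred visual-field position. A minimal sketch with a simulated voxel time series (scan length, cycle count, and noise level are all invented):

```python
import numpy as np

n_timepoints, n_cycles = 240, 8            # assumed scan length and cycles
t = np.arange(n_timepoints)
true_phase = 1.3                           # voxel's "preferred" angle, radians

# Simulated voxel: cosine response at the stimulus frequency plus noise.
rng = np.random.default_rng(1)
ts = (np.cos(2 * np.pi * n_cycles * t / n_timepoints + true_phase)
      + 0.5 * rng.normal(size=n_timepoints))

# The phase at the stimulus-frequency bin recovers the preferred angle.
spectrum = np.fft.rfft(ts)
est_phase = np.angle(spectrum[n_cycles])
print(f"estimated phase: {est_phase:.2f} rad")
```

Applying this per voxel and colour-coding the phases over the cortical surface yields the visual field maps from which area boundaries are drawn.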
Haider, Bilal; Krause, Matthew R.; Duque, Alvaro; Yu, Yuguo; Touryan, Jonathan; Mazer, James A.; McCormick, David A.
2011-01-01
SUMMARY During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RSC) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RSC neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses. PMID:20152117
Modeling human comprehension of data visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie
This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
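The core idea behind saliency models of this kind can be shown in a toy form. This numpy-only sketch is illustrative, not the Data Visualization Saliency Model itself: it uses plain intensity contrast (pixel versus local surround), whereas the real model combines richer visual features; the neighbourhood size is an arbitrary assumption.

```python
import numpy as np

def surround_mean(img, k=7):
    """Mean intensity in a k x k neighbourhood (k is an assumed scale)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def saliency(img):
    """Center-surround contrast: pixels that differ from their surround."""
    return np.abs(img.astype(float) - surround_mean(img))

chart = np.full((40, 40), 0.2)     # uniform background "chart"
chart[18:22, 18:22] = 1.0          # one high-contrast mark
s = saliency(chart)
peak = np.unravel_index(s.argmax(), s.shape)
print("most salient pixel:", peak)  # falls inside the bright mark
```

The evaluation use-case follows directly: compute the saliency map of a draft visualization and check whether its peaks coincide with the information the designer intends the viewer to notice first.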
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.
NASA Technical Reports Server (NTRS)
Patterson, J. C., Jr.; Jordan, F. L., Jr.
1975-01-01
A recently proposed method of flow visualization was investigated at the National Aeronautics and Space Administration's Langley Research Center. This method of flow visualization is particularly applicable to the study of lift-induced wing tip vortices, making it possible to record the entire life span of a vortex. To accomplish this, a vertical screen of smoke was produced perpendicular to the flight path and allowed to become stationary. A model was then driven through the screen of smoke, producing a circular vortex motion that was made visible as the smoke was drawn along the path taken by the flow and was recorded with high-speed motion pictures.
A short-wave infrared otoscope for middle ear disease diagnostics (Conference Presentation)
NASA Astrophysics Data System (ADS)
Carr, Jessica A.; Valdez, Tulio; Bruns, Oliver; Bawendi, Moungi
2016-02-01
Otitis media, a range of inflammatory conditions of the middle ear, is the second most common illness diagnosed in children. However, the diagnosis can be challenging, particularly in pediatric patients. Otitis media is commonly over-diagnosed and over-treated and has been identified as one of the primary factors in increased antibiotic resistance. We describe the development of a short-wave infrared (SWIR) otoscope for objective middle ear effusion diagnosis. The SWIR otoscope can unambiguously detect the presence of middle ear fluid based on its strong light absorption in the SWIR. This absorption causes a stark, visual contrast between the presence and absence of fluid behind the tympanic membrane. Additionally, when there is no middle ear fluid, the deeper tissue penetration of SWIR light allows the SWIR otoscope to better visualize middle ear anatomy through the tympanic membrane than is possible with visible light. We demonstrate that in healthy, adult human ears, SWIR otoscopy can image a range of middle ear anatomy, including landmarks of the entire ossicular chain, the promontory, the round window niche, and the chorda tympani. We suggest that SWIR otoscopy can provide valuable diagnostic information complementary to that provided by visible pneumotoscopy in the diagnosis of middle ear effusions, otitis media, and other maladies of the middle ear.
Mapping Topographic Structure in White Matter Pathways with Level Set Trees
Kent, Brian P.; Rinaldo, Alessandro; Yeh, Fang-Cheng; Verstynen, Timothy
2014-01-01
Fiber tractography on diffusion imaging data offers rich potential for describing white matter pathways in the human brain, but characterizing the spatial organization of these large and complex data sets remains a challenge. We show that level set trees, which provide a concise representation of the hierarchical mode structure of probability density functions, offer a statistically principled framework for visualizing and analyzing topography in fiber streamlines. Using diffusion spectrum imaging data collected on neurologically healthy controls (N = 30), we mapped white matter pathways from the cortex into the striatum using a deterministic tractography algorithm that estimates fiber bundles as dimensionless streamlines. Level set trees were used for interactive exploration of patterns in the endpoint distributions of the mapped fiber pathways and for an efficient segmentation of the pathways with empirical accuracy comparable to standard nonparametric clustering techniques. We show that level set trees can also be generalized to model pseudo-density functions in order to analyze a broader array of data types, including entire fiber streamlines. Finally, resampling methods show the reliability of the level set tree as a descriptive measure of topographic structure, illustrating its potential as a statistical descriptor in brain imaging analysis. These results highlight the broad applicability of level set trees for visualizing and analyzing high-dimensional data like fiber tractography output. PMID:24714673
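The abstract above rests on the notion of a level set tree: as a density threshold lam is lowered, the connected components of the upper level set {x : f(x) >= lam} grow and merge, and the tree records that hierarchy of modes. A minimal 1-D sketch of the component extraction follows; the paper applies the idea to streamline densities in far higher dimensions, and the toy grid values here are invented purely for illustration.

```python
def upper_level_components(density, lam):
    """Return the connected runs of grid cells whose density >= lam.

    In 1-D, a connected component of the upper level set is simply a
    maximal run of consecutive cells above the threshold; each run at
    a high threshold corresponds to one mode (one leaf of the tree).
    """
    components, current = [], []
    for i, value in enumerate(density):
        if value >= lam:
            current.append(i)
        elif current:
            components.append(current)
            current = []
    if current:
        components.append(current)
    return components

# A bimodal toy density with modes at indices 2 and 6.
density = [0.1, 0.5, 0.9, 0.5, 0.2, 0.6, 1.0, 0.6, 0.1]
```

Sweeping lam from high to low and recording where components merge yields the tree itself: at lam = 0.8 the toy density has two components (two leaves), while at lam = 0.15 they have merged into one.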
David, Nicole; R Schneider, Till; Vogeley, Kai; Engel, Andreas K
2011-10-01
Individuals suffering from autism spectrum disorders (ASD) often show a tendency for detail- or feature-based perception (also referred to as "local processing bias") instead of the more holistic stimulus processing typical for unaffected people. This local processing bias has been demonstrated for the visual and auditory domains, and there is evidence that multisensory processing may also be affected in ASD. Most multisensory processing paradigms used social-communicative stimuli, such as human speech or faces, probing the processing of simultaneously occurring sensory signals. Multisensory processing, however, is not limited to simultaneous stimulation. In this study, we investigated whether multisensory processing deficits in ASD persist when semantically complex but nonsocial stimuli are presented in succession. Fifteen adult individuals with Asperger syndrome and 15 control participants took part in a visual-audio priming task, which required the classification of sounds that were primed by either semantically congruent or incongruent preceding pictures of objects. As expected, performance on congruent trials was faster and more accurate compared with incongruent trials (crossmodal priming effect). The Asperger group, however, did not differ significantly from the control group. Our results do not support a general multisensory processing deficit that is universal to the entire autism spectrum. Copyright © 2011, International Society for Autism Research, Wiley-Liss, Inc.
Edge compression techniques for visualization of dense directed graphs.
Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher
2013-12-01
We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules', or groups of nodes, such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all share the same goal, compressing the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothesized trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
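The simplest of the three techniques, grouping nodes with identical neighbor sets, is easy to make concrete. In the hedged sketch below (the signature choice and function names are illustrative, not the paper's implementation), nodes sharing the same in- and out-neighbor sets collapse into one module, and a single module-to-module edge then stands for all pairwise edges between the members, so the original edge set is recoverable exactly:

```python
from collections import defaultdict

def compress(edges):
    """Losslessly compress a directed edge set by grouping nodes with
    identical (in-neighbor, out-neighbor) signatures into modules."""
    outs, ins, nodes = defaultdict(set), defaultdict(set), set()
    for u, v in edges:
        outs[u].add(v)
        ins[v].add(u)
        nodes.update((u, v))
    # Nodes with the same signature are interchangeable w.r.t. connectivity.
    modules = defaultdict(set)
    for n in nodes:
        modules[(frozenset(ins[n]), frozenset(outs[n]))].add(n)
    groups = [frozenset(m) for m in modules.values()]
    index = {n: g for g in groups for n in g}
    # One module-level edge implies full bipartite connectivity.
    module_edges = {(index[u], index[v]) for u, v in edges}
    return groups, module_edges

def decompress(module_edges):
    """Expand module-level edges back into the original edge set."""
    return {(u, v) for a, b in module_edges for u in a for v in b}
```

Modular Decomposition and Power Graph Analysis relax this strict signature condition, admitting nested modules and boundary-crossing edges; they compress further, which is exactly the readability trade-off the study measures.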
Acquisition and visualization techniques for narrow spectral color imaging.
Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón
2013-06-01
This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on non-underwater scenes recorded under controlled conditions. To this end, three multilayer narrow bandpass filters were employed, transmitting at the bluish wavelengths of 440, 456, and 470 nm, respectively. Since the differences among the images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed with which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.
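Because three channels sampled within a 30 nm window are nearly identical, any useful display mapping must amplify their tiny differences. The paper's mapping is continuous and perceptually near-uniform; the sketch below shows only the underlying idea of a decorrelation stretch (function name and min-max normalisation are our own assumptions, not the authors' method): project the channels onto their principal axes, equalise the variances along those axes, and scale the result to the display range.

```python
import numpy as np

def decorrelation_stretch(channels):
    """Amplify the differences between nearly identical channels.

    channels: array of shape (3, H, W) holding the narrow-band images.
    Returns an (H, W, 3) pseudo-colour image mapped into [0, 1].
    """
    c, h, w = channels.shape
    X = channels.reshape(c, -1).astype(float)
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    Y = vecs.T @ Xc                                  # decorrelated components
    Y /= np.sqrt(np.maximum(vals[:, None], 1e-12))   # equalise variances
    out = Y.reshape(c, h, w).transpose(1, 2, 0)
    out -= out.min()                                 # min-max map to [0, 1]
    peak = out.max()
    return out / peak if peak > 0 else out
```

On real narrow-band captures the low-variance principal components carry mostly sensor noise, so a practical mapping would regularise or denoise before stretching.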
Mission to Earth: LANDSAT Views the World. [Color imagery of the earth's surface
NASA Technical Reports Server (NTRS)
Short, N. M.; Lowman, P. D., Jr.; Freden, S. C.; Finch, W. A., Jr.
1976-01-01
The LANDSAT program and system are described. The entire global land surface of Earth is visualized in 400 color plates at a scale and resolution that depict natural and cultural features of man's familiar environments. A glossary is included.
Meaningful and Purposeful Practice
ERIC Educational Resources Information Center
Clementi, Donna
2014-01-01
This article describes a graphic, designed by Clementi and Terrill, the authors of "Keys to Planning for Learning" (2013), visually representing the components that contribute to meaningful and purposeful practice in learning a world language, practice that leads to greater proficiency. The entire graphic is centered around the letter…
Karpefors, Martin; Weatherall, James
2018-03-21
In contrast to efficacy, safety hypotheses of clinical trials are not always pre-specified, and therefore, the safety interpretation work of a trial tends to be more exploratory, often reactive, and the analysis more statistically and graphically challenging. We introduce a new means of visualizing the adverse event data across an entire clinical trial. The approach overcomes some of the current limitations of adverse event analysis and streamlines the way safety data can be explored, interpreted and analyzed. Using a phase II study, we describe and exemplify how the tendril plot effectively summarizes the time-resolved safety profile of two treatment arms in a single plot and how that can provide scientists with a trial safety overview that can support medical decision making. To our knowledge, the tendril plot is the only way to graphically show important treatment differences with preserved temporal information, across an entire clinical trial, in a single view.
ERIC Educational Resources Information Center
Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.
2012-01-01
Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…
ERIC Educational Resources Information Center
Stevens, J.A.
2005-01-01
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task…
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric
2015-01-01
While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625
Evolution and the origin of the visual retinoid cycle in vertebrates.
Kusakabe, Takehiro G; Takimoto, Noriko; Jin, Minghao; Tsuda, Motoyuki
2009-10-12
Absorption of a photon by visual pigments induces isomerization of 11-cis-retinaldehyde (RAL) chromophore to all-trans-RAL. Since the opsins lacking 11-cis-RAL lose light sensitivity, sustained vision requires continuous regeneration of 11-cis-RAL via the process called 'visual cycle'. Protostomes and vertebrates use essentially different machinery of visual pigment regeneration, and the origin and early evolution of the vertebrate visual cycle is an unsolved mystery. Here we compare visual retinoid cycles between different photoreceptors of vertebrates, including rods, cones and non-visual photoreceptors, as well as between vertebrates and invertebrates. The visual cycle systems in ascidians, the closest living relatives of vertebrates, show an intermediate state between vertebrates and non-chordate invertebrates. The ascidian larva may use retinochrome-like opsin as the major isomerase. The entire process of the visual cycle can occur inside the photoreceptor cells with distinct subcellular compartmentalization, although the visual cycle components are also present in surrounding non-photoreceptor cells. The adult ascidian probably uses RPE65 isomerase, and trans-to-cis isomerization may occur in distinct cellular compartments, which is similar to the vertebrate situation. The complete transition to the sophisticated retinoid cycle of vertebrates may have required acquisition of new genes, such as interphotoreceptor retinoid-binding protein, and functional evolution of the visual cycle genes.
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
Visualization of the Construction of Ancient Roman Buildings in Ostia Using Point Cloud Data
NASA Astrophysics Data System (ADS)
Hori, Y.; Ogawa, T.
2017-02-01
The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what is referred to as "point clouds". In addition, visualizations of the point cloud data, which can be used in final reports by archaeologists and architects, are usually produced as JPG or TIFF files. Beyond the visualization of point cloud data, the re-examination of older data and new surveys of Roman construction with remote-sensing technology for precise, detailed measurements yield new information that may lead to revised drawings of ancient buildings, drawings that had previously been adduced as evidence without any consideration of their accuracy, and can ultimately open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore skipped many post-processing steps and focused on images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer.
Conscious visual memory with minimal attention.
Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F
2017-02-01
Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations showed the validity of dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for the estimation of bio-signals based on human motion in daily life, for use in an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. For visualization applications, it is desirable to have a method that infers the activity of muscles from human motion data and evaluates the change in physiological parameters according to that motion. We assume that human motion is generated by muscle activity, which is reflected in bio-signals such as electromyograms. This paper introduces a neural-network-based method for estimating such bio-signals; the same procedure can be extended to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
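The estimation step can be illustrated with a deliberately small network. The sketch below uses synthetic data (the abstract does not specify the authors' architecture or features, so everything here is an assumption): a one-hidden-layer regressor maps motion features, a joint angle and its velocity, to a surrogate muscle-activity envelope taken to rise with movement speed.

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """One-hidden-layer tanh network trained by full-batch gradient
    descent on mean squared error; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)        # hidden activations
        pred = H @ W2 + b2              # estimated bio-signal envelope
        err = pred - y                  # residuals, shape (n, 1)
        # Backpropagate the mean-squared-error gradient.
        gW2 = H.T @ err / len(X)
        gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1.0 - H ** 2)
        gW1 = X.T @ dH / len(X)
        gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# Synthetic "motion": an oscillating joint angle; the surrogate EMG
# envelope rises with movement speed (an assumption made purely so
# the example has a target to learn).
t = np.linspace(0.0, 4.0 * np.pi, 200)
X = np.column_stack([np.sin(t), np.cos(t)])   # angle, angular velocity
y = np.abs(np.cos(t)).reshape(-1, 1)          # surrogate envelope
predict = train_mlp(X, y)
```

After training, the network's predictions should track the envelope better than a constant baseline, which is the minimal sanity check one would run before moving to real electromyogram data.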
Visualization of N-body Simulations in Virtual Worlds
NASA Astrophysics Data System (ADS)
Knop, Robert A.; Ames, J.; Djorgovski, G.; Farr, W.; Hut, P.; Johnson, A.; McMillan, S.; Nakasone, A.; Vesperini, E.
2010-01-01
We report on work to use virtual worlds for visualizing the results of N-body calculations, on three levels. First, we have written a demonstration 3-body solver entirely in the scripting language of the widely used virtual world Second Life. Second, we have written a physics module for the open source virtual world OpenSim that performs N-body calculations as the physics engine for the server, allowing natural 3-d visualization of the solution as the solution is being performed. Finally, we give an initial report on the potential use of virtual worlds to visualize calculations which have previously been performed, or which are being performed in other processes and reported to the virtual world server. This work has been performed as part of the Meta-Institute of Computational Astrophysics (MICA). http://www.mica-vw.org
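At its core, the physics-engine level of this work is direct N-body integration. A minimal hedged sketch (not MICA's or OpenSim's actual code) of a kick-drift-kick leapfrog integrator with Plummer softening, the standard symplectic workhorse for such simulations:

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Pairwise Newtonian gravity (G = 1) with Plummer softening eps."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        for j in range(len(mass)):
            if i != j:
                d = pos[j] - pos[i]
                r2 = d @ d + eps ** 2
                acc[i] += mass[j] * d / r2 ** 1.5
    return acc

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog; mutates pos and vel, returns them."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc      # half kick
        pos += dt * vel            # drift
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc      # half kick
    return pos, vel
```

Because the pairwise forces are antisymmetric, total momentum is conserved to rounding error, which makes a convenient correctness check; a production engine would also vectorise the double loop and stream each intermediate state to the virtual-world client for visualization.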
Visual Environments for CFD Research
NASA Technical Reports Server (NTRS)
Watson, Val; George, Michael W. (Technical Monitor)
1994-01-01
This viewgraph presentation gives an overview of visual environments for computational fluid dynamics (CFD) research. It details critical needs for the future computing environment, the features required to attain that environment, prospects for change in the human-computer interface and the impact of the visualization revolution on it, human processing capabilities, and the limits of the personal environment and its extension with computers. Information is given on the need for more 'visual' thinking (including examples of visual thinking), an evaluation of alternative approaches to and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.
The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.
Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R
2016-03-01
Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.
A simpler primate brain: the visual system of the marmoset monkey
Solomon, Samuel G.; Rosa, Marcello G. P.
2014-01-01
Humans are diurnal primates with high visual acuity at the center of gaze. Although primates share many similarities in the organization of their visual centers with other mammals, and even other species of vertebrates, their visual pathways also show unique features, particularly with respect to the organization of the cerebral cortex. Therefore, in order to understand some aspects of human visual function, we need to study non-human primate brains. Which species is the most appropriate model? Macaque monkeys, the most widely used non-human primates, are not an optimal choice in many practical respects. For example, much of the macaque cerebral cortex is buried within sulci, and is therefore inaccessible to many imaging techniques, and the postnatal development and lifespan of macaques are prohibitively long for many studies of brain maturation, plasticity, and aging. In these and several other respects the marmoset, a small New World monkey, represents a more appropriate choice. Here we review the visual pathways of the marmoset, highlighting recent work that brings these advantages into focus, and identify where additional work needs to be done to link marmoset brain organization to that of macaques and humans. We will argue that the marmoset monkey provides a good subject for studies of a complex visual system, which will likely allow an important bridge linking experiments in animal models to humans. PMID:25152716
Response to 'pervasive sequence patents cover the entire human genome' - authors' reply.
Rosenfeld, Jeffrey; Mason, Christopher
2014-01-01
An author reply to the Letter to the Editor from Tu et al. regarding Pervasive sequence patents cover the entire human genome by J Rosenfeld and C Mason. Genome Med 2013, 5:27. See related Correspondence by Rosenfeld and Mason, http://genomemedicine.com/content/5/3/27, and related letter by Tu et al., http://genomemedicine.com/content/6/2/14.
Human Factors Assessment of Vibration Effects on Visual Performance During Launch
NASA Technical Reports Server (NTRS)
Holden, Kritina
2009-01-01
The Human Factors Assessment of Vibration Effects on Visual Performance During Launch (Visual Performance) investigation will determine visual performance limits during operational vibration and g-loads on the Space Shuttle, specifically through the determination of minimum readable font size during ascent using planned Orion display formats. Research Summary: The aim of the Human Factors Assessment of Vibration Effects on Visual Performance during Launch (Visual Performance) investigation is to provide supplementary data to that collected by the Thrust Oscillation Seat Detailed Technical Objective (DTO) 695 (Crew Seat DTO), which will measure seat acceleration and vibration from one flight deck and two middeck seats during ascent. While the Crew Seat DTO data alone are important in terms of providing a measure of vibration and g-loading, human performance data are required to fully interpret the operational consequences of the vibration values collected during Space Shuttle ascent. During launch, crewmembers will be requested to view placards with varying font sizes and indicate the minimum readable size. In combination with the Crew Seat DTO, the Visual Performance investigation will: (1) provide flight-validated evidence that will be used to establish vibration limits for visual performance during combined vibration and linear g-loading; (2) provide flight data as inputs to ongoing ground-based simulations, which will further validate crew visual performance under vibration loading in a controlled environment; and (3) provide vibration and performance metrics to help validate procedures for ground tests and analyses of seats, suits, displays and controls, and human-in-the-loop performance.
Overview of Human-Centric Space Situational Awareness Science and Technology
2012-09-01
AGI), the developers of Satellite Tool Kit (STK), has provided demonstrations of innovative SSA visualization concepts that take advantage of the...needs inherent with SSA. RH has conducted CTAs and developed work-centered human-computer interfaces, visualizations, and collaboration technologies...all end users. RH's Battlespace Visualization Branch researches methods to exploit the visual channel primarily to improve decision making and
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Light, Janice
2011-01-01
Purpose: Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs.…
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. 
However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.
Adaptive Modeling Language and Its Derivatives
NASA Technical Reports Server (NTRS)
Chemaly, Adel
2006-01-01
Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that is powerfully affected by perceptual load. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and that evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component of the evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
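The mapping from clinical acuity to display resolution described above can be made concrete with a back-of-the-envelope calculation. The sketch below is an illustration, not taken from the paper: 20/20 Snellen acuity corresponds to resolving roughly 1 arcmin of detail (about 30 cycles per degree), and rendering that on a display requires about 2 pixels per cycle (Nyquist).

```python
def required_pixels_per_degree(snellen_denominator: float = 20.0) -> float:
    """Display resolution (pixels/degree) needed to match a given Snellen
    acuity, assuming ~2 pixels per cycle (Nyquist). 20/20 vision resolves
    1 arcmin detail, i.e. about 30 cycles per degree."""
    cycles_per_degree = 30.0 * (20.0 / snellen_denominator)
    return 2.0 * cycles_per_degree

def display_pixels_per_degree(h_pixels: int, h_fov_deg: float) -> float:
    """Average horizontal pixels per degree of a simulator display channel."""
    return h_pixels / h_fov_deg

# Hypothetical channel: 1920 pixels spanning a 40-degree field of view.
ppd = display_pixels_per_degree(1920, 40.0)
need = required_pixels_per_degree(20.0)
print(f"display: {ppd:.0f} px/deg, 20/20 acuity needs: {need:.0f} px/deg")
```

By this rough argument, a 1920-pixel channel spanning 40 degrees (48 px/deg) falls short of the ~60 px/deg needed to present detail at the 20/20 acuity limit; actual requirements depend on contrast and temporal factors the paper discusses.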
Art, Illusion and the Visual System.
ERIC Educational Resources Information Center
Livingstone, Margaret S.
1988-01-01
Describes the three-part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)
Mönter, Vera M; Crabb, David P; Artes, Paul H
2017-02-01
Peripheral vision is important for mobility, balance, and guidance of attention, but standard perimetry examines only <20% of the entire visual field. We report on the relation between central and peripheral visual field damage, and on retest variability, with a simple approach for automated kinetic perimetry (AKP) of the peripheral field. Thirty patients with glaucoma (median age 68, range 59-83 years; median Mean Deviation -8.0, range -16.3 to 0.1 dB) performed AKP and static automated perimetry (SAP) (German Adaptive Threshold Estimation strategy, 24-2 test). Automated kinetic perimetry consisted of a fully automated measurement of a single isopter (III.1.e). Central and peripheral visual fields were measured twice on the same day. Peripheral and central visual fields were only moderately related (Spearman's ρ = 0.51). Approximately 90% of test-retest differences in mean isopter radius were < ±4 deg. Relative to the range of measurements in this sample, the retest variability of AKP was similar to that of SAP. Patients with similar central visual field loss can have strikingly different peripheral visual fields, and therefore measuring the peripheral visual field may add clinically valuable information.
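The two statistics reported above (Spearman's rank correlation between central and peripheral measures, and the spread of test-retest differences) are straightforward to reproduce in outline. A minimal numpy sketch with illustrative made-up radii, making no claim to match the paper's exact estimation procedure (ties receive arbitrary rather than average ranks here):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation; ties get arbitrary (not average) ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def retest_spread(test, retest, coverage=0.9):
    """Half-width of the symmetric interval holding ~coverage of the
    test-retest differences (cf. '90% of differences < +/-4 deg')."""
    d = np.abs(np.asarray(retest, float) - np.asarray(test, float))
    return float(np.quantile(d, coverage))

# Illustrative (invented) mean isopter radii for 5 patients, in degrees:
test = np.array([42.0, 35.5, 50.1, 28.9, 45.3])
retest = np.array([43.1, 34.0, 51.0, 30.2, 45.0])
print(spearman_rho(test, retest))
print(retest_spread(test, retest))
```

The quantile-based spread is one simple reading of "90% of test-retest differences were < ±4 deg"; the paper may have tabulated the differences differently.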
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
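The "degree of overlap between the coarse footprints of a pair of images" can be illustrated with a toy measure. The sketch below is one plausible reading, not the authors' actual computation: a coarse footprint is approximated by block-averaging (discarding local features, keeping global arrangement), and overlap by cosine similarity.

```python
import numpy as np

def coarse_footprint(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Coarse spatial footprint: block-average the image so local features
    are discarded and only the global arrangement remains (an assumed
    reading of the abstract's 'coarse footprint')."""
    h = (img.shape[0] // block) * block
    w = (img.shape[1] // block) * block
    img = img[:h, :w]
    return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def footprint_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized overlap (cosine similarity) of two coarse footprints."""
    fa = coarse_footprint(a).ravel()
    fb = coarse_footprint(b).ravel()
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb) + 1e-12))

# Same local features, different global arrangement -> lower overlap:
rng = np.random.default_rng(0)
patch = rng.random((8, 8))
img1 = np.zeros((32, 32)); img1[:8, :8] = patch      # patch at top-left
img2 = np.zeros((32, 32)); img2[24:, 24:] = patch    # patch at bottom-right
print(footprint_overlap(img1, img1))  # ~1.0 (identical images)
print(footprint_overlap(img1, img2))  # 0.0 (disjoint coarse footprints)
```

Under this toy measure, images sharing local features but differing in global layout have low footprint overlap, mirroring the abstract's claim that global similarity, not local feature content, predicts search difficulty.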
Pietersen, Alexander N.J.; Cheong, Soon Keen; Munn, Brandon; Gong, Pulin; Solomon, Samuel G.
2017-01-01
Key points: How parallel are the primate visual pathways? In the present study, we demonstrate that parallel visual pathways in the dorsal lateral geniculate nucleus (LGN) show distinct patterns of interaction with rhythmic activity in the primary visual cortex (V1). In the V1 of anaesthetized marmosets, the EEG frequency spectrum undergoes transient changes that are characterized by fluctuations in delta-band EEG power. We show that, on multisecond timescales, spiking activity in an evolutionarily primitive (koniocellular) LGN pathway is specifically linked to these slow EEG spectrum changes. By contrast, on subsecond (delta frequency) timescales, cortical oscillations can entrain spiking activity throughout the entire LGN. Our results are consistent with the hypothesis that, in waking animals, the koniocellular pathway selectively participates in brain circuits controlling vigilance and attention. Abstract: The major afferent cortical pathway in the visual system passes through the dorsal lateral geniculate nucleus (LGN), where nerve signals originating in the eye can first interact with brain circuits regulating visual processing, vigilance and attention. In the present study, we investigated how ongoing and visually driven activity in magnocellular (M), parvocellular (P) and koniocellular (K) layers of the LGN are related to cortical state. We recorded extracellular spiking activity in the LGN simultaneously with local field potentials (LFP) in primary visual cortex, in sufentanil-anaesthetized marmoset monkeys. We found that asynchronous cortical states (marked by low power in delta-band LFPs) are linked to high spike rates in K cells (but not P cells or M cells), on multisecond timescales. Cortical asynchrony precedes the increases in K cell spike rates by 1-3 s, implying causality.
At subsecond timescales, the spiking activity in many cells of all (M, P and K) classes is phase-locked to delta waves in the cortical LFP, and more cells are phase-locked during synchronous cortical states than during asynchronous cortical states. The switch from low-to-high spike rates in K cells does not degrade their visual signalling capacity. By contrast, during asynchronous cortical states, the fidelity of visual signals transmitted by K cells is improved, probably because K cell responses become less rectified. Overall, the data show that slow fluctuations in cortical state are selectively linked to K pathway spiking activity, whereas delta-frequency cortical oscillations entrain spiking activity throughout the entire LGN, in anaesthetized marmosets. PMID:28116750
A Probabilistic Palimpsest Model of Visual Short-term Memory
Matthey, Loic; Bays, Paul M.; Dayan, Peter
2015-01-01
Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204
HPV strain distribution in patients with genital warts in a female population sample.
Boda, Daniel; Neagu, Monica; Constantin, Carolina; Voinescu, Razvan Nicolae; Caruntu, Constantin; Zurac, Sabina; Spandidos, Demetrios A; Drakoulis, Nikolaos; Tsoukalas, Dimitrios; Tsatsakis, Aristides M
2016-09-01
The incidence of human papillomavirus (HPV) in the human cancer domain is still a subject of intensive study. In this study, we examined cervical swab samples from 713 females with genital warts, and tested the samples for high- and low-risk genital HPV. HPV genotyping was assessed using a Genotyping test that detects HPV by the amplification of target DNA using polymerase chain reaction and nucleic acid hybridization. In total, we detected 37 anogenital HPV DNA genotypes [6, 11, 16, 18, 26, 31, 33, 35, 39, 40, 42, 45, 51, 52, 53, 54, 55, 56, 58, 59, 61, 62, 64, 66, 67, 68, 69, 70, 71, 72, 73 (MM9), 81, 82 (MM4), 83 (MM7), 84 (MM8), IS39 and CP6108] and investigated the incidence of these genotypes in the patients with genital warts. We found differences in the distribution of high-/low-risk strains and the incidence of high-risk strains was found to occur mainly in females under 35 years of age. The data from our study suggest that a detailed oral, rectal and genital identification of high-risk strains should be performed to visualize the entire pattern of possible triggers of carcinogenesis.
Human Factors Engineering Program Review Model
2004-02-01
Institute, 1993). ANSI HFS-100: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI HFS-100-1988). Santa Monica, California
NASA Astrophysics Data System (ADS)
Jones, P. W.; Strelitz, R. A.
2012-12-01
The output of a simulation is best comprehended through the agency and methods of visualization, but a vital component of good science is knowledge of uncertainty. While great strides have been made in the quantification of uncertainty, especially in simulation, there is still a notable gap: there is no widely accepted means of simultaneously viewing the data and the associated uncertainty in one pane. Visualization saturates the screen, using the full range of color, shadow, opacity and tricks of perspective to display even a single variable; there is no room left in the visualization expert's repertoire for uncertainty. We present a method of visualizing uncertainty, without sacrificing the clarity and power of the underlying visualization, that works as well in 3-D and time-varying visualizations as it does in 2-D. At its heart, it relies on a principal tenet of continuum mechanics, replacing the notion of value at a point with a more diffuse notion of density as a measure of content in a region. First, the uncertainties calculated or tabulated at each point are transformed into a piecewise-continuous field of uncertainty density. We next compute a weighted Voronoi tessellation of a user-specified number N of convex polygonal/polyhedral cells such that each cell contains the same amount of uncertainty as measured by that density field; the problem thus devolves into a minimization. Computation of such a spatial decomposition is O(N*N), and it can be computed iteratively, making it easy to update over time. The polygonal mesh does not interfere with the visualization of the data and can be easily toggled on or off. In this representation, a small cell implies a great concentration of uncertainty, and conversely. The content-weighted polygons are identical to the cartograms familiar to the information visualization community in the depiction of things like voting results per state.
Furthermore, one can dispense with the mesh or edges entirely, replacing them with symbols or glyphs at the generating points (effectively the centers of the polygons). This methodology readily admits rigorous statistical analysis using standard components found in R, and is thus entirely compatible with the visualization packages we use (VisIt and/or ParaView), the language we use (Python) and the UVCDAT environment that provides the programmer and analyst workbench. We will demonstrate the power and effectiveness of this methodology in climate studies. We will further argue that our method of defining (or predicting) values in a region has many advantages over the traditional visualization notion of value at a point.
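The equal-uncertainty tessellation at the heart of the abstract can be approximated very simply. The sketch below is a stand-in, not the authors' algorithm: it resamples points in proportion to an uncertainty density and runs plain Lloyd/k-means on the samples, so the resulting Voronoi cells each capture roughly equal uncertainty mass (small cells where uncertainty concentrates).

```python
import numpy as np

def equal_content_cells(xy, u, n_cells=10, iters=50, seed=0):
    """Partition space so each cell holds roughly equal uncertainty mass.
    Approximation: resample points with probability proportional to their
    uncertainty u, then run Lloyd's k-means on the samples; the centroids
    crowd into high-uncertainty regions, so their Voronoi cells contain
    comparable uncertainty. (A simplification of the weighted Voronoi
    tessellation described in the abstract.)"""
    rng = np.random.default_rng(seed)
    p = u / u.sum()
    samples = xy[rng.choice(len(xy), size=5000, p=p)]
    centers = samples[rng.choice(len(samples), size=n_cells, replace=False)]
    for _ in range(iters):
        d = ((samples[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        label = d.argmin(1)
        for k in range(n_cells):
            pts = samples[label == k]
            if len(pts):
                centers[k] = pts.mean(0)
    return centers

# Uncertainty concentrated near the origin draws small cells there:
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(2000, 2))
u = np.exp(-(xy ** 2).sum(1) / 0.1)   # high uncertainty near (0, 0)
centers = equal_content_cells(xy, u, n_cells=8)
print(np.abs(centers).max())  # centers cluster in the high-uncertainty region
```

Because the centroids concentrate where the density is high, cell area shrinks there, matching the abstract's reading that "a small cell implies a great concentration of uncertainty."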
Attractive Flicker--Guiding Attention in Dynamic Narrative Visualizations.
Waldner, Manuela; Le Muzic, Mathieu; Bernhard, Matthias; Purgathofer, Werner; Viola, Ivan
2014-12-01
Focus+context techniques provide visual guidance in visualizations by giving strong visual prominence to elements of interest while the context is suppressed. However, finding a visual feature to enhance for the focus to pop out from its context in a large dynamic scene, while leading to minimal visual deformation and subjective disturbance, is challenging. This paper proposes Attractive Flicker, a novel technique for visual guidance in dynamic narrative visualizations. We first show that flicker is a strong visual attractor in the entire visual field, without distorting, suppressing, or adding any scene elements. The novel aspect of our Attractive Flicker technique is that it consists of two signal stages: The first "orientation stage" is a short but intensive flicker stimulus to attract the attention to elements of interest. Subsequently, the intensive flicker is reduced to a minimally disturbing luminance oscillation ("engagement stage") as visual support to keep track of the focus elements. To find a good trade-off between attraction effectiveness and subjective annoyance caused by flicker, we conducted two perceptual studies to find suitable signal parameters. We showcase Attractive Flicker with the parameters obtained from the perceptual statistics in a study of molecular interactions. With Attractive Flicker, users were able to easily follow the narrative of the visualization on a large display, while the flickering of focus elements was not disturbing when observing the context.
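The two-stage signal can be sketched as a luminance gain profile over time. Every numeric parameter below is an illustrative assumption; the paper derives its actual values from the two perceptual studies it reports.

```python
import numpy as np

def attractive_flicker(duration_s=5.0, fps=60,
                       orient_s=0.5, f_hz=8.0,
                       orient_amp=0.8, engage_amp=0.1):
    """Two-stage luminance modulation sketched from the abstract: a short,
    intense 'orientation' flicker to attract attention, followed by a
    subtle 'engagement' oscillation to support tracking. All parameter
    values are illustrative assumptions, not the paper's results."""
    t = np.arange(int(duration_s * fps)) / fps
    amp = np.where(t < orient_s, orient_amp, engage_amp)
    return 1.0 + amp * np.sin(2 * np.pi * f_hz * t)  # multiplicative gain

gain = attractive_flicker()
print(gain[:30].max() - gain[:30].min())   # ~1.6: strong orientation stage
print(gain[60:].max() - gain[60:].min())   # ~0.2: subtle engagement stage
```

Applying this gain to the luminance of focus elements reproduces the qualitative design: a brief, hard-to-miss flicker, then a minimally disturbing oscillation.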
Digital Images and Human Vision
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.
The Anatomical and Functional Organization of the Human Visual Pulvinar
Pinsk, Mark A.; Kastner, Sabine
2015-01-01
The pulvinar is the largest nucleus in the primate thalamus and contains extensive, reciprocal connections with visual cortex. Although the anatomical and functional organization of the pulvinar has been extensively studied in old and new world monkeys, little is known about the organization of the human pulvinar. Using high-resolution functional magnetic resonance imaging at 3 T, we identified two visual field maps within the ventral pulvinar, referred to as vPul1 and vPul2. Both maps contain an inversion of contralateral visual space with the upper visual field represented ventrally and the lower visual field represented dorsally. vPul1 and vPul2 border each other at the vertical meridian and share a representation of foveal space with iso-eccentricity lines extending across areal borders. Additional, coarse representations of contralateral visual space were identified within ventral medial and dorsal lateral portions of the pulvinar. Connectivity analyses on functional and diffusion imaging data revealed a strong distinction in thalamocortical connectivity between the dorsal and ventral pulvinar. The two maps in the ventral pulvinar were most strongly connected with early and extrastriate visual areas. Given the shared eccentricity representation and similarity in cortical connectivity, we propose that these two maps form a distinct visual field map cluster and perform related functions. The dorsal pulvinar was most strongly connected with parietal and frontal areas. The functional and anatomical organization observed within the human pulvinar was similar to the organization of the pulvinar in other primate species. SIGNIFICANCE STATEMENT The anatomical organization and basic response properties of the visual pulvinar have been extensively studied in nonhuman primates. Yet, relatively little is known about the functional and anatomical organization of the human pulvinar. 
Using neuroimaging, we found multiple representations of visual space within the ventral human pulvinar and extensive topographically organized connectivity with visual cortex. This organization is similar to other nonhuman primates and provides additional support that the general organization of the pulvinar is consistent across the primate phylogenetic tree. These results suggest that the human pulvinar, like other primates, is well positioned to regulate corticocortical communication. PMID:26156987
A comparative psychophysical approach to visual perception in primates.
Matsuno, Toyomi; Fujita, Kazuo
2009-04-01
Studies on the visual processing of primates, which have well developed visual systems, provide essential information about the perceptual bases of their higher-order cognitive abilities. Although the mechanisms underlying visual processing are largely shared between human and nonhuman primates, differences have also been reported. In this article, we review psychophysical investigations comparing the basic visual processing that operates in human and nonhuman species, and discuss the future contributions potentially deriving from such comparative psychophysical approaches to primate minds.
Smart in Everything Except School.
ERIC Educational Resources Information Center
Getman, G. N.
This book focuses on the prevention of academic failure through attention to developmental processes (especially development of essential visual skills) within the individual learner. A distinction is made between sight and vision, with vision involving the entire person and his/her learning experiences. The first chapter examines "The Dynamics of the…
46 CFR 61.20-18 - Examination requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... fitted) and propeller designed in accordance with American Bureau of Shipping standards to reduce stress... visual inspection of the entire shaft. (c) On tailshafts with a propeller fitted to the shaft by means of a coupling flange, the flange, the fillet at the propeller end, and each coupling bolt must be...
46 CFR 61.20-18 - Examination requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... fitted) and propeller designed in accordance with American Bureau of Shipping standards to reduce stress... visual inspection of the entire shaft. (c) On tailshafts with a propeller fitted to the shaft by means of a coupling flange, the flange, the fillet at the propeller end, and each coupling bolt must be...
46 CFR 61.20-18 - Examination requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... fitted) and propeller designed in accordance with American Bureau of Shipping standards to reduce stress... visual inspection of the entire shaft. (c) On tailshafts with a propeller fitted to the shaft by means of a coupling flange, the flange, the fillet at the propeller end, and each coupling bolt must be...
46 CFR 61.20-18 - Examination requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... fitted) and propeller designed in accordance with American Bureau of Shipping standards to reduce stress... visual inspection of the entire shaft. (c) On tailshafts with a propeller fitted to the shaft by means of a coupling flange, the flange, the fillet at the propeller end, and each coupling bolt must be...
46 CFR 61.20-18 - Examination requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... fitted) and propeller designed in accordance with American Bureau of Shipping standards to reduce stress... visual inspection of the entire shaft. (c) On tailshafts with a propeller fitted to the shaft by means of a coupling flange, the flange, the fillet at the propeller end, and each coupling bolt must be...
Graphic Novels in the Classroom
ERIC Educational Resources Information Center
Martin, Adam
2009-01-01
Today many authors and artists adapt works of classic literature into a medium more "user friendly" to the increasingly visual student population. Stefan Petrucha and Kody Chamberlain's version of "Beowulf" is one example. The graphic novel captures the entire epic in arresting images and contrasts the darkness of the setting and characters with…
USDA-ARS?s Scientific Manuscript database
Anoplophora glabripennis has a complex suite of mate-finding behaviors, the functions of which are not entirely understood. These behaviors are elicited by a number of factors, including visual and chemical cues. Chemical cues include a male-produced volatile semiochemical acting as a long-range sex...
The Arts and the Inner Lives of Teachers.
ERIC Educational Resources Information Center
Powell, Mary Clare
1997-01-01
Creative Arts in Learning, a master's degree program at Lesley College Graduate School, acknowledges the importance of teacher creativity. By feeding teachers' inner lives, the arts can transform the tone of classrooms or entire schools. Courses in storytelling, visual arts, and drama help teachers demystify the arts, learn alternative…
Response to ‘pervasive sequence patents cover the entire human genome’ - authors’ reply
2014-01-01
An author reply to the Letter to the Editor from Tu et al. regarding Pervasive sequence patents cover the entire human genome by J Rosenfeld and C Mason. Genome Med 2013, 5:27. See related Correspondence by Rosenfeld and Mason, http://genomemedicine.com/content/5/3/27, and related letter by Tu et al., http://genomemedicine.com/content/6/2/14 PMID:24764495
National Laboratory for Advanced Scientific Visualization at UNAM - Mexico
NASA Astrophysics Data System (ADS)
Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo
2016-04-01
In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, as well as business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services that spans a variety of areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics and mathematics related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the 3D fully immersive display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large 3.6 m-wide room with images projected on the front, left and right walls, as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide a strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization, such as geophysical, meteorological, climate and ecology data.
The HPCC-ADA is a system of more than 1000 computing cores, offering parallel computing resources to applications that require large quantities of memory as well as large, fast parallel storage. The system's temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.
Modification of visual function by early visual experience.
Blakemore, C
1976-07-01
Physiological experiments, involving recording from the visual cortex in young kittens and monkeys, have given new insight into human developmental disorders. In the visual cortex of normal cats and monkeys most neurones are selectively sensitive to the orientation of moving edges and they receive very similar signals from both eyes. Even in very young kittens without visual experience, most neurones are binocularly driven and a small proportion of them are genuinely orientation selective. There is no passive maturation of the system in the absence of visual experience, but even very brief exposure to patterned images produces rapid emergence of the adult organization. These results are compared to observations on humans who have "recovered" from early blindness. Covering one eye in a kitten or a monkey, during a sensitive period early in life, produces a virtually complete loss of input from that eye in the cortex. These results can be correlated with the production of "stimulus deprivation amblyopia" in infants who have had one eye patched. Induction of a strabismus causes a loss of binocularity in the visual cortex, and in humans it leads to a loss of stereoscopic vision and binocular fusion. Exposing kittens to lines of one orientation modifies the preferred orientations of cortical cells and there is an analogous "meridional amblyopia" in astigmatic humans. The existence of a sensitive period in human vision is discussed, as well as the possibility of designing remedial and preventive treatments for human developmental disorders.
LSSGalPy: Interactive Visualization of the Large-scale Environment Around Galaxies
NASA Astrophysics Data System (ADS)
Argudo-Fernández, M.; Duarte Puertas, S.; Ruiz, J. E.; Sabater, J.; Verley, S.; Bergond, G.
2017-05-01
New tools are needed to handle the growth of data in astrophysics delivered by recent and upcoming surveys. We aim to build open-source, light, flexible, and interactive software designed to visualize extensive three-dimensional (3D) tabular data. Entirely written in the Python language, we have developed interactive tools to browse and visualize the positions of galaxies in the universe with respect to its large-scale structures (LSS). Motivated by a previous study, we created two codes using Mollweide-projection and wedge-diagram visualizations, in which survey galaxies can be overplotted on the LSS of the universe. These are interactive representations whose visualizations can be controlled by widgets. We have released these open-source codes, which are designed to be easily re-used and customized by the scientific community to fulfill their needs. The codes are adaptable to other kinds of 3D tabular data and are robust enough to handle several million objects.
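The Mollweide-projection view described above can be sketched in a few lines of Python with matplotlib's built-in Mollweide axes. This is an illustrative reconstruction, not LSSGalPy's actual code, and the randomly generated catalogue stands in for real survey data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

def radec_to_mollweide(ra_deg, dec_deg):
    """Convert RA/Dec in degrees to radians for matplotlib's Mollweide axes.

    matplotlib expects longitude in [-pi, pi] and latitude in [-pi/2, pi/2];
    RA is wrapped so that values above 180 degrees map to negative longitude.
    """
    ra = np.remainder(np.asarray(ra_deg) + 180.0, 360.0) - 180.0
    return np.deg2rad(ra), np.deg2rad(dec_deg)

# Hypothetical catalogue: 500 galaxy positions drawn uniformly on the sphere
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 360.0, 500)                          # right ascension [deg]
dec = np.rad2deg(np.arcsin(rng.uniform(-1.0, 1.0, 500)))   # declination [deg]

lon, lat = radec_to_mollweide(ra, dec)
fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(lon, lat, s=2)   # survey galaxies; LSS could be overplotted likewise
ax.grid(True)
fig.savefig("galaxies_mollweide.png")
```

In the real tool the static scatter call would be driven by interactive widgets; the coordinate wrapping is the only non-obvious step.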
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, the high-dimensional feature vector is fused into a single visual comfort score by applying a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
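The two-stage pipeline can be sketched as follows. The specific statistics, the synthetic comfort labels, and the map sizes are all assumptions for illustration; only the overall shape (saliency-weighted disparity statistics fed to a random forest regressor) follows the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def comfort_features(disparity, saliency):
    """Stage 1: saliency-weighted disparity statistics (illustrative set).

    Each statistic is weighted by where viewers are likely to attend, so
    disparities in salient regions dominate the feature vector.
    """
    w = saliency / saliency.sum()
    mean = np.sum(w * disparity)
    var = np.sum(w * (disparity - mean) ** 2)
    return np.array([mean, np.sqrt(var), disparity.max(), disparity.min()])

# Stage 2: regress comfort scores with a random forest on synthetic data
rng = np.random.default_rng(1)
X = np.stack([comfort_features(rng.normal(0.0, 1.0 + 0.1 * i, (32, 32)),
                               rng.uniform(0.0, 1.0, (32, 32)))
              for i in range(60)])
y = -np.abs(X[:, 0]) - X[:, 1]  # toy rule: larger disparities -> more discomfort
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
pred = model.predict(X[:5])     # predicted comfort scores for 5 images
```

In practice the disparity and saliency maps would come from a stereo matcher and the paper's 3D saliency model rather than random arrays.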
Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey
State-of-the-art visual analytics models and frameworks mostly assume a static snapshot of the data, while in many cases it is a stream with constant updates and changes. Exploration of streaming data poses unique challenges as machine-level computations and abstractions need to be synchronized with the visual representation of the data and the temporally evolving human insights. In the visual analytics literature, we lack a thorough characterization of streaming data and analysis of the challenges associated with task abstraction, visualization design, and adaptation of the role of human-in-the-loop for exploration of data streams. We aim to fill this gap by conducting a survey of the state-of-the-art in visual analytics of streaming data for systematically describing the contributions and shortcomings of current techniques and analyzing the research gaps that need to be addressed in the future. Our contributions are: i) problem characterization for identifying challenges that are unique to streaming data analysis tasks, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space for dynamic data and the role of the human-in-the-loop, and iii) reflections on the design trade-offs for streaming visual analytics techniques and their practical applicability in real-world application scenarios.
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
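The minimum-norm approach that performed well above has a compact closed form: with lead field L, measurement b, and Tikhonov regularization, the source estimate is s = Lᵀ(LLᵀ + λ²I)⁻¹b. A minimal numpy sketch (the dimensions and the single-source test signal are illustrative assumptions, not the study's data):

```python
import numpy as np

def minimum_norm_estimate(L, b, lam=0.1):
    """L2 minimum-norm inverse: s = L.T @ inv(L @ L.T + lam**2 * I) @ b.

    L: (n_sensors, n_sources) lead field; b: (n_sensors,) measurement.
    Regularization is essential because n_sources >> n_sensors makes the
    localization problem ill-posed, as the abstract notes.
    """
    n = L.shape[0]
    G = L @ L.T + lam**2 * np.eye(n)
    return L.T @ np.linalg.solve(G, b)

rng = np.random.default_rng(2)
n_sensors, n_sources = 50, 500
L = rng.normal(size=(n_sensors, n_sources))   # toy lead field
s_true = np.zeros(n_sources)
s_true[42] = 1.0                              # one active source
b = L @ s_true                                # noiseless sensor data
s_hat = minimum_norm_estimate(L, b, lam=0.01)
```

The estimate spreads energy across correlated sources (a known limitation of minimum norm), but its largest values concentrate near the true source, which is the property the retinotopic-localization comparison relies on.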
Accessing Earth Science Data Visualizations through NASA GIBS & Worldview
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Boller, R. A.; Baynes, K.; Wong, M. M.; King, B. A.; Schmaltz, J. E.; De Luca, A. P.; King, J.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Pressley, N. N.
2017-12-01
For more than 20 years, the NASA Earth Observing System (EOS) has operated dozens of remote sensing satellites collecting nearly 15 petabytes of data that span thousands of science parameters. Within these observations are keys that Earth scientists have used to unlock much of what we understand about our planet. Also contained within these observations are myriad opportunities for learning and education. The trick is making them accessible to educators and students in convenient and simple ways, so that effort can be spent on lesson enrichment rather than on overcoming technical hurdles. The NASA Global Imagery Browse Services (GIBS) system and NASA Worldview website provide a unique view into EOS data through daily full-resolution visualizations of hundreds of Earth science parameters. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. Accompanying the visualizations are visual aids such as color legends, place names, and orbit tracks. By using these visualizations, educators and students can observe natural phenomena that enrich a scientific education. This poster will provide an overview of the visualizations available in NASA GIBS and Worldview and how they are accessed. We invite discussion on how the visualizations can be used or improved for educational purposes.
NASA Technical Reports Server (NTRS)
Holmes, B. J.; Gall, P. D.; Croom, C. C.; Manuel, G. S.; Kelliher, W. C.
1986-01-01
The visualization of laminar to turbulent boundary layer transition plays an important role in flight and wind-tunnel aerodynamic testing of aircraft wing and body surfaces. Visualization can help provide a more complete understanding of both transition location and transition modes; without visualization, the transition process can be very difficult to understand. In the past, the most valuable transition visualization methods for flight applications included sublimating chemicals and oil flows. Each method has advantages and limitations. In particular, sublimating chemicals are impractical to use in subsonic applications much above 20,000 feet because of the greatly reduced rates of sublimation at lower temperatures (less than -4 degrees Fahrenheit). Both oil flow and sublimating chemicals have the disadvantage of providing only one good data point per flight. Thus, for many important flight conditions, transition visualization has not been readily available. This paper discusses a new method for visualizing transition in flight by the use of liquid crystals. The new method overcomes the limitations of past techniques and provides transition visualization capability throughout almost the entire altitude and speed ranges of virtually all subsonic aircraft flight envelopes. The method also has wide applicability for supersonic transition visualization in flight and for general use in wind tunnel research over wide subsonic and supersonic speed ranges.
Subjective and objective evaluation of visual fatigue on viewing 3D display continuously
NASA Astrophysics Data System (ADS)
Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang
2015-03-01
In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause extra problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators and objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate the visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with a previous study. The results show that the subjective visual fatigue and PERCLOS increase with time, and they are greater in a continuous process than in a discrete one. The BF increased with time during the continuous viewing process. Besides, the visual fatigue also induced significant changes of VRT, CFF and PMA.
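The two ocular measures recorded during viewing are simple to compute from an eyelid-closure signal. A minimal sketch (the 0.8 closure threshold and the synthetic signal are assumptions; PERCLOS definitions vary across studies):

```python
import numpy as np

def perclos_and_blinks(closure, fs, closed_thresh=0.8):
    """Compute PERCLOS and blink frequency from an eyelid-closure signal.

    closure: fraction of eye closed per sample, in [0, 1]; fs: sampling rate (Hz).
    PERCLOS = proportion of samples with closure above `closed_thresh`.
    A blink is counted at each rising crossing of the threshold.
    """
    closed = closure >= closed_thresh
    perclos = closed.mean()
    blinks = np.count_nonzero(closed[1:] & ~closed[:-1])  # rising edges
    duration_min = len(closure) / fs / 60.0
    return perclos, blinks / duration_min  # (fraction, blinks per minute)

# Synthetic 10 s recording at 30 Hz with three brief blinks of 5 samples each
fs = 30
sig = np.zeros(10 * fs)
for start in (60, 150, 240):
    sig[start:start + 5] = 1.0
p, bf = perclos_and_blinks(sig, fs)
```

For the signal above, p = 15/300 = 0.05 and bf = 3 blinks over 10 s = 18 blinks/min.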
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
Kelley, James J; Maor, Shay; Kim, Min Kyung; Lane, Anatoliy; Lun, Desmond S
2017-08-15
Visualization of metabolites, reactions and pathways in genome-scale metabolic networks (GEMs) can assist in understanding cellular metabolism. Three attributes are desirable in software used for visualizing GEMs: (i) automation, since GEMs can be quite large; (ii) production of understandable maps that provide ease in identification of pathways, reactions and metabolites; and (iii) visualization of the entire network to show how pathways are interconnected. No software currently exists for visualizing GEMs that satisfies all three characteristics, but MOST-Visualization, an extension of the software package MOST (Metabolic Optimization and Simulation Tool), satisfies (i), and by using a pre-drawn overview map of metabolism based on the Roche map satisfies (ii) and comes close to satisfying (iii). MOST is distributed free of charge under the GNU General Public License. The software and full documentation are available at http://most.ccib.rutgers.edu/. dslun@rutgers.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Orssaud, C
2014-06-01
Amblyopia is a developmental disorder of the entire visual system, including the extra-striate cortex. It manifests mainly by impaired visual acuity in the amblyopic eye. However, other abnormalities of visual function can be observed, such as decreased contrast sensitivity and stereoscopic vision, and some abnormalities can be found in the "good" eye. Amblyopia occurs during the critical period of brain development. It may be due to organic pathology of the visual pathways, visual deprivation or functional abnormalities, mainly anisometropia or strabismus. The diagnosis of amblyopia must be confirmed prior to treatment. Confirmation is based on cycloplegic refraction, visual acuity measurement and orthoptic assessment. However, screening for amblyopia and associated risk factors permits earlier diagnosis and treatment. The younger the child, the more effective the treatment, and it can only be achieved during the critical period. It requires parental cooperation in order to be effective and is based on occlusion or penalization of the healthy eye. The amblyopic eye may then develop better vision. Maintenance therapy must be performed until the end of the critical period to avoid recurrence. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
High-resolution eye tracking using V1 neuron activity
McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.
2014-01-01
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
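The inference procedure pairs a stimulus-processing model with a search over candidate eye positions: the position whose model-predicted responses best explain the recorded activity wins. A toy one-dimensional version (the linear receptive fields, Poisson likelihood, and pixel-shift search are simplifying assumptions; the paper uses nonlinear models and finer-grained inference):

```python
import numpy as np

def infer_eye_shift(stimulus, rates, rf, shifts):
    """Infer eye position by maximizing a Poisson log-likelihood over shifts.

    Each candidate shift moves the stimulus on the 'retina'; the shift whose
    predicted firing rates best explain the observed rates is returned.
    rf: (n_pixels, n_neurons) linear receptive fields.
    """
    best, best_ll = None, -np.inf
    for dx in shifts:
        shifted = np.roll(stimulus, dx, axis=1)
        pred = np.maximum(shifted.ravel() @ rf, 1e-6)   # predicted rates > 0
        ll = np.sum(rates * np.log(pred) - pred)        # Poisson log-likelihood
        if ll > best_ll:
            best, best_ll = dx, ll
    return best

rng = np.random.default_rng(3)
stim = rng.normal(size=(16, 16))        # known stimulus frame
rf = rng.normal(size=(256, 8))          # 8 hypothetical V1 neurons
true_shift = 3                          # unknown horizontal eye displacement
obs = np.maximum(np.roll(stim, true_shift, axis=1).ravel() @ rf, 1e-6)
est = infer_eye_shift(stim, obs, rf, shifts=range(-5, 6))
```

With noiseless observations the likelihood peaks exactly at the true shift; with real spiking data the same argmax principle applies, just with a noisier likelihood surface.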
Extralenticular and lenticular aspects of accommodation and presbyopia in human versus monkey eyes.
Croft, Mary Ann; McDonald, Jared P; Katz, Alexander; Lin, Ting-Li; Lütjen-Drecoll, Elke; Kaufman, Paul L
2013-07-26
To determine if the accommodative forward movements of the vitreous zonule and lens equator occur in the human eye, as they do in the rhesus monkey eye; to investigate the connection between the vitreous zonule posterior insertion zone and the posterior lens equator; and to determine which components (muscle apex width, lens thickness, lens equator position, vitreous zonule, circumlental space, and/or other intraocular dimensions, including those stated in the objectives above) are most important in predicting accommodative amplitude and presbyopia. Accommodation was induced pharmacologically in 12 visually normal human subjects (ages 19-65 years) and by midbrain electrical stimulation in 11 rhesus monkeys (ages 6-27 years). Ultrasound biomicroscopy imaged the entire ciliary body, anterior and posterior lens surfaces, and the zonule. Relevant distances were measured in the resting and accommodated eyes. Stepwise regression analysis determined which variables were the most important predictors. The human vitreous zonule and lens equator move forward (anteriorly) during accommodation, and their movements decline with age, as in the monkey. Over all ages studied, age could explain accommodative amplitude, but not as well as accommodative lens thickening and resting muscle apex thickness did together. Accommodative changes in distances between the vitreous zonule insertion zone and the posterior lens equator or muscle apex were important for predicting accommodative lens thickening. Our findings quantify the movements of the zonule and ciliary muscle during accommodation, and identify their age-related changes that could impact the optical change that occurs during accommodation and IOL function.
Bouma, Wobbe; Jainandunsing, Jayant S; Khamooshian, Arash; van der Harst, Pim; Mariani, Massimo A; Natour, Ehsan
2017-02-01
A thorough understanding of mitral and aortic valve motion dynamics is essential in mastering the skills necessary for performing successful valve intervention (open or transcatheter repair or replacement). We describe a reproducible and versatile beating-heart mitral and aortic valve assessment and valve intervention training model in human cadavers. The model is constructed by bilateral ligation of the pulmonary veins, ligation of the supra-aortic arteries, creating a shunt between the descending thoracic aorta and the left atrial appendage with a vascular prosthesis, anastomosing a vascular prosthesis to the apex and positioning an intra-aortic balloon pump (IABP) in the vascular prosthesis, cross-clamping the descending thoracic aorta, and finally placing a fluid line in the shunt prosthesis. The left ventricle is filled with saline to the desired pressure through the fluid line, and the IABP is switched on and set to a desired frequency (usually 60-80 bpm). Prerepair valve dynamic motion can be studied under direct endoscopic visualization. After assessment, the IABP is switched off, and valve intervention training can be performed using standard techniques. This high-fidelity simulation model has known limitations, but provides a realistic environment with an actual beating (human) heart, which is of incremental value. The model provides a unique opportunity to fill a beating heart with saline and to study prerepair mitral and aortic valve dynamic motion under direct endoscopic visualization. The entire set-up provides a versatile beating-heart mitral and aortic valve assessment model, which may have important implications for future valve intervention training. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
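Comparing a model's saliency map to eye-tracking data typically means turning discrete fixations into a smoothed density map and scoring agreement with a standard metric. A sketch of the correlation-coefficient (CC) metric; the Gaussian width and map sizes are illustrative assumptions, and the paper also uses other metrics:

```python
import numpy as np

def fixation_map(fixations, shape, sigma=2.0):
    """Turn (row, col) fixation points into a smoothed density map."""
    m = np.zeros(shape)
    for r, c in fixations:
        m[r, c] += 1.0
    # cheap separable Gaussian blur via 1-D convolutions
    x = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    m = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, m)
    m = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, m)
    return m

def saliency_score(model_map, fixations):
    """Pearson correlation (CC) between model map and fixation density map.

    CC is one standard saliency-evaluation metric; NSS and AUC are others.
    """
    f = fixation_map(fixations, model_map.shape)
    return np.corrcoef(model_map.ravel(), f.ravel())[0, 1]
```

A model whose peaks coincide with where viewers actually fixated scores near 1; a map that highlights the wrong regions scores near 0 or below.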
The multisensory function of the human primary visual cortex.
Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J
2016-03-01
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex. Copyright © 2015 Elsevier Ltd. All rights reserved.
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. As a hot and difficult problem, object classification has received extensive interest and has broad prospects. Inspired by neuroscience, the concept of deep learning was proposed; convolutional neural networks (CNNs), as one method of deep learning, can be used to solve the classification problem. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has apparent advantages in biological plausibility. Experimental results demonstrate that our method improves classification performance significantly.
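The attend-then-classify pipeline can be sketched as follows. Local variance stands in for the paper's trained attention model, and a fixed random projection stands in for the learned CNN features; both stand-ins, the image, and the 10-class output are assumptions for illustration:

```python
import numpy as np

def most_salient_patch(image, patch=8):
    """Attention stage: pick the non-overlapping patch with highest variance.

    Local variance is a crude stand-in for a learned saliency model; the
    point is that classification operates on the attended region only.
    """
    h, w = image.shape
    best, best_v = (0, 0), -1.0
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            v = image[r:r + patch, c:c + patch].var()
            if v > best_v:
                best, best_v = (r, c), v
    r, c = best
    return image[r:r + patch, c:c + patch]

rng = np.random.default_rng(4)
img = np.zeros((32, 32))
img[8:16, 16:24] = rng.normal(size=(8, 8))   # one textured (salient) region
crop = most_salient_patch(img, patch=8)      # attention selects that region

# The crop would then be fed to a CNN; a random linear map stands in here.
W = rng.normal(size=(64, 10))                # stub features -> 10 class logits
pred_class = int(np.argmax(crop.ravel() @ W))
```

A real system would replace both stand-ins with trained models and append the semantic features the abstract mentions before the final decision.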
NASA Astrophysics Data System (ADS)
Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan
2008-01-01
Higher-order aberration correction can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained from higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic real-time optimized modal compensation was used to implement various customized higher-order ocular aberration correction strategies. The experimental results indicate that higher-order aberration correction can improve the visual performance of the human eye compared with lower-order aberration correction alone, but the degree of improvement and the appropriate correction strategy differ across individuals. Some subjects acquired great visual benefit when higher-order aberrations were corrected, whereas others acquired little visual benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order aberration correction strategy, a customized higher-order aberration correction strategy is needed to obtain optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry for customized ocular aberration correction.
Effects of simulator motion and visual characteristics on rotorcraft handling qualities evaluations
NASA Technical Reports Server (NTRS)
Mitchell, David G.; Hart, Daniel C.
1993-01-01
The pilot's perceptions of aircraft handling qualities are influenced by a combination of the aircraft dynamics, the task, and the environment under which the evaluation is performed. When the evaluation is performed in a ground-based simulator, the characteristics of the simulation facility also come into play. Two studies were conducted on NASA Ames Research Center's Vertical Motion Simulator to determine the effects of simulator characteristics on perceived handling qualities. Most evaluations were conducted with a baseline set of rotorcraft dynamics, using a simple transfer-function model of an uncoupled helicopter, under different conditions of visual time delays and motion command washout filters. Differences in pilot opinion were found as the visual and motion parameters were changed, reflecting a change in the pilots' perceptions of handling qualities, rather than changes in the aircraft model itself. The results indicate a need for tailoring the motion washout dynamics to suit the task. Visual-delay data are inconclusive but suggest that it may be better to allow some time delay in the visual path to minimize the mismatch between visual and motion cues, rather than eliminate the visual delay entirely through lead compensation.
Metabolic Mapping of the Brain's Response to Visual Stimulation: Studies in Humans.
ERIC Educational Resources Information Center
Phelps, Michael E.; Kuhl, David E.
1981-01-01
Studies demonstrate increasing glucose metabolic rates in human primary (PVC) and association (AVC) visual cortex as the complexity of visual scenes increases. AVC activity increased more rapidly with scene complexity than PVC activity, and local metabolic activity rose above that of control subjects with eyes closed; this indicates the wide range and metabolic reserve of visual…
Development of Flexible Visual Recognition Memory in Human Infants
ERIC Educational Resources Information Center
Robinson, Astri J.; Pascalis, Olivier
2004-01-01
Research using the visual paired comparison task has shown that visual recognition memory across changing contexts is dependent on the integrity of the hippocampal formation in human adults and in monkeys. The acquisition of contextual flexibility may contribute to the change in memory performance that occurs late in the first year of life. To…
Virtual Reality Educational Tool for Human Anatomy.
Izard, Santiago González; Juanes Méndez, Juan A; Palomera, Pablo Ruisoto
2017-05-01
Virtual Reality is becoming widespread in our society within very different areas, from industry to entertainment. It has many advantages in education as well, since it allows visualizing almost any object or going anywhere in a unique way. We focus on medical education, and more specifically anatomy, where its use is especially interesting because it allows studying any structure of the human body by placing the user inside each one. By allowing virtual immersion in a body structure such as the interior of the cranium, stereoscopic vision goggles make these innovative teaching technologies a powerful tool for training in all areas of health sciences. The aim of this study is to illustrate the teaching potential of applying Virtual Reality in the field of human anatomy, where it can be used as a tool for education in medicine. Virtual Reality software was developed as an educational tool. This technological procedure is based entirely on software that runs in stereoscopic goggles to give users the sensation of being in a virtual environment, clearly showing the different bones and foramina that make up the cranium, accompanied by audio explanations. In the results, the structure of the cranium is described in detail from both inside and out. The importance of exhaustive morphological knowledge of the cranial fossae is further discussed, and the application to the design of microsurgery is also commented on.
Enhanced skin permeation of naltrexone by pulsed electromagnetic fields in human skin in vitro.
Krishnan, Gayathri; Edwards, Jeffrey; Chen, Yan; Benson, Heather A E
2010-06-01
The aim of the present study was to evaluate the skin permeation of naltrexone (NTX) under the influence of a pulsed electromagnetic field (PEMF). The permeation of NTX across human epidermis and a silicone membrane in vitro was monitored during and after application of the PEMF and compared to passive application. Enhancement ratios of NTX human epidermis permeation by PEMF over passive diffusion, calculated based on the AUC of cumulative NTX permeation to the receptor compartment versus time for 0-4 h, 4-8 h, and over the entire experiment (0-8 h), were 6.52, 5.25, and 5.66, respectively. Observation of the curve indicated an initial enhancement of NTX permeation compared to passive delivery whilst the PEMF was active (0-4 h). This was followed by a secondary phase after termination of PEMF energy (4-8 h) in which there was a steady increase in NTX permeation. No significant enhancement of NTX penetration across silicone membrane occurred with PEMF application in comparison to passively applied NTX. In a preliminary experiment PEMF enhanced the penetration of 10 nm gold nanoparticles through the stratum corneum as visualized by multiphoton microscopy. This suggests that the channels through which the nanoparticles move must be larger than the 10 nm diameter of these rigid particles. (c) 2009 Wiley-Liss, Inc. and the American Pharmacists Association
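The AUC-based enhancement-ratio calculation described in this abstract can be sketched as follows. This is a minimal illustration of the arithmetic only; the time points and cumulative permeation values below are hypothetical placeholders, not the study's data.

```python
def auc_trapezoid(times, values):
    """Area under a cumulative-permeation-versus-time curve, trapezoidal rule."""
    return sum(
        (t2 - t1) * (v1 + v2) / 2.0
        for (t1, v1), (t2, v2) in zip(zip(times, values), zip(times[1:], values[1:]))
    )

def enhancement_ratio(times, active, passive):
    """AUC of active delivery (e.g. under PEMF) over AUC of passive diffusion."""
    return auc_trapezoid(times, active) / auc_trapezoid(times, passive)

# Illustrative cumulative NTX permeation (ug/cm^2) at hourly time points (h)
times   = [0, 1, 2, 3, 4]
pemf    = [0, 10, 25, 45, 70]
passive = [0, 2, 5, 8, 12]
print(round(enhancement_ratio(times, pemf, passive), 2))  # 5.48
```

The same ratio can be computed over any sub-interval (e.g. 0-4 h versus 4-8 h) by slicing the time series before calling `enhancement_ratio`.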
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen
2017-09-01
Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera-engaged (i.e., depicted human figure looking toward the camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to accurately and objectively measure visual fixations. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and fixate more on objects with which a human figure was task-engaged than when a human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task engagement appears to have a guiding effect on visual attention that may be of benefit to SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age- and gender-matched peers without neurological impairments indicate that these two groups find similar photograph regions to be worthy of visual fixation. Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task engagement may be a beneficial component of contextual photographs.
Copyright © 2017 Elsevier Inc. All rights reserved.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
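The abstract does not spell out the metrics themselves, but the core idea of a salience-based "noticeability" score can be sketched as follows. This is a plausible minimal illustration, not the authors' formula: it assumes a saliency map has already been produced by some computational attention model, and the function name and example data are hypothetical.

```python
def noticeability(saliency, mask):
    """Mean visual salience over the inpainted region (mask == 1).

    Given a flattened saliency map (values in 0..1) for the inpainted image
    and a matching binary mask of the filled-in pixels, a higher mean salience
    inside the mask suggests the repaired region draws the eye and is more
    likely to be noticed by a human observer.
    """
    region = [s for s, m in zip(saliency, mask) if m]
    return sum(region) / len(region)

# Illustrative flattened saliency map and inpainting mask (hypothetical values)
saliency = [0.1, 0.9, 0.2, 0.8, 0.1, 0.1]
mask     = [0,   1,   0,   1,   0,   0]
print(noticeability(saliency, mask))
```

A "visual intent" style metric would additionally compare salience inside the region before and after inpainting, penalizing fills that redirect attention away from the scene's original focal points.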
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. To produce an effective and efficient training scene, the visual system requires a greater understanding of training tasks, human factors, and the concept of image realism than other types of visual systems do. Image realism must be defined in terms of the pilot's visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual art involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion due to instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' implying motion by depicting human bodies engaged in challenging tonic postures significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas an illustration that does not imply motion, for either humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion in instability.
Evaluation of Blalock-Taussig shunts in newborns: value of oblique MRI planes.
Kastler, B; Livolsi, A; Germain, P; Zöllner, G; Dietemann, J L
1991-01-01
Eight infants with systemic-pulmonary Blalock-Taussig shunts were evaluated by spin-echo ECG-gated MRI. In contrast to echocardiography, MRI using coronal oblique projections successfully visualized all palliative shunts entirely in a single plane (including one carried out on a right aberrant subclavian artery). MRI allowed assessment of the size, course, and patency of each shunt, including its pulmonary and subclavian insertions. The proximal portions of the pulmonary and subclavian arteries were also visualized. We conclude that MRI with axial scans complemented by coronal oblique planes is a promising, noninvasive method for imaging the anatomical features of Blalock-Taussig shunts.
Thermoacoustic imaging of fresh prostates up to 6-cm diameter
NASA Astrophysics Data System (ADS)
Patch, S. K.; Hanson, E.; Thomas, M.; Kelly, H.; Jacobsohn, K.; See, W. A.
2013-03-01
Thermoacoustic (TA) imaging provides a novel contrast mechanism that may enable visualization of cancerous lesions which are not robustly detected by current imaging modalities. Prostate cancer (PCa) is the most notorious example. Imaging entire prostate glands requires 6 cm depth penetration. We therefore excite TA signal using submicrosecond VHF pulses (100 MHz). We will present reconstructions of fresh prostates imaged in a well-controlled benchtop TA imaging system. Chilled glycine solution is used as acoustic couplant. The urethra is routinely visualized as signal dropout; surgical staples formed from 100-micron wide wire bent to 3 mm length generate strong positive signal.
Comparison of Object Recognition Behavior in Human and Monkey
Rajalingham, Rishi; Schmidt, Kailyn
2015-01-01
Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. 
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
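The cross-species comparison in this study rests on correlating patterns of object confusion between monkeys and humans. A minimal sketch of that correlation step is below; the Pearson formula is standard, but the confusion rates shown are illustrative placeholders, not the study's data.

```python
def pearson(x, y):
    """Pearson correlation between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Off-diagonal confusion rates for a handful of object pairs
# (hypothetical values for illustration only)
human  = [0.12, 0.03, 0.08, 0.02, 0.15, 0.05]
monkey = [0.10, 0.04, 0.09, 0.01, 0.13, 0.06]
print(round(pearson(human, monkey), 3))
```

In practice each vector would hold one entry per (target, distractor) object pair, with the diagonal (correct responses) excluded, so the correlation reflects shared patterns of errors rather than overall accuracy.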
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function tests and interviews. A 40-inch double-lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured in the optometric tests were visual acuity, refraction, phoria, near point of vision, and accommodation. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near point of vision, a decrease in accommodation, and an increase in phoria. The 3D-viewing interview results showed much more visual fatigue than the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function tests and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.
Fern Graves; Thomas C. Baker; Aijun Zhang; Melody Keena; Kelli Hoover
2016-01-01
Anoplophora glabripennis has a complex suite of mate-finding behaviors, the functions of which are not entirely understood. These behaviors are elicited by a number of factors, including visual and chemical cues. Chemical cues include a male-produced volatile semiochemical acting as a long-range sex pheromone, and a female-produced cuticular hydrocarbon...
Describing Strategies Used by Elite, Intermediate, and Novice Ice Hockey Referees
ERIC Educational Resources Information Center
Hancock, David J.; Ste-Marie, Diane M.
2014-01-01
Much is known about sport officials' decisions (e.g., anticipation, visual search, and prior experience). Comprehension of the entire decision process, however, requires an ecologically valid examination. To address this, we implemented a 2-part study using an expertise paradigm with ice hockey referees. Purpose: Study 1 explored the…
Qualitative similarities in the visual short-term memory of pigeons and people.
Gibson, Brett; Wasserman, Edward; Luck, Steven J
2011-10-01
Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
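The abstract's "basic performance model" is not spelled out here; one standard model for single-probe change detection is Cowan's K, sketched below together with a simple lapse-rate adjustment of the kind the abstract alludes to. The parameter values are illustrative, not the study's data.

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K: estimated number of items held in visual short-term memory,
    from hit and false-alarm rates in a single-probe change detection task."""
    return set_size * (hit_rate - false_alarm_rate)

def lapse_adjusted_accuracy(p_model, lapse_rate):
    """Observed accuracy when a fraction of trials are attentional lapses,
    on which the subject guesses at chance (two-alternative task)."""
    return (1.0 - lapse_rate) * p_model + lapse_rate * 0.5

# Illustrative values: humans typically reach K of roughly 3-4 items;
# pigeons in this paradigm showed lower capacity and more lapses.
print(cowan_k(4, 0.90, 0.10))          # roughly 3.2 items
print(lapse_adjusted_accuracy(0.9, 0.1))
```

Fitting such a model to both species separately is what allows capacity and lapse rate to be compared while holding the functional form of memory constant.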
Zhou, Mowei; Paša-Tolić, Ljiljana; Stenoien, David L
2017-02-03
As histones play central roles in most chromosomal functions including regulation of DNA replication, DNA damage repair, and gene transcription, both their basic biology and their roles in disease development have been the subject of intense study. Because multiple post-translational modifications (PTMs) along the entire protein sequence are potential regulators of histones, a top-down approach, where intact proteins are analyzed, is ultimately required for complete characterization of proteoforms. However, significant challenges remain for top-down histone analysis primarily because of deficiencies in separation/resolving power and effective identification algorithms. Here we used state-of-the-art mass spectrometry and a bioinformatics workflow for targeted data analysis and visualization. The workflow uses ProMex for intact mass deconvolution, MSPathFinder as a search engine, and LcMsSpectator as a data visualization tool. When complemented with the open-modification tool TopPIC, this workflow enabled identification of novel histone PTMs including tyrosine bromination on histone H4 and H2A, H3 glutathionylation, and mapping of conventional PTMs along the entire protein for many histone subunits.
Kawai, Nobuyuki; He, Hongshen
2016-01-01
Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
The Use Of Computational Human Performance Modeling As Task Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
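The task-network idea above can be illustrated with a toy Monte Carlo sketch: a sequence of tasks, each with a random duration and a per-task error probability, simulated many times to estimate workload and the chance of at least one human error. The task names, durations, and error probabilities here are entirely hypothetical; the study's actual discrete-event model was far more detailed.

```python
import random

# Hypothetical task network for illustration:
# (name, mean duration in seconds, probability of human error on that task)
TASKS = [
    ("walk_to_canal",      30.0, 0.001),
    ("position_tool",      20.0, 0.010),
    ("grasp_fuel_element", 45.0, 0.020),
    ("transfer_element",   60.0, 0.015),
    ("visual_inspection",  90.0, 0.005),
]

def simulate_once(rng):
    """Run the task sequence once; return (total time, whether an error occurred)."""
    total, error = 0.0, False
    for _name, mean_t, p_err in TASKS:
        total += rng.expovariate(1.0 / mean_t)  # random task duration
        if rng.random() < p_err:
            error = True
    return total, error

def estimate_error_probability(n=20000, seed=42):
    """Monte Carlo estimate of the probability of at least one error per run."""
    rng = random.Random(seed)
    errors = sum(simulate_once(rng)[1] for _ in range(n))
    return errors / n

print(f"Estimated probability of at least one human error: "
      f"{estimate_error_probability():.3f}")
```

Extending `simulate_once` with branch points, concurrent sub-tasks, and workload accumulators is what turns this skeleton into the kind of task-network model the study describes.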
2011-08-01
Visualizations were generated using the Zygote Human Anatomy 3-D model, a reference anatomy independent of personal identification (Zygote Human Anatomy 3D Model, 2010; http://www.zygote.com/, accessed July 26, 2011).
Bakken, Trygve E; Roddey, J Cooper; Djurovic, Srdjan; Akshoomoff, Natacha; Amaral, David G; Bloss, Cinnamon S; Casey, B J; Chang, Linda; Ernst, Thomas M; Gruen, Jeffrey R; Jernigan, Terry L; Kaufmann, Walter E; Kenet, Tal; Kennedy, David N; Kuperman, Joshua M; Murray, Sarah S; Sowell, Elizabeth R; Rimol, Lars M; Mattingsdal, Morten; Melle, Ingrid; Agartz, Ingrid; Andreassen, Ole A; Schork, Nicholas J; Dale, Anders M; Weiner, Michael; Aisen, Paul; Petersen, Ronald; Jack, Clifford R; Jagust, William; Trojanowki, John Q; Toga, Arthur W; Beckett, Laurel; Green, Robert C; Saykin, Andrew J; Morris, John; Liu, Enchi; Montine, Tom; Gamst, Anthony; Thomas, Ronald G; Donohue, Michael; Walter, Sarah; Gessert, Devon; Sather, Tamie; Harvey, Danielle; Kornak, John; Dale, Anders; Bernstein, Matthew; Felmlee, Joel; Fox, Nick; Thompson, Paul; Schuff, Norbert; Alexander, Gene; DeCarli, Charles; Bandy, Dan; Koeppe, Robert A; Foster, Norm; Reiman, Eric M; Chen, Kewei; Mathis, Chet; Cairns, Nigel J; Taylor-Reinwald, Lisa; Trojanowki, J Q; Shaw, Les; Lee, Virginia M Y; Korecka, Magdalena; Crawford, Karen; Neu, Scott; Foroud, Tatiana M; Potkin, Steven; Shen, Li; Kachaturian, Zaven; Frank, Richard; Snyder, Peter J; Molchan, Susan; Kaye, Jeffrey; Quinn, Joseph; Lind, Betty; Dolen, Sara; Schneider, Lon S; Pawluczyk, Sonia; Spann, Bryan M; Brewer, James; Vanderswag, Helen; Heidebrink, Judith L; Lord, Joanne L; Johnson, Kris; Doody, Rachelle S; Villanueva-Meyer, Javier; Chowdhury, Munir; Stern, Yaakov; Honig, Lawrence S; Bell, Karen L; Morris, John C; Ances, Beau; Carroll, Maria; Leon, Sue; Mintun, Mark A; Schneider, Stacy; Marson, Daniel; Griffith, Randall; Clark, David; Grossman, Hillel; Mitsis, Effie; Romirowsky, Aliza; deToledo-Morrell, Leyla; Shah, Raj C; Duara, Ranjan; Varon, Daniel; Roberts, Peggy; Albert, Marilyn; Onyike, Chiadi; Kielb, Stephanie; Rusinek, Henry; de Leon, Mony J; Glodzik, Lidia; De Santi, Susan; Doraiswamy, P Murali; Petrella, Jeffrey R; Coleman, R Edward; Arnold, 
Steven E; Karlawish, Jason H; Wolk, David; Smith, Charles D; Jicha, Greg; Hardy, Peter; Lopez, Oscar L; Oakley, MaryAnn; Simpson, Donna M; Porsteinsson, Anton P; Goldstein, Bonnie S; Martin, Kim; Makino, Kelly M; Ismail, M Saleem; Brand, Connie; Mulnard, Ruth A; Thai, Gaby; Mc-Adams-Ortiz, Catherine; Womack, Kyle; Mathews, Dana; Quiceno, Mary; Diaz-Arrastia, Ramon; King, Richard; Weiner, Myron; Martin-Cook, Kristen; DeVous, Michael; Levey, Allan I; Lah, James J; Cellar, Janet S; Burns, Jeffrey M; Anderson, Heather S; Swerdlow, Russell H; Apostolova, Liana; Lu, Po H; Bartzokis, George; Silverman, Daniel H S; Graff-Radford, Neill R; Parfitt, Francine; Johnson, Heather; Farlow, Martin R; Hake, Ann Marie; Matthews, Brandy R; Herring, Scott; van Dyck, Christopher H; Carson, Richard E; MacAvoy, Martha G; Chertkow, Howard; Bergman, Howard; Hosein, Chris; Black, Sandra; Stefanovic, Bojana; Caldwell, Curtis; Ging-Yuek; Hsiung, Robin; Feldman, Howard; Mudge, Benita; Assaly, Michele; Kertesz, Andrew; Rogers, John; Trost, Dick; Bernick, Charles; Munic, Donna; Kerwin, Diana; Mesulam, Marek-Marsel; Lipowski, Kristina; Wu, Chuang-Kuo; Johnson, Nancy; Sadowsky, Carl; Martinez, Walter; Villena, Teresa; Turner, Raymond Scott; Johnson, Kathleen; Reynolds, Brigid; Sperling, Reisa A; Johnson, Keith A; Marshall, Gad; Frey, Meghan; Yesavage, Jerome; Taylor, Joy L; Lane, Barton; Rosen, Allyson; Tinklenberg, Jared; Sabbagh, Marwan; Belden, Christine; Jacobson, Sandra; Kowall, Neil; Killiany, Ronald; Budson, Andrew E; Norbash, Alexander; Johnson, Patricia Lynn; Obisesan, Thomas O; Wolday, Saba; Bwayo, Salome K; Lerner, Alan; Hudson, Leon; Ogrocki, Paula; Fletcher, Evan; Carmichael, Owen; Olichney, John; Kittur, Smita; Borrie, Michael; Lee, T-Y; Bartha, Rob; Johnson, Sterling; Asthana, Sanjay; Carlsson, Cynthia M; Potkin, Steven G; Preda, Adrian; Nguyen, Dana; Tariot, Pierre; Fleisher, Adam; Reeder, Stephanie; Bates, Vernice; Capote, Horacio; Rainka, Michelle; Scharre, Douglas W; Kataki, 
Maria; Zimmerman, Earl A; Celmins, Dzintra; Brown, Alice D; Pearlson, Godfrey D; Blank, Karen; Anderson, Karen; Santulli, Robert B; Schwartz, Eben S; Sink, Kaycee M; Williamson, Jeff D; Garg, Pradeep; Watkins, Franklin; Ott, Brian R; Querfurth, Henry; Tremont, Geoffrey; Salloway, Stephen; Malloy, Paul; Correia, Stephen; Rosen, Howard J; Miller, Bruce L; Mintzer, Jacobo; Longmire, Crystal Flynn; Spicer, Kenneth; Finger, Elizabether; Rachinsky, Irina; Drost, Dick; Jernigan, Terry; McCabe, Connor; Grant, Ellen; Ernst, Thomas; Kuperman, Josh; Chung, Yoon; Murray, Sarah; Bloss, Cinnamon; Darst, Burcu; Pritchett, Lexi; Saito, Ashley; Amaral, David; DiNino, Mishaela; Eyngorina, Bella; Sowell, Elizabeth; Houston, Suzanne; Soderberg, Lindsay; Kaufmann, Walter; van Zijl, Peter; Rizzo-Busack, Hilda; Javid, Mohsin; Mehta, Natasha; Ruberry, Erika; Powers, Alisa; Rosen, Bruce; Gebhard, Nitzah; Manigan, Holly; Frazier, Jean; Kennedy, David; Yakutis, Lauren; Hill, Michael; Gruen, Jeffrey; Bosson-Heenan, Joan; Carlson, Heatherly
2012-03-06
Visual cortical surface area varies two- to threefold between human individuals, is highly heritable, and has been correlated with visual acuity and visual perception. However, it is still largely unknown what specific genetic and environmental factors contribute to normal variation in the area of visual cortex. To identify SNPs associated with the proportional surface area of visual cortex, we performed a genome-wide association study followed by replication in two independent cohorts. We identified one SNP (rs6116869) that replicated in both cohorts and had genome-wide significant association (P(combined) = 3.2 × 10(-8)). Furthermore, a meta-analysis of imputed SNPs in this genomic region identified a more significantly associated SNP (rs238295; P = 6.5 × 10(-9)) that was in strong linkage disequilibrium with rs6116869. These SNPs are located within 4 kb of the 5' UTR of GPCPD1, glycerophosphocholine phosphodiesterase GDE1 homolog (Saccharomyces cerevisiae), which, in humans, is more highly expressed in occipital cortex compared with the remainder of cortex than 99.9% of genes genome-wide. Based on these findings, we conclude that this common genetic variation contributes to the proportional area of human visual cortex. We suggest that identifying genes that contribute to normal cortical architecture provides a first step to understanding genetic mechanisms that underlie visual perception.
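The "genome-wide significant" claim above follows the conventional Bonferroni-style threshold of 5 × 10(-8) (0.05 corrected for roughly one million independent tests); a trivial sketch using the P values quoted in the abstract:

```python
def genome_wide_significant(p, n_tests=1_000_000, alpha=0.05):
    """Bonferroni-style genome-wide significance check.

    The conventional threshold alpha / n_tests = 5e-8 assumes roughly one
    million independent common variants across the genome.
    """
    return p < alpha / n_tests

print(genome_wide_significant(3.2e-8))  # rs6116869 combined P -> True
print(genome_wide_significant(6.5e-9))  # rs238295 imputed P   -> True
```

Both SNPs reported in the abstract clear the threshold, which is why replication across two independent cohorts, rather than the P value alone, carries much of the evidential weight.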
Three-Dimensional Displays In The Future Flight Station
NASA Astrophysics Data System (ADS)
Bridges, Alan L.
1984-10-01
This review paper summarizes the development and applications of computer techniques for the representation of three-dimensional data in the future flight station. It covers the development of the Lockheed-NASA Advanced Concepts Flight Station (ACFS) research simulators. These simulators contain: A Pilot's Desk Flight Station (PDFS) with five 13-inch diagonal, color, cathode ray tubes on the main instrument panel; a computer-generated day and night visual system; a six-degree-of-freedom motion base; and a computer complex. This paper reviews current research, development, and evaluation of easily modifiable display systems and software requirements for three-dimensional displays that may be developed for the PDFS. This includes the analysis and development of a 3-D representation of the entire flight profile. This 3-D flight path, or "Highway-in-the-Sky", will utilize motion and perspective cues to tightly couple the human responses of the pilot to the aircraft control systems. The use of custom logic, e.g., graphics engines, may provide the processing power and architecture required for 3-D computer-generated imagery (CGI) or visual scene simulation (VSS). Diffraction or holographic head-up displays (HUDs) will also be integrated into the ACFS simulator to permit research on the requirements and use of these "out-the-window" projection systems. Future research may include the retrieval of high-resolution, perspective view terrain maps which could then be overlaid with current weather information or other selectable cultural features.
NASA Technical Reports Server (NTRS)
Hopkins, William D.; Washburn, David A.; Rumbaugh, Duane M.
1990-01-01
Visual forms were unilaterally presented using a video-task paradigm to ten humans, chimpanzees, and two rhesus monkeys to determine whether hemispheric advantages existed in the processing of these stimuli. Both accuracy and reaction time served as dependent measures. For the chimpanzees, a significant right hemisphere advantage was found within the first three test sessions. The humans and monkeys failed to show a hemispheric advantage as determined by accuracy scores. Analysis of reaction time data revealed a significant left hemisphere advantage for the monkeys. A visual half-field x block interaction was found for the chimpanzees, with a significant left visual field advantage in block two, whereas a right visual field advantage was found in block four. In the human subjects, a left visual field advantage was found in block three when they used their right hands to respond. The results are discussed in relation to recent reports of hemispheric advantages for nonhuman primates.
Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System
Ajina, Sara; Bridge, Holly
2017-01-01
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337
Peng, Shuang; Bie, Binglin; Sun, Yangzesheng; Liu, Min; Cong, Hengjiang; Zhou, Wentao; Xia, Yucong; Tang, Heng; Deng, Hexiang; Zhou, Xiang
2018-04-03
Effective transfection of genetic molecules such as DNA usually relies on vectors that can reversibly uptake and release these molecules, and protect them from digestion by nuclease. Non-viral vectors meeting these requirements are rare due to the lack of specific interactions with DNA. Here, we design a series of four isoreticular metal-organic frameworks (Ni-IRMOF-74-II to -V) with progressively tuned pore size from 2.2 to 4.2 nm to precisely include single-stranded DNA (ssDNA, 11-53 nt), and to achieve reversible interaction between MOFs and ssDNA. The entire nucleic acid chain is completely confined inside the pores providing excellent protection, and the geometric distribution of the confined ssDNA is visualized by X-ray diffraction. Two MOFs in this series exhibit excellent transfection efficiency in mammalian immune cells, 92% in the primary mouse immune cells (CD4+ T cell) and 30% in human immune cells (THP-1 cell), unrivaled by the commercialized agents (Lipo and Neofect).
Wind turbine remote control using Android devices
NASA Astrophysics Data System (ADS)
Rat, C. L.; Panoiu, M.
2018-01-01
This paper describes the remote control of a wind turbine system over the internet using an Android device, namely a tablet or a smartphone. The wind turbine workstation contains a LabVIEW program which monitors the entire wind turbine energy conversion system (WECS). The Android device connects to the LabVIEW application, working as a remote interface to the wind turbine. The communication between the devices needs to be secured because it takes place over the internet. Hence, the data are encrypted before being sent through the network. The scope was the design of remote control software capable of visualizing real-time wind turbine data through a secure connection. Since the WECS is fully automated and no full-time human operator exists, unattended access to the turbine workstation is needed. Therefore the device must not require any confirmation or permission from the computer operator in order to control it. Another condition is that the Android application does not have any root requirements.
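The encrypt-before-send step described above can be sketched in a few lines. The paper does not specify a cipher, so the keystream construction below is purely illustrative (a real deployment would use a vetted scheme such as AES-GCM or TLS); the message fields are hypothetical turbine readings.

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key + nonce + counter.
    Illustrative only -- not a substitute for AES-GCM or TLS."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)            # fresh nonce per message
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(key: bytes, message: bytes) -> bytes:
    nonce, ciphertext = message[:16], message[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))

# Hypothetical telemetry record sent from the LabVIEW workstation.
key = secrets.token_bytes(32)
reading = b'{"wind_speed": 12.4, "power_kw": 850.0}'
wire = encrypt(key, reading)
restored = decrypt(key, wire)
```

A shared key lets the tablet decrypt readings and the workstation decrypt commands without any per-request operator confirmation, matching the unattended-access requirement.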
Status of peatland degradation and development in Sumatra and Kalimantan.
Miettinen, Jukka; Liew, Soo Chin
2010-01-01
Peatlands cover around 13 Mha in Sumatra and Kalimantan, Indonesia. Human activities have rapidly increased in the peatland ecosystems during the last two decades, invariably degrading them and making them vulnerable to fires. This causes high carbon emissions that contribute to global climate change. For this article, we used 94 high resolution (10-20 m) satellite images to map the status of peatland degradation and development in Sumatra and Kalimantan using visual image interpretation. The results reveal that less than 4% of the peatland areas remain covered by pristine peatswamp forests (PSFs), while 37% are covered by PSFs with varying degree of degradation. Furthermore, over 20% is considered to be unmanaged degraded landscape, occupied by ferns, shrubs and secondary growth. This alarming extent of degradation makes peatlands vulnerable to accelerated peat decomposition and catastrophic fire episodes that will have global consequences. With on-going degradation and development the existence of the entire tropical peatland ecosystem in this region is in great danger.
Massive and Reproducible Production of Liver Buds Entirely from Human Pluripotent Stem Cells.
Takebe, Takanori; Sekine, Keisuke; Kimura, Masaki; Yoshizawa, Emi; Ayano, Satoru; Koido, Masaru; Funayama, Shizuka; Nakanishi, Noriko; Hisai, Tomoko; Kobayashi, Tatsuya; Kasai, Toshiharu; Kitada, Rina; Mori, Akira; Ayabe, Hiroaki; Ejiri, Yoko; Amimoto, Naoki; Yamazaki, Yosuke; Ogawa, Shimpei; Ishikawa, Momotaro; Kiyota, Yasujiro; Sato, Yasuhiko; Nozawa, Kohei; Okamoto, Satoshi; Ueno, Yasuharu; Taniguchi, Hideki
2017-12-05
Organoid technology provides a revolutionary paradigm toward therapy but has yet to be applied in humans, mainly because of reproducibility and scalability challenges. Here, we overcome these limitations by evolving a scalable organ bud production platform entirely from human induced pluripotent stem cells (iPSC). By conducting massive "reverse" screen experiments, we identified three progenitor populations that can effectively generate liver buds in a highly reproducible manner: hepatic endoderm, endothelium, and septum mesenchyme. Furthermore, we achieved human scalability by developing an omni-well-array culture platform for mass producing homogeneous and miniaturized liver buds on a clinically relevant large scale (>10⁸). Vascularized and functional liver tissues generated entirely from iPSCs significantly improved subsequent hepatic functionalization potentiated by stage-matched developmental progenitor interactions, enabling functional rescue against acute liver failure via transplantation. Overall, our study provides a stringent manufacturing platform for multicellular organoid supply, thus facilitating clinical and pharmaceutical applications especially for the treatment of liver diseases through multi-industrial collaborations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Search of the Deep and Dark Web via DARPA Memex
NASA Astrophysics Data System (ADS)
Mattmann, C. A.
2015-12-01
Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search; Nutch has also spawned entire sub-communities, including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and allow us to address complex search problems, including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection, and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls.
With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and plotting techniques for exploring crawled information. The platform will classify, identify, and help thwart predators, find victims, and identify buyers in human trafficking, and it will deliver technological superiority in search engines for DARPA. We are already transitioning the technologies into Geo and Planetary Science and Bioinformatics.
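The crawl-visualization step described above starts from a simple aggregation of fetched URLs. The sketch below shows only that aggregation step in pure Python (a library such as Bokeh would then render the counts as an interactive bar chart); the crawl-log tuples are hypothetical and do not reflect the actual Nutch data model.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical crawl log: (url, fetch_time_seconds) pairs, as a crawl
# front-end might expose them. Illustrative data, not the Nutch API.
crawl_log = [
    ("http://example.org/a", 0.21),
    ("http://example.org/b", 0.35),
    ("http://example.com/x", 0.12),
]

def pages_per_domain(log):
    """Aggregate fetched URLs by host -- the kind of summary a
    crawl-exploration bar chart would be driven by."""
    return Counter(urlparse(url).netloc for url, _ in log)

counts = pages_per_domain(crawl_log)
```

Feeding `counts` into any plotting library then gives a per-domain view of crawl coverage, which is the basic exploratory plot such an interface needs.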
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions under which his performance improves when one of the visual displays of the tracking errors is supplemented by an auditory feedback. The tracking error of the system that was only visually displayed was found to decrease, but not, in general, that of the auditorily supported system; it was therefore concluded that the auditory feedback unloads the visual system of the operator, who can then concentrate on the remaining exclusively visual displays.
Effects of local myopic defocus on refractive development in monkeys.
Smith, Earl L; Hung, Li-Fang; Huang, Juan; Arumugam, Baskar
2013-11-01
Visual signals that produce myopia are mediated by local, regionally selective mechanisms. However, little is known about spatial integration for signals that slow eye growth. The purpose of this study was to determine whether the effects of myopic defocus are integrated in a local manner in primates. Beginning at 24 ± 2 days of age, seven rhesus monkeys were reared with monocular spectacles that produced 3 diopters (D) of relative myopic defocus in the nasal visual field of the treated eye but allowed unrestricted vision in the temporal field (NF monkeys). Seven monkeys were reared with monocular +3 D lenses that produced relative myopic defocus across the entire field of view (FF monkeys). Comparison data from previous studies were available for 11 control monkeys, 8 monkeys that experienced 3 D of hyperopic defocus in the nasal field, and 6 monkeys exposed to 3 D of hyperopic defocus across the entire field. Refractive development, corneal power, and axial dimensions were assessed at 2- to 4-week intervals using retinoscopy, keratometry, and ultrasonography, respectively. Eye shape was assessed using magnetic resonance imaging. In response to full-field myopic defocus, the FF monkeys developed compensating hyperopic anisometropia, the degree of which was relatively constant across the horizontal meridian. In contrast, the NF monkeys exhibited compensating hyperopic changes in refractive error that were greatest in the nasal visual field. The changes in the pattern of peripheral refractions in the NF monkeys reflected interocular differences in vitreous chamber shape. As with form deprivation and hyperopic defocus, the effects of myopic defocus are mediated by mechanisms that integrate visual signals in a local, regionally selective manner in primates. These results are in agreement with the hypothesis that peripheral vision can influence eye shape and potentially central refractive error in a manner that is independent of central visual experience.
Bernardo, Antonio; Evins, Alexander I; Visca, Anna; Stieg, Phillip E
2013-06-01
The facial nerve has a short intracranial course but crosses critical and frequently accessed surgical structures during cranial base surgery. When performing approaches to complex intracranial regions, it is essential to understand the nerve's conventional and topographic anatomy from different surgical perspectives as well as its relationship with surrounding structures. To describe the entire intracranial course of the facial nerve as observed via different neurosurgical approaches and to provide an analytical evaluation of the degree of nerve exposure achieved with each approach. Anterior petrosectomies (middle fossa, extended middle fossa), posterior petrosectomies (translabyrinthine, retrolabyrinthine, transcochlear), a retrosigmoid, a far lateral, and anterior transfacial (extended maxillectomy, mandibular swing) approaches were performed on 10 adult cadaveric heads (20 sides). The degree of facial nerve exposure achieved per segment for each approach was assessed and graded independently by 3 surgeons. The anterior petrosal approaches offered good visualization of the nerve in the cerebellopontine angle and intracanalicular portion superiorly, whereas the posterior petrosectomies provided more direct visualization without the need for cerebellar retraction. The far lateral approach exposed part of the posterior and the entire inferior quadrants, whereas the retrosigmoid approach exposed parts of the superior and inferior quadrants and the entire posterior quadrant. Anterior and anteroinferior exposure of the facial nerve was achieved via the transfacial approaches. The surgical route used must rely on the size, nature, and general location of the lesion, as well as on the capability of the particular approach to better expose the appropriate segment of the facial nerve.
The UCSC genome browser and associated tools
Haussler, David; Kent, W. James
2013-01-01
The UCSC Genome Browser (http://genome.ucsc.edu) is a graphical viewer for genomic data now in its 13th year. Since the early days of the Human Genome Project, it has presented an integrated view of genomic data of many kinds. Now home to assemblies for 58 organisms, the Browser presents visualization of annotations mapped to genomic coordinates. The ability to juxtapose annotations of many types facilitates inquiry-driven data mining. Gene predictions, mRNA alignments, epigenomic data from the ENCODE project, conservation scores from vertebrate whole-genome alignments and variation data may be viewed at any scale from a single base to an entire chromosome. The Browser also includes many other widely used tools, including BLAT, which is useful for alignments from high-throughput sequencing experiments. Private data uploaded as Custom Tracks and Data Hubs in many formats may be displayed alongside the rich compendium of precomputed data in the UCSC database. The Table Browser is a full-featured graphical interface, which allows querying, filtering and intersection of data tables. The Saved Session feature allows users to store and share customized views, enhancing the utility of the system for organizing multiple trains of thought. Binary Alignment/Map (BAM), Variant Call Format and the Personal Genome Single Nucleotide Polymorphisms (SNPs) data formats are useful for visualizing a large sequencing experiment (whole-genome or whole-exome), where the differences between the data set and the reference assembly may be displayed graphically. Support for high-throughput sequencing extends to compact, indexed data formats, such as BAM, bigBed and bigWig, allowing rapid visualization of large datasets from RNA-seq and ChIP-seq experiments via local hosting. PMID:22908213
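The Custom Tracks mechanism mentioned above is driven by simple declarative "track lines". A minimal sketch is below; the server URL, file name, and coordinates are hypothetical, and the exact attributes should be checked against the UCSC track format documentation.

```text
track type=bigWig name="Coverage" description="Example RNA-seq coverage" bigDataUrl=http://myserver.example.edu/sample1.bw

track name="candidateRegions" description="Example BED custom track"
chr21 33031597 33041570 region1
```

The first line points the Browser at a remotely hosted indexed bigWig file (the local-hosting pattern the abstract describes), while the second block embeds small BED annotations directly.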
Audio-visual affective expression recognition
NASA Astrophysics Data System (ADS)
Huang, Thomas S.; Zeng, Zhihong
2007-11-01
Automatic affective expression recognition has attracted more and more attention of researchers from different disciplines, which will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance the research in the affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
Human microbiome visualization using 3D technology.
Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C
2011-01-01
High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
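The 3D heat map described above is, at its core, a patients × species abundance matrix whose cell values become bar heights. The sketch below shows only that data layer in pure Python; the patient labels, species, and abundance values are hypothetical, and the published tool uses a game engine rather than this flattening step.

```python
# Toy relative-abundance matrix: rows = patients, columns = species.
patients = ["P1", "P2", "P3"]
species = ["E. coli", "S. epidermidis", "B. fragilis"]
abundance = [
    [0.60, 0.25, 0.15],
    [0.10, 0.70, 0.20],
    [0.33, 0.33, 0.34],
]

def to_bars(patients, species, abundance):
    """Flatten the matrix into (row, col, height) triples -- the
    geometry a 3D engine would draw as one bar per cell."""
    return [
        (i, j, abundance[i][j])
        for i in range(len(patients))
        for j in range(len(species))
    ]

bars = to_bars(patients, species, abundance)
```

The extra dimension is what lets bar height carry relative abundance while color remains free to encode a second attribute, which is the advantage over a flat 2D heat map that the abstract highlights.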
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Morlang, G.M.
1996-06-01
The use of neutron radiography for visualization of fluid flow through flow visualization modules has been very successful. Current experiments at the Penn State Breazeale Reactor serve to verify the mixing and transport of soluble boron under natural flow conditions as would be experienced in a pressurized water reactor. Different flow geometries have been modeled, including holes, slots, and baffles. Flow modules are constructed of aluminum box material 1 1/2 inches by 4 inches in varying lengths. An experimental flow system was built which pumps fluid to a head tank; natural circulation flow then occurs from the head tank through the flow visualization module to be radiographed. The entire flow system is mounted on a portable assembly to allow placement of the flow visualization module in front of the neutron beam port. A neutron-transparent fluorinert fluid is used to simulate water at different densities. Boron is modeled by gadolinium oxide powder as a tracer element, which is placed in a mixing assembly and injected into the system by a remotely operated electric valve once the reactor is at power. The entire sequence is recorded on real-time video. Still photographs are made frame-by-frame from the video tape. Computers are used to digitally enhance the video and still photographs. The data obtained from the enhancement will be used for verification of simple geometry predictions using the TRAC and RELAP thermal-hydraulic codes. A detailed model of a reactor vessel inlet plenum, downcomer region, flow distribution area and core inlet is being constructed to model the AP600 plenum. Successive radiography experiments of each section of the model under identical conditions will provide a complete vessel/core model for comparison with the thermal-hydraulic codes.
Pigeon visual short-term memory directly compared to primates.
Wright, Anthony A; Elmore, L Caitlin
2016-02-01
Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
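The inverse power-law characterization described above, d' = a · N^(−b) for display size N, can be fit by ordinary least squares on log-transformed values. The d' values below are hypothetical stand-ins, not the published data; only the fitting procedure is the point.

```python
import math

# Hypothetical d-prime values for display sizes 2..6, roughly
# following d' = a * N**(-b). Illustrative, not the published data.
display_sizes = [2, 3, 4, 5, 6]
d_primes = [2.10, 1.45, 1.12, 0.93, 0.80]

def fit_power_law(xs, ys):
    """Least-squares fit of y = a * x**(-b) via log-log regression:
    log y = log a - b * log x."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
             / sum((x - mx) ** 2 for x in lx))
    intercept = my - slope * mx
    return math.exp(intercept), -slope   # a, b

a, b = fit_power_law(display_sizes, d_primes)
```

A species comparison of the kind the abstract describes then reduces to comparing the fitted (a, b) pairs: similar exponents b across pigeons, monkeys, and humans would indicate a similar rate of memory decline with load, even when overall accuracy a differs.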
Single unit approaches to human vision and memory.
Kreiman, Gabriel
2007-08-01
Research on the visual system focuses on using electrophysiology, pharmacology and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow examining humans but show a much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required due to particular pathological conditions including epilepsy and Parkinson's disease. We review our knowledge about the visual system and visual memories in the human brain at the single neuron level. The properties of the human brain seem to be broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
NASA GIBS & Worldview - Lesson Ready Visualizations
NASA Astrophysics Data System (ADS)
Cechini, M. F.; Boller, R. A.; Baynes, K.; Gunnoe, T.; Wong, M. M.; Schmaltz, J. E.; De Luca, A. P.; King, J.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Alarcon, C.; De Cesare, C.; Pressley, N. N.
2016-12-01
For more than 20 years, the NASA Earth Observing System (EOS) has operated dozens of remote sensing satellites collecting 14 Petabytes of data that span thousands of science parameters. Within these observations are keys that Earth scientists have used to unlock much of what we understand about our planet. Also contained within these observations are a myriad of opportunities for learning and education. The trick is making them accessible to educators and students in convenient and simple ways so that effort can be spent on lesson enrichment and not on overcoming technical hurdles. The NASA Global Imagery Browse Services (GIBS) system and NASA Worldview website provide a unique view into EOS data through daily full resolution visualizations of hundreds of earth science parameters. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. Accompanying the visualizations are visual aids such as color legends, place names, and orbit tracks. By using these visualizations, educators and students can observe natural phenomena that enrich a scientific education. This presentation will provide an overview of the visualizations available in NASA GIBS and Worldview and how they are accessed. We will also provide real-world examples of how the visualizations have been used in educational settings including planetariums, visitor centers, hack-a-thons, and public organizations.
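GIBS visualizations are accessed as map tiles over a REST-style WMTS endpoint. The sketch below builds such a tile URL; the endpoint pattern follows the public GIBS documentation, but the layer name, date, and tile-matrix values here are only examples and should be checked against the GIBS capabilities document.

```python
# Base endpoint for the EPSG:4326 "best available" imagery set.
GIBS = "https://gibs.earthdata.nasa.gov/wmts/epsg4326/best"

def tile_url(layer, date, tilematrixset, zoom, row, col, ext="jpg"):
    """Assemble a GIBS WMTS REST tile request for one imagery tile."""
    return (f"{GIBS}/{layer}/default/{date}/{tilematrixset}/"
            f"{zoom}/{row}/{col}.{ext}")

url = tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
               "2016-12-01", "250m", 2, 1, 3)
```

Because each tile is addressed by layer, date, and position, an educator can fetch imagery for any day of a mission with nothing more than an HTTP GET, which is what keeps the technical hurdle low.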
Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier
2013-10-28
The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
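The "optimal combination according to spatial reliability" that the abstract invokes is the standard maximum-likelihood cue-combination rule: each modality is weighted by its inverse variance. The sketch below uses hypothetical numbers, not values from the study.

```python
def fuse(x_a, var_a, x_v, var_v):
    """Maximum-likelihood fusion of an auditory estimate (x_a, var_a)
    and a visual estimate (x_v, var_v). Weights are inverse variances;
    the fused variance is smaller than either input variance."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)   # visual weight
    x = w_v * x_v + (1 - w_v) * x_a               # fused location
    var = 1 / (1 / var_v + 1 / var_a)             # fused uncertainty
    return x, var, w_v

# Central vision: vision far more reliable -> strong visual capture
# of the sound (the ventriloquist effect). Illustrative values.
x, var, w_v = fuse(x_a=10.0, var_a=4.0, x_v=0.0, var_v=0.5)
```

In the periphery, visual variance grows relative to auditory variance, so `w_v` falls and the pull toward the visual location weakens, which is exactly the eccentricity-dependent decline in the ventriloquist effect reported above.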
Learning to Recognize Patterns: Changes in the Visual Field with Familiarity
NASA Astrophysics Data System (ADS)
Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo
1995-01-01
Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three component model of pattern perception. In the first stage a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informativeness areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
What makes a visualization memorable?
Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter
2013-12-01
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest-scale visualization study to date using 2,070 single-panel visualizations, categorized by visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human-recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether, our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing.
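The time-resolved decoding idea reviewed in this abstract can be illustrated with a minimal sketch, assuming synthetic two-category sensor data and a nearest-centroid classifier as a deliberately simple stand-in for the multivariate decoders used in the literature (all data and parameters here are illustrative, not any study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 32, 100  # trials per category
onset = 50  # hypothetical timepoint at which category information emerges

# Synthetic "M/EEG" data: two categories, identical before `onset`,
# separated by a fixed sensor pattern afterwards.
pattern = rng.normal(0, 1, n_sensors)
X_a = rng.normal(0, 1, (n_trials, n_sensors, n_times))
X_b = rng.normal(0, 1, (n_trials, n_sensors, n_times))
X_b[:, :, onset:] += pattern[None, :, None]

def decode_timecourse(X_a, X_b, n_train):
    """Nearest-centroid decoding accuracy at each timepoint."""
    acc = np.zeros(X_a.shape[2])
    n_test = X_a.shape[0] - n_train
    labels = np.r_[np.zeros(n_test), np.ones(n_test)]
    for t in range(X_a.shape[2]):
        ca = X_a[:n_train, :, t].mean(axis=0)  # training-set centroids
        cb = X_b[:n_train, :, t].mean(axis=0)
        test = np.vstack([X_a[n_train:, :, t], X_b[n_train:, :, t]])
        d_a = np.linalg.norm(test - ca, axis=1)
        d_b = np.linalg.norm(test - cb, axis=1)
        acc[t] = np.mean((d_b < d_a) == labels)
    return acc

acc = decode_timecourse(X_a, X_b, n_train=100)
# Accuracy hovers near chance (0.5) before `onset` and rises sharply after,
# mirroring the emergence of decodable category structure over time.
```

Plotting `acc` against time yields the familiar decoding time-course in which above-chance information appears only after the information-onset latency.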
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head-motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable aircushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head-motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head-rotation. By comparing congruent with incongruent conditions we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head-movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.
Similarities in human visual and declared measures of preference for opposite-sex faces.
Griffey, Jack A F; Little, Anthony C
2014-01-01
Facial appearance in humans is associated with attraction and mate choice. Numerous studies have identified that adults display directional preferences for certain facial traits including symmetry, averageness, and sexually dimorphic traits. Typically, studies measuring human preference for these traits examine declared (e.g., choice or ratings of attractiveness) or visual preferences (e.g., looking time) of participants. However, the extent to which visual and declared preferences correspond remains relatively untested. In order to evaluate the relationship between these measures we examined visual and declared preferences displayed by men and women for opposite-sex faces manipulated across three dimensions (symmetry, averageness, and masculinity) and compared preferences from each method. Results indicated that participants displayed significant visual and declared preferences for symmetrical, average, and appropriately sexually dimorphic faces. We also found that declared and visual preferences correlated weakly but significantly. These data indicate that visual and declared preferences for manipulated facial stimuli produce similar directional preferences across participants and are also correlated with one another within participants. Both methods therefore may be considered appropriate to measure human preferences. However, while both methods appear likely to generate similar patterns of preference at the sample level, the weak nature of the correlation between visual and declared preferences in our data suggests some caution in assuming visual preferences are the same as declared preferences at the individual level. Because each method of measuring preference has both advantages and drawbacks, we suggest that a combined approach is most useful in outlining population-level preferences for traits.
Limanowski, Jakub; Blankenburg, Felix
2016-03-02
The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. 
Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation.
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When the people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the human being or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area tracks the movement of the human head fairly well.
Visual Aids and Multimedia in Second Language Acquisition
ERIC Educational Resources Information Center
Halwani, Noha
2017-01-01
Education involves more than simply passing the final test. Rather, it is the process of educating an entire generation. This research project focused on language learners of English as a Second Language. This action research was conducted in an ESL classroom in H. Frank Carey High School, one of five high schools in the Sewanhaka Central District…
Sketching in Design Journals: An Analysis of Visual Representations in the Product Design Process
ERIC Educational Resources Information Center
Lau, Kimberly; Oehlberg, Lora; Agogino, Alice
2009-01-01
This paper explores the sketching behavior of designers and the role of sketching in the design process. Observations from a descriptive study of sketches provided in design journals, characterized by a protocol measuring sketching activities, are presented. A distinction is made between journals that are entirely tangible and those that contain…
A La Carts: You Want Wireless Mobility? Have a COW
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
Computers on wheels, or COWs, combine the wireless technology of today with the audio/visual carts of yesteryear for an entirely new spin on mobility. Increasingly used by districts with laptop computing initiatives, COWs are among the hottest high-tech sellers in schools today, according to market research firm Quality Education Data. In this…
The Canonical Alfred Hitchcock
ERIC Educational Resources Information Center
Lewis, Michael J.
2010-01-01
Alfred Hitchcock is a major figure of popular culture. He was one of the founding fathers of the cinematic art and, together with Eisenstein and Murnau, helped define its visual language. So fruitful was he that a single film could spawn an entire genre, as "Psycho" helped create the modern horror film and "North by Northwest" the style and tone…
The Two Worlds of School: Differences in the Photographs of Black and White Adolescents.
ERIC Educational Resources Information Center
Damico, Sandra Bowman
This paper presents a study conducted to document adolescents' visual perceptions of school. Specifically, an attempt was made to determine whether black and white adolescents, when given cameras, an entire school day, and complete freedom from class assignments, would select different physical and social aspects of their school environment to…
Business Documents Don't Have to Be Boring
ERIC Educational Resources Information Center
Schultz, Benjamin
2006-01-01
With business documents, visuals can serve to enhance the written word in conveying the message. Images can be especially effective when used subtly, on part of the page, on successive pages to provide continuity, or even set as watermarks over the entire page. A main reason given for traditional text-only business documents is that they are…
We All Belong! The Tecumseh Mural Project
ERIC Educational Resources Information Center
Lukawecky, Kristine
2009-01-01
The author of this article describes how, as the visual-arts teacher of Tecumseh Public School, she brought the entire school and community together by creating a mural that promoted belonging. The mural involved tapping into the creativity of all 400 students from kindergarten through eighth grade at Tecumseh while creating a work of art with…
The µ-opioid system promotes visual attention to faces and eyes.
Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri
2016-12-01
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism would decrease, overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest.
Grohar: Automated Visualization of Genome-Scale Metabolic Models and Their Pathways.
Moškon, Miha; Zimic, Nikolaj; Mraz, Miha
2018-05-01
Genome-scale metabolic models (GEMs) have become a powerful tool for the investigation of the entire metabolism of an organism in silico. These models are, however, often extremely hard to reconstruct and also difficult to apply to the selected problem. Visualization of a GEM allows us to comprehend the model more easily, to perform graphical analysis, to find and correct faulty relations, to identify the parts of the system with a designated function, etc. Even though several approaches for the automatic visualization of GEMs have been proposed, metabolic maps are still manually drawn or at least require a large amount of manual curation. We present Grohar, a computational tool for automatic identification and visualization of GEM (sub)networks and their metabolic fluxes. These (sub)networks can be specified directly by listing the metabolites of interest or indirectly by providing reference metabolic pathways from different sources, such as a KEGG, SBML, or Matlab file. These pathways are identified within the GEM using three different pathway alignment algorithms. Grohar also supports the visualization of model adjustments (e.g., activation or inhibition of metabolic reactions) after perturbations are induced.
Shibai, Atsushi; Arimoto, Tsunehiro; Yoshinaga, Tsukasa; Tsuchizawa, Yuta; Khureltulga, Dashdavaa; Brown, Zuben P; Kakizuka, Taishi; Hosoda, Kazufumi
2018-06-05
Visual recognition of conspecifics is necessary for a wide range of social behaviours in many animals. Medaka (Japanese rice fish), a commonly used model organism, are known to be attracted by the biological motion of conspecifics. However, biological motion is a composite of both body-shape motion and the entire-field motion trajectory (i.e., posture and motion-trajectory elements, respectively), and it had not been established which element mediates the attractiveness. Here, we show that either the posture or the motion-trajectory element alone can attract medaka. We decomposed the biological motion of the medaka into the two elements and synthesized visual stimuli that contain both, either, or none of the two elements. We found that medaka were attracted by visual stimuli that contain at least one of the two elements. Together with previously reported static visual cues, these results add to the accumulating evidence that multiple kinds of information support conspecific recognition in medaka. Our strategy of decomposing biological motion into these partial elements is applicable to other animals, and further studies using this technique will enhance the basic understanding of visual recognition of conspecifics.
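The decomposition described above (body-shape motion versus entire-field trajectory) can be sketched in a few lines, assuming point-light coordinates are available as an array. The centroid-based split below is a plausible minimal reading of the two elements, not the authors' actual stimulus-synthesis code:

```python
import numpy as np

def decompose_biological_motion(coords):
    """Split point-light motion (frames, points, 2) into a posture element
    (body-shape motion with the whole-field translation removed) and a
    motion-trajectory element (the centroid path)."""
    trajectory = coords.mean(axis=1, keepdims=True)  # centroid per frame
    posture = coords - trajectory                    # centred body shape
    return posture, trajectory

# Toy stimulus: a 3-point "walker" drifting rightward while its shape wobbles.
t = np.linspace(0, 2 * np.pi, 60)
base = np.array([[0.0, 1.0], [-0.5, 0.0], [0.5, 0.0]])
wobble = 0.1 * np.sin(t)[:, None, None] * np.array([[1, 0], [-1, 0], [0, 1]])
drift = np.stack([0.05 * t, np.zeros_like(t)], axis=1)[:, None, :]
coords = base[None] + wobble + drift

posture, trajectory = decompose_biological_motion(coords)
# The posture element is centred on every frame, and the two elements
# sum back to the original stimulus.
assert np.allclose(posture.mean(axis=1), 0)
assert np.allclose(posture + trajectory, coords)
```

A "posture-only" stimulus corresponds to animating `posture` at a fixed location, while a "trajectory-only" stimulus moves a static dot cloud along `trajectory`.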
AWE: Aviation Weather Data Visualization Environment
NASA Technical Reports Server (NTRS)
Spirkovska, Lilly; Lodha, Suresh K.; Norvig, Peter (Technical Monitor)
2000-01-01
Weather is one of the major causes of aviation accidents. General aviation (GA) flights account for 92% of all aviation accidents. In spite of all the official and unofficial sources of weather visualization tools available to pilots, there is an urgent need for visualizing weather-related data in a form tailored for general aviation pilots. Our system, the Aviation Weather Data Visualization Environment (AWE), presents graphical displays of meteorological observations, terminal area forecasts, and winds aloft forecasts onto a cartographic grid specific to the pilot's area of interest. Decisions regarding the graphical display and design are made based on careful consideration of user needs. The integrated visual display of these elements of weather reports is designed for the use of GA pilots as a weather briefing and route selection tool. AWE links the weather information to the flight's path and schedule. The pilot can interact with the system to obtain aviation-specific weather for the entire area or for a specific route to explore what-if scenarios and make "go/no-go" decisions. The system, as evaluated by some pilots at NASA Ames Research Center, was found to be useful.
Understand your Algorithm: Drill Down to Sample Visualizations in Jupyter Notebooks
NASA Astrophysics Data System (ADS)
Mapes, B. E.; Ho, Y.; Cheedela, S. K.; McWhirter, J.
2017-12-01
Statistics are the currency of climate dynamics, but the space of all possible algorithms is fathomless - especially for 4-dimensional weather-resolving data that many "impact" variables depend on. Algorithms are designed on data samples, but how do you know if they measure what you expect when turned loose on Big Data? We will introduce the year-1 prototype of a 3-year scientist-led, NSF-supported, Unidata-quality software stack called DRILSDOWN (https://brianmapes.github.io/EarthCube-DRILSDOWN/) for automatically extracting, integrating, and visualizing multivariate 4D data samples. Based on a customizable "IDV bundle" of data sources, fields and displays supplied by the user, the system will teleport its space-time coordinates to fetch Cases of Interest (edge cases, typical cases, etc.) from large aggregated repositories. These standard displays can serve as backdrops to overlay with your value-added fields (such as derived quantities stored on a user's local disk). Fields can be readily pulled out of the visualization object for further processing in Python. The hope is that algorithms successfully tested in this visualization space will then be lifted out and added to automatic processing toolchains, lending confidence in the next round of processing, to seek the next Cases of Interest, in light of a user's statistical measures of "Interest". To log the scientific work done in this vein, the visualizations are wrapped in iPython-based Jupyter notebooks for rich, human-readable documentation (indeed, quasi-publication with formatted text, LaTeX math, etc.). Such notebooks are readable and executable, with digital replicability and provenance built in. The entire digital object of a case study can be stored in a repository, where libraries of these Case Study Notebooks can be examined in a browser.
Model data (the session topic) are of course especially convenient for this system, but observations of all sorts can also be brought in, overlain, and differenced or otherwise co-processed. The system is available in various tiers, from minimal-install GUI visualizations only, to GUI+Notebook system, to the full system with the repository software. We seek interested users, initially in a "beta tester" mode with the goodwill to offer reports and requests to help drive improvements in project years 2 and 3.
Some comments on particle image displacement velocimetry
NASA Technical Reports Server (NTRS)
Lourenco, L. M.
1988-01-01
Laser speckle velocimetry (LSV), or particle image displacement velocimetry, is introduced. This technique provides the simultaneous visualization of the two-dimensional streamline pattern in unsteady flows as well as the quantification of the velocity field over an entire plane. The advantage of this technique is that the velocity field can be measured over an entire plane of the flow field simultaneously, with accuracy and spatial resolution. From this the instantaneous vorticity field can be easily obtained. This constitutes a great asset for the study of a variety of flows that evolve stochastically in both space and time. The basic concept of LSV, methods of data acquisition and reduction, examples of its use, and parameters that affect its utilization are described.
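The core of the data-reduction step in particle image velocimetry is locating the peak of the cross-correlation between two interrogation windows recorded a short time apart. A minimal FFT-based sketch (synthetic data; the window size and peak-unwrapping convention are illustrative):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows from the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped correlation indices to signed shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

# Synthetic particle image shifted by (3, -2) pixels between exposures.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(piv_displacement(frame, shifted))  # → (3, -2)
```

Dividing the recovered displacement by the inter-exposure time gives the local velocity; repeating the estimate over a grid of windows yields the velocity field over the whole plane, from which vorticity can be differenced.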
Visual performance modeling in the human operator simulator
NASA Technical Reports Server (NTRS)
Strieb, M. I.
1979-01-01
A brief description of the history of the development of the human operator simulator (HOS) model is presented. Features of the HOS micromodels that affect the acquisition of visual performance data are discussed, along with preliminary details on a HOS pilot model designed to predict the results of visual performance workload data obtained through oculometer studies on pilots in real and simulated approaches and landings.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates giving robots the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provides an integration example including two robotic platforms.
Influence of Immersive Human Scale Architectural Representation on Design Judgment
NASA Astrophysics Data System (ADS)
Elder, Rebecca L.
Unrealistic visual representations of architecture within our existing environments have lost all reference to the human senses. As a design tool, visual and auditory stimuli can be utilized to determine humans' perception of design. This experiment renders varying building inputs within different sites, simulated with corresponding immersive visual and audio sensory cues. Introducing audio has been shown to influence the way a person perceives a space, yet most inhabitants rely strictly on their sense of vision to make design judgments. Though not as apparent, users prefer spaces that have a better quality of sound and comfort. Through a series of questions, we can begin to analyze whether a design is fit for both an acoustic and a visual environment.
Gilbert, T L; Bennett, T A; Maestas, D C; Cimino, D F; Prossnitz, E R
2001-03-27
After stimulation by ligand, most G protein-coupled receptors (GPCRs) undergo rapid phosphorylation, followed by desensitization and internalization. In the case of the N-formyl peptide receptor (FPR), these latter two processing steps have been shown to be entirely dependent on phosphorylation of the receptor's carboxy terminus. We have previously demonstrated that FPR internalization can occur in the absence of receptor desensitization, indicating that FPR desensitization and internalization are regulated differentially. In this study, we have investigated whether human chemoattractant receptors internalize via clathrin-coated pits. Internalization of the FPR transiently expressed in HEK 293 cells was shown to be dependent upon receptor phosphorylation. Despite this, internalization of the FPR, as well as the C5a receptor, was demonstrated to be independent of the actions of arrestin, dynamin, and clathrin. In addition, we utilized fluorescence microscopy to visualize the FPR and beta(2)-adrenergic receptor as they internalized in the same cell, revealing distinct sites of internalization. Last, we found that a nonphosphorylatable mutant of the FPR, unable to internalize, was competent to activate p44/42 MAP kinase. Together, these results demonstrate not only that the FPR internalizes via an arrestin-, dynamin-, and clathrin-independent pathway but also that signal transduction to MAP kinases occurs in an internalization-independent manner.
Motion analysis for duplicate frame removal in wireless capsule endoscope
NASA Astrophysics Data System (ADS)
Lee, Hyun-Gyu; Choi, Min-Kook; Lee, Sang-Chul
2011-03-01
Wireless capsule endoscopy (WCE) has been intensively researched recently due to its convenience for diagnosis and extended detection coverage of some diseases. Typically, a full recording covering the entire human digestive system requires about 8 to 12 hours for a patient carrying a capsule endoscope and a portable image receiver/recorder unit, which produces 120,000 image frames on average. In spite of the benefits of close examination, WCE-based testing faces a barrier to quick diagnosis in that a trained diagnostician must examine a huge number of images for close investigation, normally over 2 hours. The main purpose of our work is to present a novel machine vision approach that reduces diagnosis time by automatically detecting duplicated recordings in the small intestine caused by backward camera movement, which typically contain redundant information. The developed technique could be integrated with a visualization tool that supports intelligent inspection methods, such as automatic play-speed control. Our experimental results show the high accuracy of the technique, which detected 989 duplicate image frames out of 10,000, equivalent to 9.9% data reduction, in a WCE video from a real human subject. With some selected parameters, we achieved a correct detection ratio of 92.85% and a false detection ratio of 13.57%.
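As a toy illustration of duplicate-frame detection (not the paper's actual motion-analysis method), consecutive frames can be compared with a normalized correlation and flagged when similarity exceeds a threshold; the threshold value here is arbitrary:

```python
import numpy as np

def find_duplicates(frames, threshold=0.95):
    """Return indices of frames whose normalized correlation with the
    preceding frame exceeds `threshold` (candidate redundant frames)."""
    duplicates = []
    for i in range(1, len(frames)):
        a = frames[i - 1].astype(float).ravel()
        b = frames[i].astype(float).ravel()
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sim = (a @ b) / denom if denom else 1.0
        if sim > threshold:
            duplicates.append(i)
    return duplicates

# Toy video: frame 1 nearly repeats frame 0; frame 2 is new content.
rng = np.random.default_rng(2)
f0 = rng.random((32, 32))
f1 = f0 + rng.normal(0, 0.01, f0.shape)  # near-duplicate (backward motion)
f2 = rng.random((32, 32))                # distinct frame
print(find_duplicates([f0, f1, f2]))     # → [1]
```

A real system would instead estimate camera motion between frames (as the paper does) so that only backward movement, rather than any static scene, triggers removal.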
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is one of the effective methods for analyzing human motor function. We have previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking by additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated through experiments with four healthy adults. The proposed compensator precisely improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed from measurement data on visual target tracking for each subject, so the properties of hand movement for different subjects can be reflected in the structure of the compensator. The proposed method therefore has the potential to accommodate the individual characteristics of patients with various movement disorders caused by brain dysfunction.
Does visual attention drive the dynamics of bistable perception?
Dieter, Kevin C.; Brascamp, Jan; Tadin, Duje; Blake, Randolph
2016-01-01
How does attention interact with incoming sensory information to determine what we perceive? One domain in which this question has received serious consideration is that of bistable perception: a captivating class of phenomena that involves fluctuating visual experience in the face of physically unchanging sensory input. Here, some investigations have yielded support for the idea that attention alone determines what is seen, while others have implicated entirely attention-independent processes in driving alternations during bistable perception. We review the body of literature addressing this divide and conclude that in fact both sides are correct – depending on the form of bistable perception being considered. Converging evidence suggests that visual attention is required for alternations in the type of bistable perception called binocular rivalry, while alternations during other types of bistable perception appear to continue without requiring attention. We discuss some implications of this differential effect of attention for our understanding of the mechanisms underlying bistable perception, and examine how these mechanisms operate during our everyday visual experiences. PMID:27230785
The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation
Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark
2012-01-01
Background: Since the origin of psychological science, a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings: Using independent-component analysis (ICA), we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance: Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose that subjective-pattern structure is represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low-frequency modulation. PMID:22292053
Mapping the Primate Visual System with [2-14C]Deoxyglucose
NASA Astrophysics Data System (ADS)
Macko, Kathleen A.; Jarvis, Charlene D.; Kennedy, Charles; Miyaoka, Mikoto; Shinohara, Mami; Sokoloff, Louis; Mishkin, Mortimer
1982-10-01
The [2-14C]deoxyglucose method was used to identify the cerebral areas related to vision in the rhesus monkey (Macaca mulatta). This was achieved by comparing glucose utilization in a visually stimulated with that in a visually deafferented hemisphere. The cortical areas related to vision included the entire expanse of striate, prestriate, and inferior temporal cortex as far forward as the temporal pole, the posterior part of the inferior parietal lobule, and the prearcuate and inferior prefrontal cortex. Subcortically, in addition to the dorsal lateral geniculate nucleus and superficial layers of the superior colliculus, the structures related to vision included large parts of the pulvinar, caudate, putamen, claustrum, and amygdala. These results, which are consonant with a model of visual function that postulates an occipito-temporo-prefrontal pathway for object vision and an occipito-parieto-prefrontal pathway for spatial vision, reveal the full extent of those pathways and identify their points of contact with limbic, striatal, and diencephalic structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
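The abstract does not detail the estimator itself. Classical state estimation is commonly formulated as weighted least squares over a (linearized) measurement model; the toy sketch below illustrates that formulation only and is not the paper's implementation — the function name and the linear model z = Hx are assumptions:

```python
import numpy as np

def wls_state_estimate(H, z, sigmas):
    """Weighted least-squares estimate x of a linear measurement model
    z = H x + noise, weighting each measurement by 1/sigma^2.
    Solves (H^T W H) x = H^T W z for x."""
    W = np.diag(1.0 / np.asarray(sigmas, dtype=float) ** 2)
    A = H.T @ W @ H          # gain matrix
    b = H.T @ W @ z          # weighted measurement projection
    return np.linalg.solve(A, b)
```

In a power-system setting, z would hold voltage and power measurements, H the sensitivity of each measurement to the state variables, and sigmas the meter accuracies; production DSSE iterates this step over a nonlinear model.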
Cognitive processes facilitated by contextual cueing: evidence from event-related brain potentials.
Schankin, Andrea; Schubö, Anna
2009-05-01
Finding a target in repeated search displays is faster than finding the same target in novel ones (contextual cueing). It is assumed that the visual context (the arrangement of the distracting objects) is used to guide attention efficiently to the target location. Alternatively, other factors, e.g., facilitation of early visual processing or of response selection, may play a role as well. In a contextual cueing experiment, participants' electrophysiological brain activity was recorded. Participants identified the target faster and more accurately in repeatedly presented displays. In this condition, the N2pc, a component reflecting the allocation of visual-spatial attention, was enhanced, indicating that attention was allocated more efficiently to those targets. However, response-related processes, reflected by the LRP, were also facilitated, indicating that guidance of attention cannot account for the entire contextual cueing benefit.
Extralenticular and Lenticular Aspects of Accommodation and Presbyopia in Human Versus Monkey Eyes
Croft, Mary Ann; McDonald, Jared P.; Katz, Alexander; Lin, Ting-Li; Lütjen-Drecoll, Elke; Kaufman, Paul L.
2013-01-01
Purpose. To determine if the accommodative forward movements of the vitreous zonule and lens equator occur in the human eye, as they do in the rhesus monkey eye; to investigate the connection between the vitreous zonule posterior insertion zone and the posterior lens equator; and to determine which components—muscle apex width, lens thickness, lens equator position, vitreous zonule, circumlental space, and/or other intraocular dimensions, including those stated in the objectives above—are most important in predicting accommodative amplitude and presbyopia. Methods. Accommodation was induced pharmacologically in 12 visually normal human subjects (ages 19–65 years) and by midbrain electrical stimulation in 11 rhesus monkeys (ages 6–27 years). Ultrasound biomicroscopy imaged the entire ciliary body, anterior and posterior lens surfaces, and the zonule. Relevant distances were measured in the resting and accommodated eyes. Stepwise regression analysis determined which variables were the most important predictors. Results. The human vitreous zonule and lens equator move forward (anteriorly) during accommodation, and their movements decline with age, as in the monkey. Over all ages studied, age could explain accommodative amplitude, but not as well as accommodative lens thickening and resting muscle apex thickness did together. Accommodative change in distances between the vitreous zonule insertion zone and the posterior lens equator or muscle apex were important for predicting accommodative lens thickening. Conclusions. Our findings quantify the movements of the zonule and ciliary muscle during accommodation, and identify their age-related changes that could impact the optical change that occurs during accommodation and IOL function. PMID:23745002
Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex
Jeong, Su Keun
2016-01-01
The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. 
Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642
NASA Astrophysics Data System (ADS)
Masterson, Timothy A.; Dill, Allison L.; Eberlin, Livia S.; Mattarozzi, Monica; Cheng, Liang; Beck, Stephen D. W.; Bianchi, Federica; Cooks, R. Graham
2011-08-01
Desorption electrospray ionization mass spectrometry (DESI-MS) has been successfully used to discriminate between normal and cancerous human tissue from different anatomical sites. On the basis of this, DESI-MS imaging was used to characterize human seminoma and adjacent normal tissue. Seminoma and adjacent normal paired human tissue sections (40 tissues) from 15 patients undergoing radical orchiectomy were flash frozen in liquid nitrogen and sectioned to 15 μm thickness and thaw mounted to glass slides. The entire sample was two-dimensionally analyzed by the charged solvent spray to form a molecular image of the biological tissue. DESI-MS images were compared with formalin-fixed, hematoxylin and eosin (H&E) stained slides of the same material. Increased signal intensity was detected for two seminolipids [seminolipid (16:0/16:0) and seminolipid (30:0)] in the normal tubule testis tissue; these compounds were undetectable in seminoma tissue, as well as from the surrounding fat, muscle, and blood vessels. A glycerophosphoinositol [PI(18:0/20:4)] was also found at increased intensity in the normal testes tubule tissue when compared with seminoma tissue. Ascorbic acid (i.e., vitamin C) was found at increased amounts in seminoma tissue when compared with normal tissue. DESI-MS analysis was successfully used to visualize the location of several types of molecules across human seminoma and normal tissues. Discrimination between seminoma and adjacent normal testes tubules was achieved on the basis of the spatial distributions and varying intensities of particular lipid species as well as ascorbic acid. The increased presence of ascorbic acid within seminoma compared with normal seminiferous tubules was previously unknown.
ERIC Educational Resources Information Center
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception…
Visual Requirements for Human Drivers and Autonomous Vehicles
DOT National Transportation Integrated Search
2016-03-01
Identification of published literature between 1995 and 2013, focusing on determining the quantity and quality of visual information needed under both driving modes (i.e., human and autonomous) to navigate the road safely, especially as it pertains t...
Late maturation of visual spatial integration in humans
Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György
1999-01-01
Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600
Head-bobbing behavior in foraging Whooping Cranes
Cronin, T.; Kinloch, M.; Olsen, Glenn H.
2006-01-01
Many species of cursorial birds 'head-bob', that is, they alternately thrust the head forward, then hold it still as they walk. Such a motion stabilizes visual fields intermittently and could be critical for visual search; yet the time available for stabilization vs. forward thrust varies with walking speed. Whooping Cranes (Grus americana) are extremely tall birds that visually search the ground for seeds, berries, and small prey. We examined head movements in unrestrained Whooping Cranes using digital video subsequently analyzed with a computer graphical overlay. When foraging, the cranes walk at speeds that allow the head to be held still for at least 50% of the time. This behavior is thought to balance the two needs for covering as much ground as possible and for maximizing the time for visual fixation of the ground in the search for prey. Our results strongly suggest that in cranes, and probably many other bird species, visual fixation of the ground is required for object detection and identification. The thrust phase of the head-bobbing cycle is probably also important for vision. As the head moves forward, the movement generates visual flow and motion parallax, providing visual cues for distances and the relative locations of objects. The eyes commonly change their point of fixation when the head is moving too, suggesting that they remain visually competent throughout the entire cycle of thrust and stabilization.
[Survey on avoidable blindness and visual impairment in Panama].
López, Maritza; Brea, Ileana; Yee, Rita; Yi, Rodolfo; Carles, Víctor; Broce, Alberto; Limburg, Hans; Silva, Juan Carlos
2014-12-01
Determine prevalence of blindness and visual impairment in adults aged ≥ 50 years in Panama, identify their main causes, and characterize eye health services. Cross-sectional population study using standard Rapid Assessment of Avoidable Blindness methodology. Fifty people aged ≥ 50 years were selected from each of 84 clusters chosen through representative random sampling of the entire country. Visual acuity was assessed using a Snellen chart; lens and posterior pole status were assessed by direct ophthalmoscopy. Cataract surgery coverage was calculated and its quality assessed, along with causes of visual acuity < 20/60 and barriers to access to surgical treatment. A total of 4 125 people were examined (98.2% of the calculated sample). Age- and sex-adjusted prevalence of blindness was 3.0% (95% CI: 2.3-3.6). The main cause of blindness was cataract (66.4%), followed by glaucoma (10.2%). Cataract (69.2%) was the main cause of severe visual impairment and uncorrected refractive errors were the main cause of moderate visual impairment (60.7%). Surgical cataract coverage in individuals was 76.3%. Of all eyes operated for cataract, 58.0% achieved visual acuity ≤ 20/60 with available correction. Prevalence of blindness in Panama is in line with average prevalence found in other countries of the Region. This problem can be reduced, since 76.2% of cases of blindness and 85.0% of cases of severe visual impairment result from avoidable causes.
VisBricks: multiform visualization of large, inhomogeneous data.
Lex, Alexander; Schulz, Hans-Jörg; Streit, Marc; Partl, Christian; Schmalstieg, Dieter
2011-12-01
Large volumes of real-world data often exhibit inhomogeneities: vertically in the form of correlated or independent dimensions, and horizontally in the form of clustered or scattered data items. In essence, these inhomogeneities form the patterns in the data that researchers are trying to find and understand. Sophisticated statistical methods are available to reveal these patterns; however, the visualization of their outcomes is mostly still performed in a one-view-fits-all manner. In contrast, our novel visualization approach, VisBricks, acknowledges the inhomogeneity of the data and the need for different visualizations that suit the individual characteristics of the different data subsets. The overall visualization of the entire data set is patched together from smaller visualizations: there is one VisBrick for each cluster in each group of interdependent dimensions. Whereas the total impression of all VisBricks together gives a comprehensive high-level overview of the different groups of data, each VisBrick independently shows the details of the group of data it represents. State-of-the-art brushing and visual linking between all VisBricks furthermore allow the comparison of the groupings and the distribution of data items among them. In this paper, we introduce the VisBricks visualization concept, discuss its design rationale and implementation, and demonstrate its usefulness by applying it to a use case from the field of biomedicine. © 2011 IEEE
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Human lateral geniculate nucleus and visual cortex respond to screen flicker.
Krolak-Salmon, Pierre; Hénaff, Marie-Anne; Tallon-Baudry, Catherine; Yvert, Blaise; Guénot, Marc; Vighetto, Alain; Mauguière, François; Bertrand, Olivier
2003-01-01
The first electrophysiological study of the human lateral geniculate nucleus (LGN), optic radiation, striate, and extrastriate visual areas is presented in the context of presurgical evaluation of three epileptic patients (Patients 1, 2, and 3). Visual-evoked potentials to pattern reversal and face presentation were recorded with depth intracranial electrodes implanted stereotactically. For Patient 1, electrode anatomical registration, structural magnetic resonance imaging, and electrophysiological responses confirmed the location of two contacts in the geniculate body and one in the optic radiation. The first responses peaked approximately 40 milliseconds in the LGN in Patient 1 and 60 milliseconds in the V1/V2 complex in Patients 2 and 3. Moreover, steady state visual-evoked potentials evoked by the unperceived but commonly experienced video-screen flicker were recorded in the LGN, optic radiation, and V1/V2 visual areas. This study provides topographic and temporal propagation characteristics of steady state visual-evoked potentials along human visual pathways. We discuss the possible relationship between the oscillating signal recorded in subcortical and cortical areas and the electroencephalogram abnormalities observed in patients suffering from photosensitive epilepsy, particularly video-game epilepsy. The consequences of high temporal frequency visual stimuli delivered by ubiquitous video screens on epilepsy, headaches, and eyestrain must be considered.
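Steady-state visual-evoked responses to screen flicker, as recorded here, are conventionally quantified as spectral power at the flicker frequency. The paper's analysis pipeline is not given in the abstract; the sketch below shows only the generic periodogram approach, with an assumed function name:

```python
import numpy as np

def power_at_frequency(signal, fs, f0):
    """Spectral power of `signal` (1-D array sampled at `fs` Hz) at the
    FFT bin nearest to frequency `f0`, via a plain periodogram."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / n   # one-sided power
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)            # bin frequencies in Hz
    return spectrum[np.argmin(np.abs(freqs - f0))]
```

For a CRT refreshing at, say, 60 Hz, an SSVEP would appear as a power peak at 60 Hz (and possibly harmonics) in electrodes over the visual pathway.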
Sellers, Kristin K; Bennett, Davis V; Fröhlich, Flavio
2015-02-19
Neuronal firing responses in visual cortex reflect the statistics of visual input and emerge from the interaction with endogenous network dynamics. Artificial visual stimuli presented to animals in which the network dynamics were constrained by anesthetic agents or trained behavioral tasks have provided fundamental understanding of how individual neurons in primary visual cortex respond to input. In contrast, very little is known about the mesoscale network dynamics and their relationship to microscopic spiking activity in the awake animal during free viewing of naturalistic visual input. To address this gap in knowledge, we recorded local field potential (LFP) and multiunit activity (MUA) simultaneously in all layers of primary visual cortex (V1) of awake, freely viewing ferrets presented with naturalistic visual input (nature movie clips). We found that naturalistic visual stimuli modulated the entire oscillation spectrum; low-frequency oscillations were mostly suppressed whereas higher-frequency oscillations were enhanced. Averaged across all cortical layers, stimulus-induced changes in delta and alpha power correlated negatively with the MUA responses, whereas sensory-evoked increases in gamma power correlated positively with MUA responses. The time course of the band-limited power in these frequency bands provided evidence for a model in which naturalistic visual input switched V1 between two distinct, endogenously present activity states defined by the power of low (delta, alpha) and high (gamma) frequency oscillatory activity. Therefore, the two mesoscale activity states delineated in this study may define the degree of engagement of the circuit with the processing of sensory input. Copyright © 2014 Elsevier B.V. All rights reserved.
Representational Account of Memory: Insights from Aging and Synesthesia.
Pfeifer, Gaby; Ward, Jamie; Chan, Dennis; Sigala, Natasha
2016-12-01
The representational account of memory envisages perception and memory to be on a continuum rather than in discretely divided brain systems [Bussey, T. J., & Saksida, L. M. Memory, perception, and the ventral visual-perirhinal-hippocampal stream: Thinking outside of the boxes. Hippocampus, 17, 898-908, 2007]. We tested this account using a novel between-group design with young grapheme-color synesthetes, older adults, and young controls. We investigated how the disparate sensory-perceptual abilities between these groups translated into associative memory performance for visual stimuli that do not induce synesthesia. ROI analyses of the entire ventral visual stream showed that associative retrieval (a pair-associate retrieved in the absence of a visual stimulus) yielded enhanced activity in young and older adults' visual regions relative to synesthetes, whereas associative recognition (deciding whether a visual stimulus was the correct pair-associate) was characterized by enhanced activity in synesthetes' visual regions relative to older adults. Whole-brain analyses at associative retrieval revealed an effect of age in early visual cortex, with older adults showing enhanced activity relative to synesthetes and young adults. At associative recognition, the group effect was reversed: Synesthetes showed significantly enhanced activity relative to young and older adults in early visual regions. The inverted group effects observed between retrieval and recognition indicate that reduced sensitivity in visual cortex (as in aging) comes with increased activity during top-down retrieval and decreased activity during bottom-up recognition, whereas enhanced sensitivity (as in synesthesia) shows the opposite pattern. Our results provide novel evidence for the direct contribution of perceptual mechanisms to visual associative memory based on the examples of synesthesia and aging.
Neural codes of seeing architectural styles
Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.
2017-01-01
Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture. PMID:28071765
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing,' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'Telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides for congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
Effects of selection for cooperation and attention in dogs.
Gácsi, Márta; McGreevy, Paul; Kara, Edina; Miklósi, Adám
2009-07-24
It has been suggested that the functional similarities in the socio-cognitive behaviour of dogs and humans emerged as a consequence of comparable environmental selection pressures. Here we use a novel approach to account for the facilitating effect of domestication in dogs and reveal that selection for two factors under genetic influence (visual cooperation and focused attention) may have led independently to increased comprehension of human communicational cues. In Study 1, we observed the performance of three groups of dogs in utilizing the human pointing gesture in a two-way object choice test. We compared breeds selected to work while visually separated from human partners (N = 30, 21 breeds, clustered as independent worker group), with those selected to work in close cooperation and continuous visual contact with human partners (N = 30, 22 breeds, clustered as cooperative worker group), and with a group of mongrels (N = 30). Secondly, it has been reported that, in dogs, selective breeding to produce an abnormal shortening of the skull is associated with a more pronounced area centralis (location of greatest visual acuity). In Study 2, breeds with high cephalic index and more frontally placed eyes (brachycephalic breeds, N = 25, 14 breeds) were compared with breeds with low cephalic index and laterally placed eyes (dolichocephalic breeds, N = 25, 14 breeds). In Study 1, cooperative workers were significantly more successful in utilizing the human pointing gesture than both the independent workers and the mongrels. In Study 2, we found that brachycephalic dogs performed significantly better than dolichocephalic breeds. After controlling for environmental factors, we have provided evidence that at least two independent phenotypic traits with certain genetic variability affect the ability of dogs to rely on human visual cues.
This finding should caution researchers against making simple generalizations about the effects of domestication and on dog-wolf differences in the utilization of human visual signals.
Duncan, Robert O; Sample, Pamela A; Bowd, Christopher; Weinreb, Robert N; Zangwill, Linda M
2012-05-01
Altered metabolic activity has been identified as a potential contributing factor to the neurodegeneration associated with primary open angle glaucoma (POAG). Consequently, we sought to determine whether there is a relationship between the loss of visual function in human glaucoma and resting blood perfusion within primary visual cortex (V1). Arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI) was conducted in 10 participants with POAG. Resting cerebral blood flow (CBF) was measured from dorsal and ventral V1. Behavioral measurements of visual function were obtained using standard automated perimetry (SAP), short-wavelength automated perimetry (SWAP), and frequency-doubling technology perimetry (FDT). Measurements of CBF were compared to differences in visual function for the superior and inferior hemifield. Differences in CBF between ventral and dorsal V1 were correlated with differences in visual function for the superior versus inferior visual field. A statistical bootstrapping analysis indicated that the observed correlations between fMRI responses and measurements of visual function for SAP (r=0.49), SWAP (r=0.63), and FDT (r=0.43) were statistically significant (all p<0.05). Resting blood perfusion in human V1 is correlated with the loss of visual function in POAG. Altered CBF may be a contributing factor to glaucomatous optic neuropathy, or it may be an indication of post-retinal glaucomatous neurodegeneration caused by damage to the retinal ganglion cells. Copyright © 2012 Elsevier Ltd. All rights reserved.
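The bootstrap significance analysis described in the abstract can be illustrated with a short sketch. The following is a generic percentile-bootstrap confidence interval for a Pearson correlation, assuming one paired (CBF, perimetry) value per participant; the function name, resampling scheme, and iteration count are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def bootstrap_corr(x, y, n_boot=10000, seed=0):
    """Percentile-bootstrap 95% CI for a Pearson correlation.

    Resamples participant pairs with replacement, recomputes r for
    each resample, and takes the 2.5th/97.5th percentiles. If the CI
    excludes zero, the correlation is significant at p < 0.05
    (two-sided), in the spirit of the abstract's bootstrap test.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    idx = rng.integers(0, n, size=(n_boot, n))  # resampled participant indices
    rs = np.array([np.corrcoef(x[i], y[i])[0, 1] for i in idx])
    lo, hi = np.percentile(rs, [2.5, 97.5])
    return np.corrcoef(x, y)[0, 1], (lo, hi)  # observed r and 95% CI
```

For example, for the reported SWAP correlation (r = 0.63, n = 10), significance would correspond to the bootstrap interval for r lying entirely above zero.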
Conditioned place preferences in humans using virtual reality.
Astur, Robert S; Carew, Andrew W; Deaton, Bonnie E
2014-07-01
To extend a standard paradigm of conditioning in nonhumans to humans, we created a virtual reality (VR) conditioned place preference task with real-life food rewards. Undergraduates were placed into a VR environment consisting of 2 visually distinct rooms. On Day 1, participants underwent 6 pairing sessions in which they were confined to one of the two rooms and explored the VR environment. Room A was paired with real-life M&Ms for 3 sessions, and Room B was paired with no food for 3 sessions. Day 2 was the test day, administered the next day, and participants were given free access to the entire VR environment for 5 min. In Experiment 1, participants were food restricted, and we observed that on the test day they displayed a significant conditioned place preference for the VR room previously paired with food (p<0.001). Additionally, they displayed a significant explicit preference for the M&M-paired room in a forced choice of "Which room do you like best?". In Experiment 2, when participants were not food restricted, there was no evidence of a place preference, either implicit (e.g. dwell time) or explicit. Hence, we show that we can reliably establish a place preference in humans, but that the preference is contingent on the participants' hunger state. Future research will examine the extent to which these preferences can be blocked or extinguished, as well as whether these preferences are evident using other reinforcers. Copyright © 2014 Elsevier B.V. All rights reserved.
Amsterdam, A; Berkowitz, A; Nimrod, A; Kohen, F
1980-01-01
The temporal relationship between redistribution of receptors to lutropin (luteinizing hormone)/human chorionic gonadotropin in cultured rat ovarian granulosa cells and the cellular response to hormonal challenge were studied. Visualization of receptor-bound human chorionic gonadotropin by indirect immunofluorescence, with hormone-specific antibodies after fixation with 2% formaldehyde, revealed the existence of small clusters around the entire cell circumference 5--20 min after exposure to the hormone at 37 degrees C. Such small receptor aggregates were also evident if hormone incubation was at 4 degrees C or if cells were fixed with 2% formaldehyde before incubation. Larger clusters were evident after prolonged incubation with the hormone (2--4 hr) at 37 degrees C. The latter change coincided with diminished cyclic AMP accumulation in response to challenge with fresh hormone. When the fixation step was omitted and antibodies to human chorionic gonadotropin were applied after hormonal binding, acceleration of both receptor clustering and the desensitization process was observed. This maneuver also induced capping of the hormone receptors. In contrast, monovalent Fab' fragments of the antibodies were without effect. Internalization of the bound hormone in lysosomes, and subsequent degradation, was evident 8 hr after hormonal application and was not accelerated by the antibodies. It is suggested that clustering of the luteinizing hormone receptors may play a role in cellular responsiveness to the hormone. Massive aggregation of the receptors may desensitize the cell by interfering with coupling to adenylate cyclase. PMID:6251459
ERIC Educational Resources Information Center
Bulf, Hermann; de Hevia, Maria Dolores; Macchi Cassia, Viola
2016-01-01
Numbers are represented as ordered magnitudes along a spatially oriented number line. While culture and formal education modulate the direction of this number-space mapping, it is a matter of debate whether its emergence is entirely driven by cultural experience. By registering 8-9-month-old infants' eye movements, this study shows that numerical…
2002-11-14
KENNEDY SPACE CENTER, FLA. -- Workers on Launch Pad 39A perform checks on Endeavour's oxygen flex hose fitting through manual inspection and using helium detectors. Visual inspection found a deformity in the flex line braid where it connects to rigid tubing. The entire flex hose assembly and bulkhead fitting were removed early today, and work is under way to complete the installation of a replacement.
Rasquin, F
2007-01-01
Crystalline retinopathy is characterized by intraretinal crystalline deposits that, according to their etiology, can be localized in the macular area or found throughout the entire retina. These deposits may or may not be associated with visual loss and electrophysiological perturbations. Among the toxic drugs leading to this retinopathy are tamoxifen, canthaxanthine, methoxyflurane, talc and nitrofurantoin. A detailed description of tamoxifen and canthaxanthine toxicity is reported in this chapter.
Forest Plots in Excel: Moving beyond a Clump of Trees to a Forest of Visual Information
ERIC Educational Resources Information Center
Derzon, James H.; Alford, Aaron A.
2013-01-01
Forest plots provide an effective means of presenting a wealth of information in a single graphic. Whether used to illustrate multiple results in a single study or the cumulative knowledge of an entire field, forest plots have become an accepted and generally understood way of presenting many estimates simultaneously. This article explores…
A PDE approach for quantifying and visualizing tumor progression and regression
NASA Astrophysics Data System (ADS)
Sintay, Benjamin J.; Bourland, J. Daniel
2009-02-01
Quantification of changes in tumor shape and size allows physicians to determine the effectiveness of various treatment options, adapt treatment, predict outcome, and map potential problem sites. Conventional methods are often based on metrics such as volume, diameter, or maximum cross-sectional area. This work seeks to improve the visualization and analysis of tumor changes by simultaneously analyzing changes in the entire tumor volume. This method utilizes an elliptic partial differential equation (PDE) to provide a roadmap of boundary displacement that does not suffer from the discontinuities associated with other measures such as Euclidean distance. Streamline pathways defined by Laplace's equation (a commonly used PDE) are used to track tumor progression and regression at the tumor boundary. Laplace's equation is particularly useful because it provides a smooth, continuous solution that can be evaluated with sub-pixel precision on variable grid sizes. Several metrics are demonstrated, including maximum, average, and total regression and progression. This method provides many advantages over conventional means of quantifying change in tumor shape because it is observer independent, stable for highly unusual geometries, and provides an analysis of the entire three-dimensional tumor volume.
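The boundary-displacement idea can be sketched numerically. Below is a minimal, hypothetical 2-D version: Laplace's equation is solved by Jacobi iteration with the potential fixed at 0 on one tumor boundary and 1 on the other; streamlines of the resulting potential's gradient would then connect corresponding points on the two boundaries. The mask construction, iteration scheme, and grid size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def laplace_between(mask_inner, mask_outer, n_iter=5000):
    """Solve Laplace's equation on a grid between two boundaries.

    Potential is fixed at 0 on `mask_inner` (e.g. the post-treatment
    tumor contour) and 1 on `mask_outer` (e.g. the pre-treatment
    contour). The interior converges to a smooth harmonic field whose
    gradient streamlines map one boundary onto the other.
    """
    phi = np.full(mask_inner.shape, 0.5)
    phi[mask_inner] = 0.0
    phi[mask_outer] = 1.0
    for _ in range(n_iter):
        # Jacobi update: each point becomes the mean of its four
        # neighbours, which converges to the harmonic solution.
        new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        # Re-impose the Dirichlet boundary conditions every sweep.
        new[mask_inner] = 0.0
        new[mask_outer] = 1.0
        phi = new
    return phi
```

Tracing a path along the gradient of the solved potential from the inner to the outer boundary then yields a continuous, discontinuity-free displacement estimate, even for highly irregular geometries.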
Chung-Davidson, Yu-Wen; Davidson, Peter J.; Scott, Anne M.; Walaszczyk, Erin J.; Brant, Cory O.; Buchinger, Tyler; Johnson, Nicholas S.; Li, Weiming
2014-01-01
Biliary atresia is a rare disease of infancy, with an estimated 1 in 15,000 frequency in the southeast United States, but more common in East Asian countries, with a reported frequency of 1 in 5,000 in Taiwan. Although much is known about the management of biliary atresia, its pathogenesis is still elusive. The sea lamprey (Petromyzon marinus) provides a unique opportunity to examine the mechanism and progression of biliary degeneration. Sea lamprey develop through three distinct life stages: larval, parasitic, and adult. During the transition from larvae to parasitic juvenile, sea lamprey undergo metamorphosis with dramatic reorganization and remodeling in external morphology and internal organs. In the liver, the entire biliary system is lost, including the gall bladder and the biliary tree. A newly-developed method called “CLARITY” was modified to clarify the entire liver and the junction with the intestine in metamorphic sea lamprey. The process of biliary degeneration was visualized and discerned during sea lamprey metamorphosis by using laser scanning confocal microscopy. This method provides a powerful tool to study biliary atresia in a unique animal model.
Noninvasive studies of human visual cortex using neuromagnetic techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aine, C.J.; George, J.S.; Supek, S.
1990-01-01
The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs.
Poggel, Dorothe A.; Treutwein, Bernhard; Sabel, Bernhard A.; Strasburger, Hans
2015-01-01
The issue of how basic sensory and temporal processing are related is still unresolved. We studied temporal processing, as assessed by simple visual reaction times (RT) and double-pulse resolution (DPR), in patients with partial vision loss after visual pathway lesions and investigated whether vision restoration training (VRT), a training program designed to improve light detection performance, would also affect temporal processing. Perimetric and campimetric visual field tests as well as maps of DPR thresholds and RT were acquired before and after a 3-month training period with VRT. Patient performance was compared to that of age-matched healthy subjects. Intact visual field size increased during training. Averaged across the entire visual field, DPR remained constant while RT improved slightly. However, in transition zones between the blind and intact areas (areas of residual vision), where patients had shown between 20 and 80% stimulus detection probability in pre-training visual field tests, both DPR and RT improved markedly. The magnitude of improvement depended on the defect depth (or degree of intactness) of the respective region at baseline. Inter-individual training outcome variability was very high, with some patients showing little change and others showing performance approaching that of healthy controls. Training-induced improvement of light detection in patients with visual field loss thus generalized to dynamic visual functions. The findings suggest that similar neural mechanisms may underlie the impairment and subsequent training-induced functional recovery of both light detection and temporal processing. PMID:25717307
Cognitive issues in searching images with visual queries
NASA Astrophysics Data System (ADS)
Yu, ByungGu; Evens, Martha W.
1999-01-01
In this paper, we propose our image indexing technique and visual query processing technique. Our mental images are different from the actual retinal images and many things, such as personal interests, personal experiences, perceptual context, the characteristics of spatial objects, and so on, affect our spatial perception. These private differences are propagated into our mental images and so our visual queries become different from the real images that we want to find. This is a hard problem and few people have tried to work on it. In this paper, we survey the human mental imagery system, the human spatial perception, and discuss several kinds of visual queries. Also, we propose our own approach to visual query interpretation and processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A
Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we will describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding of how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
ERIC Educational Resources Information Center
Forzano, Lori-Ann B.; Chelonis, John J.; Casey, Caitlin; Forward, Marion; Stachowiak, Jacqueline A.; Wood, Jennifer
2010-01-01
Self-control can be defined as the choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer, and impulsiveness as the opposite. Previous research suggests that exposure to visual food cues affects adult humans' self-control. Previous research also suggests that food deprivation decreases adult humans' self-control. The…
Lack of oblique astigmatism in the chicken eye.
Maier, Felix M; Howland, Howard C; Ohlendorf, Arne; Wahl, Siegfried; Schaeffel, Frank
2015-04-01
Primate eyes display considerable oblique off-axis astigmatism which could provide information on the sign of defocus that is needed for emmetropization. The pattern of peripheral astigmatism is not known in the chicken eye, a common model of myopia. Peripheral astigmatism was mapped out over the horizontal visual field in three chickens, 43 days old, and in three near-emmetropic human subjects, average age 34.7 years, using infrared photoretinoscopy. There were no differences in astigmatism between humans and chickens in the central visual field (chicks -0.35D, humans -0.65D, n.s.) but large differences in the periphery (i.e. astigmatism at 40° in the temporal visual field: humans -4.21D, chicks -0.63D, p<0.001, unpaired t-test). The lack of peripheral astigmatism in chicks was not due to differences in corneal shape. Perhaps related to their superior peripheral optics, we found that chickens also had excellent visual performance in the far periphery. Using an automated optokinetic nystagmus paradigm, no difference was observed in spatial visual performance with vision restricted to either the central 67° of the visual field or to the periphery beyond 67°. Accommodation was elicited by stimuli presented far out in the visual field. Transscleral images of single infrared LEDs showed no sign of peripheral astigmatism. The chick may be the first terrestrial vertebrate described to lack oblique astigmatism. Since corneal shape cannot account for the difference in astigmatism in humans and chicks, it must trace back to the design of the crystalline lens. The lack of peripheral astigmatism in chicks also excludes a role in emmetropization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visual Image Sensor Organ Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.
2014-01-01
This innovation is a system that augments human vision through a technique called "Sensing Super-position" using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates visual and other sensors (i.e., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment the current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
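The core "image to frequency-time audio map" transform described above can be sketched in a few lines. This is a hedged illustration of the general sonification idea, not NASA's VISOR algorithm: the frequency range, geometric frequency spacing, and left-to-right column sweep are all assumptions for the example.

```python
import numpy as np

def image_to_audio(img, duration=1.0, sr=8000, f_lo=200.0, f_hi=3000.0):
    """Sonify a 2-D brightness map as a frequency-time audio signal.

    Columns of the image sweep across time, rows map to sinusoid
    frequencies (top row = highest pitch), and pixel brightness sets
    each sinusoid's amplitude. Returns a peak-normalized waveform.
    """
    img = np.asarray(img, float)
    n_rows, n_cols = img.shape
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    # Which image column is "active" at each audio sample.
    cols = np.minimum((t / duration * n_cols).astype(int), n_cols - 1)
    # Geometrically spaced frequencies, descending so the top row is
    # the highest pitch (an assumption, mirroring visual "up").
    freqs = np.geomspace(f_hi, f_lo, n_rows)
    audio = np.zeros_like(t)
    for r in range(n_rows):
        audio += img[r, cols] * np.sin(2 * np.pi * freqs[r] * t)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```

A bright diagonal line in the image, for instance, becomes a descending pitch sweep; a listener can learn to decode such patterns, which is the premise of sensory-substitution devices like VISOR.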
Visual preference in a human-reared agile gibbon (Hylobates agilis).
Tanaka, Masayuki; Uchikoshi, Makiko
2010-01-01
Visual preference was evaluated in a male agile gibbon. The subject was raised by humans immediately after birth, but lived with his biological family from one year of age. Visual preference was assessed using a free-choice task in which five or six photographs of different primate species, including humans, were presented on a touch-sensitive screen. The subject touched one of them. Food rewards were delivered irrespective of the subject's responses. We prepared two types of stimulus sets. With set 1, the subject touched photographs of humans more frequently than those of other species, recalling previous findings in human-reared chimpanzees. With set 2, photographs of nine species of gibbons were presented. Chimpanzees touched photographs of white-handed gibbons more than those of other gibbon species. The gibbon subject initially touched photographs of agile gibbons more than white-handed gibbons, but after one and two years his choice patterns resembled the chimpanzees'. The results suggest that, as in chimpanzees, visual preferences of agile gibbons are not genetically programmed but develop through social experience during infancy.
ERIC Educational Resources Information Center
Fischer, Quentin S.; Aleem, Salman; Zhou, Hongyi; Pham, Tony A.
2007-01-01
Prolonged visual deprivation from early childhood to maturity is believed to cause permanent visual impairment. However, there have been case reports of substantial improvement of binocular vision in human adults following lifelong visual impairment or deprivation. These observations, together with recent findings of adult ocular dominance…
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
Dogs respond appropriately to cues of humans' attentional focus.
Virányi, Zsófia; Topál, József; Gácsi, Márta; Miklósi, Adám; Csányi, Vilmos
2004-05-31
Dogs' ability to recognise cues of human visual attention was studied in different experiments. Study 1 was designed to test the dogs' responsiveness to their owner's tape-recorded verbal commands (Down!) while the Instructor (who was the owner of the dog) was facing either the dog or a human partner or none of them, or was visually separated from the dog. Results show that dogs were more ready to follow the command if the Instructor attended them during instruction compared to situations when the Instructor faced the human partner or was out of sight of the dog. Importantly, however, dogs showed intermediate performance when the Instructor was orienting into 'empty space' during the re-played verbal commands. This suggests that dogs are able to differentiate the focus of human attention. In Study 2 the same dogs were offered the possibility to beg for food from two unfamiliar humans whose visual attention (i.e. facing the dog or turning away) was systematically varied. The dogs' preference for choosing the attentive person shows that dogs are capable of using visual cues of attention to evaluate the human actors' responsiveness to solicit food-sharing. The dogs' ability to understand the communicatory nature of the situations is discussed in terms of their social cognitive skills and unique evolutionary history.
Development of Preference for Conspecific Faces in Human Infants
ERIC Educational Resources Information Center
Sanefuji, Wakako; Wada, Kazuko; Yamamoto, Tomoka; Mohri, Ikuko; Taniike, Masako
2014-01-01
Previous studies have proposed that humans may be born with mechanisms that attend to conspecifics. However, as previous studies have relied on stimuli featuring human adults, it remains unclear whether infants attend only to adult humans or to the entire human species. We found that 1-month-old infants (n = 23) were able to differentiate between…
Overview of Human-Centric Space Situational Awareness (SSA) Science and Technology (S&T)
NASA Astrophysics Data System (ADS)
Ianni, J.; Aleva, D.; Ellis, S.
2012-09-01
A number of organizations within the government, industry, and academia are researching ways to help humans understand and react to events in space. The problem is both helped and complicated by the fact that there are numerous data sources that need to be planned (i.e., tasked), collected, processed, analyzed, and disseminated. A large part of the research is in support of the Joint Space Operations Center (JSpOC), National Air and Space Intelligence Center (NASIC), and similar organizations. Much recent research has specifically targeted the JSpOC Mission System (JMS), which has provided a unifying software architecture. This paper will first outline areas of science and technology (S&T) related to human-centric space situational awareness (SSA) and space command and control (C2), including:
1. Object visualization - especially data fused from disparate sources, and satellite catalog visualizations that convey the physical relationships between space objects.
2. Data visualization - improving data trend analysis as in visual analytics and interactive visualization; e.g., satellite anomaly trends over time, space weather visualization, dynamic visualizations.
3. Workflow support - human-computer interfaces that encapsulate multiple computer services (i.e., algorithms, programs, applications) into a
4. Command and control - e.g., tools that support course of action (COA) development and selection, tasking for satellites and sensors, etc.
5. Collaboration - improving the ability of individuals or teams to work with others; e.g., video teleconferencing, shared virtual spaces, file sharing, virtual white-boards, chat, and knowledge search.
6. Hardware/facilities - e.g., optimal layouts for operations centers, ergonomic workstations, immersive displays, interaction technologies, and mobile computing.
Secondly, we will provide a survey of organizations working in these areas and suggest where more attention may be needed.
Although no detailed master plan exists for human-centric SSA and C2, we see little redundancy among the groups supporting SSA human factors at this point.
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto
2016-01-01
We constructed a four-dimensional human model that is able to visualize the structure of a whole human body, including the inner structures, in real time, allowing us to analyze human dynamic changes in the temporal, spatial and quantitative domains. To verify that our model generates changes consistent with real human body dynamics, we measured a participant's skin expansion and compared it with that of the model under the same body movement. We also made a contribution to the field of orthopedics by devising a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed and several critical image quality issues identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS-100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.
Estevez, José; Kaidonis, Georgia; Henderson, Tim; Craig, Jamie E; Landers, John
2018-01-01
Visual impairment significantly reduces both the length and quality of life, but little is known of its impact in Indigenous Australians. To investigate the association of disease-specific causes of visual impairment with all-cause mortality. A retrospective cohort analysis. A total of 1347 Indigenous Australians aged over 40 years. Participants visiting remote medical clinics underwent clinical examinations including visual acuity, subjective refraction and slit-lamp examination of the anterior and posterior segments. The major ocular cause of visual impairment was determined. Patients were assessed periodically in these remote clinics for the succeeding 10 years after recruitment. Mortality rates were obtained from relevant departments. All-cause 10-year mortality and its association with disease-specific causes of visual impairment. The all-cause mortality rate for the entire cohort was 29.3% at the 10-year completion of follow-up. Of those with visual impairment, the overall mortality rate was 44.9%. The mortality rates differed for those with visual impairment due to cataract (59.8%), diabetic retinopathy (48.4%), trachoma (46.6%), 'other' (36.2%) and refractive error (33.4%) (P < 0.0001). Only those with visual impairment from diabetic retinopathy were more likely to die during the 10 years of follow-up when compared with those without visual impairment (HR 1.70; 95% CI, 1.00-2.87; P = 0.049). Visual impairment was associated with all-cause mortality in a cohort of Indigenous Australians. However, diabetic retinopathy was the only ocular disease that significantly increased the risk of mortality. Visual impairment secondary to diabetic retinopathy may be an important predictor of mortality. © 2017 Royal Australian and New Zealand College of Ophthalmologists.
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates signals from visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. It can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, providing the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns.
The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
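A minimal sketch of this kind of image-to-sound mapping is shown below: image columns are scanned left to right over time, rows are assigned frequencies, and pixel brightness drives oscillator amplitude. The sample rate, scan duration, and frequency range are arbitrary assumptions for illustration, not the VISOR implementation.

```python
import numpy as np

def image_to_sound(image, fs=8000, col_duration=0.05, f_min=200.0, f_max=4000.0):
    """Scan image columns left to right; each row drives a sine oscillator
    whose frequency runs from f_min (bottom row) to f_max (top row) and
    whose amplitude is the pixel brightness (0..1)."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(f_max, f_min, n_rows)         # top row = highest pitch
    t = np.arange(int(fs * col_duration)) / fs
    tones = np.sin(2 * np.pi * freqs[:, None] * t)    # one oscillator per row
    # For each column, mix the row oscillators weighted by brightness.
    chunks = [image[:, c] @ tones for c in range(n_cols)]
    audio = np.concatenate(chunks)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# A 4x3 test pattern: one bright pixel per column.
img = np.zeros((4, 3))
img[0, 0] = img[2, 1] = img[3, 2] = 1.0
wave = image_to_sound(img)   # 3 columns x 0.05 s = 0.15 s of audio
```

A listener hears each column as a brief chord whose pitch content encodes the vertical positions of bright pixels, which is the core idea behind such auditory substitution systems.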
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
Pinto, Joshua G. A.; Jones, David G.; Williams, C. Kate; Murphy, Kathryn M.
2015-01-01
Although many potential neuroplasticity based therapies have been developed in the lab, few have translated into established clinical treatments for human neurologic or neuropsychiatric diseases. Animal models, especially of the visual system, have shaped our understanding of neuroplasticity by characterizing the mechanisms that promote neural changes and defining timing of the sensitive period. The lack of knowledge about development of synaptic plasticity mechanisms in human cortex, and about alignment of synaptic age between animals and humans, has limited translation of neuroplasticity therapies. In this study, we quantified expression of a set of highly conserved pre- and post-synaptic proteins (Synapsin, Synaptophysin, PSD-95, Gephyrin) and found that synaptic development in human primary visual cortex (V1) continues into late childhood. Indeed, this is many years longer than suggested by neuroanatomical studies and points to a prolonged sensitive period for plasticity in human sensory cortex. In addition, during childhood we found waves of inter-individual variability that are different for the four proteins and include a stage during early development (<1 year) when only Gephyrin has high inter-individual variability. We also found that pre- and post-synaptic protein balances develop quickly, suggesting that maturation of certain synaptic functions happens within the first year or two of life. A multidimensional analysis (principal component analysis) showed that most of the variance was captured by the sum of the four synaptic proteins. We used that sum to compare development of human and rat visual cortex and identified a simple linear equation that provides robust alignment of synaptic age between humans and rats. Alignment of synaptic ages is important for age-appropriate targeting and effective translation of neuroplasticity therapies from the lab to the clinic. PMID:25729353
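The "simple linear equation" aligning synaptic ages amounts to a least-squares line through paired human/rat measurements. The abstract does not report the coefficients or data, so the sketch below fits entirely hypothetical paired ages to show the form of such an alignment:

```python
import numpy as np

# Hypothetical paired "synaptic ages": the values below are illustrative
# placeholders only; the study reports a linear human<->rat alignment
# but the abstract does not give its coefficients.
rat_age_days = np.array([10, 20, 35, 60, 90], dtype=float)
human_age_years = np.array([0.3, 0.8, 1.9, 4.1, 6.8])

# Least-squares fit of human_age = a * rat_age + b.
a, b = np.polyfit(rat_age_days, human_age_years, deg=1)

def rat_to_human(rat_days):
    """Map a rat synaptic age (days) onto the human timeline (years)."""
    return a * rat_days + b
```

Once fitted on real protein-sum trajectories, such a mapping lets a treatment window established in rats be translated to an age-appropriate human window.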
Salient sounds activate human visual cortex automatically.
McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A
2013-05-22
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
Are New Image Quality Figures of Merit Needed for Flat Panel Displays?
1998-06-01
The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988) adopted the MTFA as the standard image quality figure of merit.
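The MTFA (Modulation Transfer Function Area) is the area between a display's MTF and the observer's threshold-contrast curve, up to the frequency where they cross. A minimal numeric sketch with illustrative (not measured) curves:

```python
import numpy as np

def mtfa(freqs, mtf, threshold):
    """Modulation Transfer Function Area: area between the MTF and the
    visual threshold curve over the region where the MTF lies above it."""
    gap = np.clip(mtf - threshold, 0.0, None)
    # Trapezoidal integration, written out explicitly.
    return float(np.sum((gap[1:] + gap[:-1]) / 2 * np.diff(freqs)))

# Illustrative curves: an exponentially falling MTF and a rising
# threshold-contrast curve; the shapes are assumptions for the example.
f = np.linspace(0.0, 30.0, 301)          # spatial frequency, cycles/degree
mtf = np.exp(-f / 10.0)
threshold = 0.01 * f
area = mtfa(f, mtf, threshold)
```

A larger MTFA means the display delivers more above-threshold contrast across spatial frequencies, which is why it serves as a single-number image quality figure of merit.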
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them for attending to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or more specific points in space. Such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM), trained offline using labeled data from subjects with healthy eyes, is employed for gaze behavior classification. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
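The two ingredients named in the abstract can be sketched as follows: a polynomial feature built from the pupil-center-corneal-reflection (PCCR) vector, and classification of gaze behaviour by likelihood under per-class Gaussian models (a single-component stand-in for the paper's trained GMM). All means, variances, and coordinates below are hypothetical:

```python
import numpy as np

def pccr_poly_features(pupil, glint):
    """Second-order polynomial expansion of the PCCR vector (dx, dy)."""
    dx, dy = pupil[0] - glint[0], pupil[1] - glint[1]
    return np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])

def log_gaussian(x, mean, var):
    """Log-density of an isotropic Gaussian with scalar variance `var`."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(2 * np.pi * var))

def classify(features, class_models):
    """Pick the class whose Gaussian assigns the highest likelihood."""
    scores = {name: log_gaussian(features, m, v)
              for name, (m, v) in class_models.items()}
    return max(scores, key=scores.get)

# Hypothetical models for two gaze behaviours (e.g. fixating the left vs
# right grating in a preferential-looking acuity test); in the paper these
# would be GMMs trained offline on labeled healthy-eye data.
models = {
    "left":  (np.array([1.0, -2.0, 0.0, 0.0, 4.0, 0.0]), 1.0),
    "right": (np.array([1.0,  2.0, 0.0, 0.0, 4.0, 0.0]), 1.0),
}
feat = pccr_poly_features(pupil=(12.0, 5.0), glint=(14.1, 5.0))
```

Because the PCCR vector is largely invariant to small head movements, classifying its polynomial features avoids the per-subject fixation calibration that infants cannot perform.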
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Timing of target discrimination in human frontal eye fields.
O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent
2004-01-01
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
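The TMS impairment above is reported as a reduced d'. For a yes/no detection task, d' is the difference of inverse-normal-transformed hit and false-alarm rates, computable with the Python standard library (a standard signal-detection formula, shown here as a generic illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(FA) for a yes/no detection task.
    Rates of exactly 0 or 1 must be corrected before calling (e.g. the
    common 1/(2N) adjustment), since the inverse CDF diverges there."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: H = 0.84 and FA = 0.16 give d' close to 2.
sensitivity = d_prime(0.84, 0.16)
```

Unlike raw accuracy, d' separates perceptual sensitivity from response bias, which is why it is the preferred measure for a manipulation like TMS that could affect either.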
Optoacoustic imaging in five dimensions
NASA Astrophysics Data System (ADS)
Deán-Ben, X. L.; Gottschalk, Sven; Fehm, Thomas F.; Razansky, Daniel
2015-03-01
We report on an optoacoustic imaging system capable of acquiring volumetric multispectral optoacoustic data in real time. The system is based on simultaneous acquisition of optoacoustic signals from 256 different tomographic projections by means of a spherical matrix array. Thereby, volumetric reconstructions can be done at high frame rate, limited only by the pulse repetition rate of the laser. The developed tomographic approach presents important advantages over previously reported systems that use scanning for attaining volumetric optoacoustic data. First, dynamic processes, such as the biodistribution of optical biomarkers, can be monitored in the entire volume of interest. Second, out-of-plane and motion artifacts that could degrade the image quality when imaging living specimens can be avoided. Finally, real-time 3D performance can save the time required for experimental and clinical observations. The feasibility of optoacoustic imaging in five dimensions, i.e. real-time acquisition of volumetric datasets at multiple wavelengths, is reported. In this way, volumetric images of spectrally resolved chromophores are rendered in real time, offering unparalleled imaging performance among current bio-imaging modalities. This performance is subsequently showcased by video-rate visualization of in vivo hemodynamic changes in mouse brain and handheld visualization of blood oxygenation in deep human vessels. The newly discovered capacities open new prospects for translating optoacoustic technology into a high-performance imaging modality for biomedical research and clinical practice, with multiple applications envisioned, from cardiovascular and cancer diagnostics to neuroimaging and ophthalmology.
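Resolving chromophores from multi-wavelength optoacoustic data typically comes down to linear spectral unmixing: the per-wavelength signal at a voxel is modeled as an absorption-spectra matrix times chromophore concentrations and solved by least squares. The sketch below uses placeholder absorption values, not real HbO2/Hb spectra, and is a generic illustration rather than this system's reconstruction pipeline:

```python
import numpy as np

# Rows: laser wavelengths; columns: relative absorption of [HbO2, Hb].
# The numbers are illustrative placeholders only.
spectra = np.array([
    [0.29, 1.79],   # 700 nm
    [0.39, 1.30],   # 730 nm
    [0.59, 1.55],   # 760 nm
    [0.82, 0.82],   # 800 nm (isosbestic-like point)
    [1.06, 0.78],   # 850 nm
])

def unmix(signals):
    """Least-squares chromophore concentrations from per-wavelength signals."""
    conc, *_ = np.linalg.lstsq(spectra, signals, rcond=None)
    return conc

# Synthetic voxel: 70% HbO2, 30% Hb, so oxygen saturation sO2 = 0.7.
true_conc = np.array([0.7, 0.3])
measured = spectra @ true_conc
hbo2, hb = unmix(measured)
so2 = hbo2 / (hbo2 + hb)
```

Applying this per voxel across a volumetric multi-wavelength frame is what turns the "five-dimensional" dataset into real-time oxygenation maps.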
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
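The blood-flow texture described above pairs a height field with a normal map. Normals can be derived from the height field by finite differences; the paper does this per-frame on the GPU, but the same operation is sketched here on the CPU with NumPy as an assumption-level illustration:

```python
import numpy as np

def height_to_normals(height, scale=1.0):
    """Per-texel surface normals from a height field via central
    differences. Returns an (H, W, 3) array of unit normals (x, y, z)."""
    dz_dx = np.gradient(height, axis=1) * scale
    dz_dy = np.gradient(height, axis=0) * scale
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(height)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

# Flat height field -> every normal points straight up (0, 0, 1).
flat = np.zeros((4, 4))
n = height_to_normals(flat)
```

Regenerating the normal map from the animated height field each step is what lets the lighting on the flowing blood respond to the simulation without any remodeling of the wound geometry.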
Büttner, Oliver B; Wieber, Frank; Schulz, Anna Maria; Bayer, Ute C; Florack, Arnd; Gollwitzer, Peter M
2014-10-01
Mindset theory suggests that a deliberative mindset entails openness to information in one's environment, whereas an implemental mindset entails filtering of information. We hypothesized that this open- versus closed-mindedness influences individuals' breadth of visual attention. In Studies 1 and 2, we induced an implemental or deliberative mindset, and measured breadth of attention using participants' length estimates of x-winged Müller-Lyer figures. Both studies demonstrate a narrower breadth of attention in the implemental mindset than in the deliberative mindset. In Study 3, we manipulated participants' mindsets and measured the breadth of attention by tracking eye movements during scene perception. Implemental mindset participants focused on foreground objects, whereas deliberative mindset participants attended more evenly to the entire scene. Our findings imply that deliberative versus implemental mindsets already operate at the level of visual attention. © 2014 by the Society for Personality and Social Psychology, Inc.
Selective attention within the foveola.
Poletti, Martina; Rucci, Michele; Carrasco, Marisa
2017-10-01
Efficient control of attentional resources and high-acuity vision are both fundamental for survival. Shifts in visual attention are known to covertly enhance processing at locations away from the center of gaze, where visual resolution is low. It is unknown, however, whether selective spatial attention operates where the observer is already looking-that is, within the high-acuity foveola, the small yet disproportionally important rod-free region of the retina. Using new methods for precisely controlling retinal stimulation, here we show that covert attention flexibly improves and speeds up both detection and discrimination at loci only a fraction of a degree apart within the foveola. These findings reveal a surprisingly precise control of attention and its involvement in fine spatial vision. They show that the commonly studied covert shifts of attention away from the fovea are the expression of a global mechanism that exerts its action across the entire visual field.
Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study
Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.
2010-01-01
Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597
Effects of strategy on visual working memory capacity
Bengson, Jesse J.; Luck, Steven J.
2015-01-01
Substantial evidence suggests that individual differences in estimates of working memory capacity reflect differences in how effectively people use their intrinsic storage capacity. This suggests that estimated capacity could be increased by instructions that encourage more effective encoding strategies. The present study tested this by giving different participants explicit strategy instructions in a change detection task. Compared to a condition in which participants were simply told to do their best, we found that estimated capacity was increased for participants who were instructed to remember the entire visual display, even at set sizes beyond their capacity. However, no increase in estimated capacity was found for a group that was told to focus on a subset of the items in supracapacity arrays. This finding confirms the hypothesis that encoding strategies may influence visual working memory performance, and it is contrary to the hypothesis that the optimal strategy is to filter out any items beyond the storage capacity. PMID:26139356
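Capacity in change-detection tasks of this kind is commonly estimated with Cowan's K, computed from hit and false-alarm rates at each set size. The abstract does not state the exact estimator used here, so the following is the standard single-probe formula as a generic illustration:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K capacity estimate for single-probe change detection:
    K = N * (H - FA), where N is the number of items in the array."""
    return set_size * (hit_rate - false_alarm_rate)

# E.g. with arrays of 8 items, H = 0.75 and FA = 0.25 give K = 4 items.
k = cowan_k(8, 0.75, 0.25)
```

Because K is derived from performance rather than reported strategy, an instruction that raises K (as the whole-display instruction did) implies participants genuinely encoded more items.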
Retinal Origin of Direction Selectivity in the Superior Colliculus
Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua
2017-01-01
Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394
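Direction selectivity of the kind measured above is conventionally quantified with a direction selectivity index comparing responses to the preferred and opposite (null) directions. This is the standard metric in the field, shown here as an illustration rather than the study's exact analysis:

```python
def direction_selectivity_index(r_pref, r_null):
    """DSI = (R_pref - R_null) / (R_pref + R_null): 0 means no direction
    preference, 1 means the cell responds only to the preferred direction."""
    return (r_pref - r_null) / (r_pref + r_null)

dsi = direction_selectivity_index(r_pref=30.0, r_null=10.0)
```

Comparing DSI distributions between wild-type and retina-altered transgenic mice is the kind of analysis that reveals the reported drop in collicular selectivity.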
NASA Technical Reports Server (NTRS)
Stewart, E. C.; Cannaday, R. L.
1973-01-01
A comparison of the results from a fixed-base, six-degree-of-freedom simulator and a moving-base, three-degree-of-freedom simulator was made for a close-in, EVA-type maneuvering task in which visual cues of a target spacecraft were used for guidance. The maneuvering unit (the foot-controlled maneuvering unit of Skylab Experiment T020) employed an on-off acceleration command control system operated entirely by the feet. Maneuvers by two test subjects were made for the fixed-base simulator in six and three degrees of freedom and for the moving-base simulator in uncontrolled and controlled, EVA-type visual cue conditions. Comparisons of pilot ratings and 13 different quantitative parameters from the two simulators are made. Different results were obtained from the two simulators, and the effects of limited degrees of freedom and uncontrolled visual cues are discussed.
Trial and error: how the unclonable human mitochondrial genome was cloned in yeast.
Bigger, Brian W; Liao, Ai-Yin; Sergijenko, Ana; Coutelle, Charles
2011-11-01
Development of a human mitochondrial gene delivery vector is a critical step in the ability to treat diseases arising from mutations in mitochondrial DNA. Although we have previously cloned the mouse mitochondrial genome in its entirety and developed it as a mitochondrial gene therapy vector, the human mitochondrial genome has been dubbed unclonable in E. coli, due to regions of instability in the D-loop and tRNA(Thr) gene. We tested multi- and single-copy vector systems for cloning human mitochondrial DNA in E. coli and Saccharomyces cerevisiae, including transformation-associated recombination. Human mitochondrial DNA is unclonable in E. coli and cannot be retained in multi- or single-copy vectors under any conditions. It was, however, possible to clone and stably maintain the entire human mitochondrial genome in yeast as long as a single-copy centromeric plasmid was used. D-loop and tRNA(Thr) were both stable and unmutated. This is the first report of cloning the entire human mitochondrial genome and the first step in developing a gene delivery vehicle for human mitochondrial gene therapy.
Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis
2014-07-01
Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.
A Visual Analytic for Improving Human Terrain Understanding
2013-06-01
[Only reference and figure fragments of this record survive extraction:] Kim, S., Minotra, D., Strater, L., Cuevas, and Colombo, D., "Knowledge Visualization to Enhance Human-Agent Situation Awareness within a Computational…"; (1971), "A General Coefficient of Similarity and Some of Its Properties," Biometrics, Vol. 27, No. 4, pp. 857-871; [14] Coppock, S. & Mazlack, L.…; "…and allow human interpretation." A component-overview figure (HDPT Component Overview) lists a PostgreSQL database and an Apache Tomcat web server behind a global graph web application.
Splitting Attention across the Two Visual Fields in Visual Short-Term Memory
ERIC Educational Resources Information Center
Delvenne, Jean-Francois; Holt, Jessica L.
2012-01-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In…
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system are taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and the field of stereoscopic 3D images applications. PMID:24489403
Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada
2013-01-01
Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
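The response-entropy analysis described above can be illustrated with the standard Shannon entropy of a discrete response distribution; treating phosphene reports as counts per response category is a hypothetical simplification of the study's actual analysis:

```python
import numpy as np

def response_entropy(counts):
    """Shannon entropy (bits) of a discrete response distribution,
    e.g. counts of phosphene reports per response category."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                       # ignore empty categories
    return float(-(p * np.log2(p)).sum())

# a uniform two-way response (phosphene vs. none) carries 1 bit
print(response_entropy([50, 50]))
```

A vestibular-induced change in this quantity for V5/MT, but not EVC, is the kind of specificity signature the study reports.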
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on visual features extracted from 2-D images, and therefore often suffer from unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.
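As a rough illustration of the first fusion level, here is a sketch of a depth-based filter over detected rectangles. The metric-extent heuristic, the focal-length normalization, and the thresholds are illustrative assumptions, not values from the paper:

```python
import numpy as np

def depth_filter(detections, depth_map, focal=1000.0,
                 min_extent=0.3, max_extent=2.5):
    """Discard detection rectangles (x, y, w, h) whose approximate metric
    height (pixel height scaled by median depth over an assumed focal
    length) is implausible for a person.  All thresholds are hypothetical."""
    kept = []
    for (x, y, w, h) in detections:
        median_depth = np.median(depth_map[y:y + h, x:x + w])  # meters
        extent = h * median_depth / focal                      # approx. meters
        if min_extent <= extent <= max_extent:
            kept.append((x, y, w, h))
    return kept
```

With depth available, a 50-pixel-tall "person" detected 2 m from the camera can be rejected as too small to be real.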
TankSIM: A Cryogenic Tank Performance Prediction Program
NASA Technical Reports Server (NTRS)
Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Moder, J. P.; Schnell, A. R.; Sutherlin, S. G.
2015-01-01
TankSIM was developed for predicting the behavior of cryogenic liquids inside propellant tanks under various environmental and operating conditions. It provides a multi-node analysis of pressurization, ullage venting, and pressure control by thermodynamic venting systems (TVS) using an axial jet or spray bar TVS, and it allows the user to combine several different phases to predict liquid behavior for the entire flight mission timeline or any part of it. TankSIM is a NASA in-house code based on Fortran 90/95 and the Intel Visual Fortran compiler, but it can be used on other platforms (Unix/Linux, Compaq Visual Fortran, etc.). The latest release, Version 7 (December 2014), includes a detailed User's Manual and uses several REFPROP subroutines for calculating fluid properties.
Matthews, Andrew G
2004-08-01
It is conservatively estimated that some form of lens opacity is present in 5% to 7% of horses with otherwise clinically normal eyes. These opacities can range from small epicapsular remnants of the fetal vasculature to dense and extensive cataract. A cataract is defined technically as any opacity or alteration in the optical homogeneity of the lens involving one or more of the following: anterior epithelium, capsule, cortex, or nucleus. In the horse, cataracts rarely involve the entire lens structure (i.e., complete cataracts) and are more usually localized to one anatomic landmark or sector of the lens. Complete cataracts are invariably associated with overt and significant visual disability. Focal or incomplete cataracts alone, however, seldom cause any apparent visual dysfunction in affected horses.
Visual Processing of Object Velocity and Acceleration
1994-02-04
[Only citation fragments of this record survive extraction:] "A failure of motion deblurring in the human visual system," Investigative Ophthalmology and Visual Science (Suppl.), 34, 1230; Watamaniuk, S.N.J. and…McKee, S.P., "Why is a trajectory more detectable in noise than correlated signal dots?" Investigative Ophthalmology and Visual Science (Suppl.), 34, 1364.
The Visual System of Zebrafish and its Use to Model Human Ocular Diseases
Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF
2011-01-01
Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048
Complete scanpaths analysis toolbox.
Augustyniak, Piotr; Mikrut, Zbigniew
2006-01-01
This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on the physiology of human perception, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infra-red reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower layer communicates with the eyetracker output file; the middle layer detects scanpath events on a physiological basis; and the upper layer consists of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
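The middle layer's scanpath event detection is commonly done with a dispersion-threshold (I-DT) algorithm that groups gaze samples into fixations and treats the jumps between them as saccades. The toolbox itself is Matlab-based; this Python sketch with hypothetical thresholds only illustrates the idea:

```python
import numpy as np

def detect_fixations(gaze, max_dispersion=1.0, min_samples=5):
    """Dispersion-threshold (I-DT) fixation detection on (x, y) gaze samples.
    Thresholds are hypothetical; real toolboxes also handle blinks and noise."""
    def disp(w):
        return np.ptp(w[:, 0]) + np.ptp(w[:, 1])

    fixations, start, n = [], 0, len(gaze)
    while start < n - min_samples:
        if disp(np.asarray(gaze[start:start + min_samples])) <= max_dispersion:
            end = start + min_samples
            # grow the window while the samples stay tightly clustered
            while end < n and disp(np.asarray(gaze[start:end + 1])) <= max_dispersion:
                end += 1
            centroid = np.asarray(gaze[start:end]).mean(axis=0)
            fixations.append((start, end, tuple(centroid)))
            start = end
        else:
            start += 1
    return fixations
```

Two stationary gaze clusters separated by a jump yield two fixations with a saccade between them.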
Neural Mechanisms of Information Storage in Visual Short-Term Memory
Serences, John T.
2016-01-01
The capacity to briefly memorize fleeting sensory information supports visual search and behavioral interactions with relevant stimuli in the environment. Traditionally, studies investigating the neural basis of visual short term memory (STM) have focused on the role of prefrontal cortex (PFC) in exerting executive control over what information is stored and how it is adaptively used to guide behavior. However, the neural substrates that support the actual storage of content-specific information in STM are more controversial, with some attributing this function to PFC and others to the specialized areas of early visual cortex that initially encode incoming sensory stimuli. In contrast to these traditional views, I will review evidence suggesting that content-specific information can be flexibly maintained in areas across the cortical hierarchy ranging from early visual cortex to PFC. While the factors that determine exactly where content-specific information is represented are not yet entirely clear, recognizing the importance of task-demands and better understanding the operation of non-spiking neural codes may help to constrain new theories about how memories are maintained at different resolutions, across different timescales, and in the presence of distracting information. PMID:27668990
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU, CUDA-enabled fluid dynamics systems.
Subtle changes in the landmark panorama disrupt visual navigation in a nocturnal bull ant
2017-01-01
The ability of ants to navigate when the visual landmark information is altered has often been tested by creating large and artificial discrepancies in their visual environment. Here, we had an opportunity to slightly modify the natural visual environment around the nest of the nocturnal bull ant Myrmecia pyriformis. We achieved this by felling three dead trees, two located along the typical route followed by the foragers of that particular nest and one in a direction perpendicular to their foraging direction. An image difference analysis showed that the change in the overall panorama following the removal of these trees was relatively small. We filmed the behaviour of ants close to the nest and tracked their entire paths, both before and after the trees were removed. We found that immediately after the trees were removed, ants walked slower and were less directed. Their foraging success decreased and they looked around more, including turning back to look towards the nest. We document how their behaviour changed over subsequent nights and discuss how the ants may detect and respond to a modified visual environment in the evening twilight period. This article is part of the themed issue ‘Vision in dim light’. PMID:28193813
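The image difference analysis mentioned above can, in its simplest form, be a root-mean-square pixel difference between aligned before/after panoramas; the study's actual pipeline may differ, so this is only a sketch of the idea:

```python
import numpy as np

def panorama_difference(before, after):
    """Root-mean-square pixel difference between two aligned panoramic
    images: a simple global measure of how much the scene changed."""
    a = np.asarray(before, dtype=float)
    b = np.asarray(after, dtype=float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# identical panoramas differ by 0; a unit change everywhere gives 1
print(panorama_difference(np.zeros((4, 4)), np.ones((4, 4))))
```

A small value for the tree-removal comparison would support the claim that the overall panorama changed little.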
Differential effects of delay upon visually and haptically guided grasping and perceptual judgments.
Pettypiece, Charles E; Culham, Jody C; Goodale, Melvyn A
2009-05-01
Experiments with visual illusions have revealed a dissociation between the systems that mediate object perception and those responsible for object-directed action. More recently, an experiment on a haptic version of the visual size-contrast illusion has provided evidence for the notion that the haptic modality shows a similar dissociation when grasping and estimating the size of objects in real-time. Here we present evidence suggesting that the similarities between the two modalities begin to break down once a delay is introduced between when people feel the target object and when they perform the grasp or estimation. In particular, when grasping after a delay in a haptic paradigm, people scale their grasps differently when the target is presented with a flanking object of a different size (although the difference does not reflect a size-contrast effect). When estimating after a delay, however, it appears that people ignore the size of the flanking objects entirely. This does not fit well with the results commonly found in visual experiments. Thus, introducing a delay reveals important differences in the way in which haptic and visual memories are stored and accessed.
Human Factors Evaluation of Advanced Electric Power Grid Visualization Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Dauenhauer, Peter M.; Wierks, Tamara G.
This report describes an initial human factors evaluation of four visualization tools (Graphical Contingency Analysis, Force Directed Graphs, Phasor State Estimator, and Mode Meter/Mode Shapes) developed by PNNL, and proposes test plans that may be implemented to evaluate their utility in scenario-based experiments.
The biodigital human: a web-based 3D platform for medical visualization and education.
Qualter, John; Sculli, Frank; Oliker, Aaron; Napier, Zachary; Lee, Sabrina; Garcia, Julio; Frenkel, Sally; Harnik, Victoria; Triola, Marc
2012-01-01
NYU School of Medicine's Division of Educational Informatics in collaboration with BioDigital Systems LLC (New York, NY) has created a virtual human body dataset that is being used for visualization, education and training and is accessible over modern web browsers.
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. In fact, color is one of the important factors affecting human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. More specifically, in the training phase, a feature detector is created based on NMF with manifold regularization, taking color information into account; this not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by taking human visual attention into account, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained from a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method achieves much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
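A minimal sketch of the parts-based factorization step, using plain multiplicative-update NMF on vectorized patches. The paper's manifold-regularization term and its color-specific details are omitted here, so this only illustrates the parts-based decomposition itself:

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Minimal multiplicative-update NMF (Lee & Seung): V ~ W @ H with
    non-negative factors.  The paper adds a manifold-regularization term
    on the encodings, which is omitted in this sketch."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update basis (parts)
    return W, H

# toy "color patch" matrix: 100 patches x 48 values (4x4 pixels x 3 channels)
rng = np.random.default_rng(1)
V = rng.random((100, 48))
W, H = nmf(V, k=8)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The rows of `H` play the role of learned parts; per-patch encodings in `W` would then feed the feature-similarity computation.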
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction-of-attention cues, such as body, gaze, head, and eye orientation, in several circumstances. Eye contact in particular seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners under several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on the details of the owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in human environments and face similar situations on a daily basis throughout their lives.
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG 7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
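One simple way to quantify phase coherence, loosely in the spirit of the wavelet-domain measure described above, is the mean resultant length of per-frequency Fourier phases across image patches. This patch-FFT formulation is an illustrative assumption, not the paper's method:

```python
import numpy as np

def phase_coherence(image, patch=8):
    """Mean resultant length of per-frequency Fourier phases across image
    patches: near 1 when phases align across the image (perceptible
    structure), lower when phase is incoherent."""
    h, w = image.shape
    phasors = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            F = np.fft.fft2(image[y:y + patch, x:x + patch])
            phasors.append(np.exp(1j * np.angle(F)))
    return float(np.abs(np.mean(phasors, axis=0)).mean())

rng = np.random.default_rng(0)
tile = rng.random((8, 8))
structured = np.tile(tile, (4, 4))   # identical patches: fully aligned phases
print(phase_coherence(structured))   # close to 1.0
```

Regions with high coherence would be candidates for finer quantization (perceptible structure) and for classification features; incoherent regions can be compressed more aggressively.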
JSOU and NDIA SO/LIC Division Essays (2007)
2007-04-01
[Only fragments of these essays survive extraction:] Create several content-rich Darknet environments, private virtual networks where users connect only to people they trust, that offer e-mail, file…chat rooms, and Darknets. Moon, "Cyber-Herding": diluting the extremist messages, concentrating Web sites, and developing Darknets; a visual illustration of the entire process follows Phase 7 (cyber-herding nodes and relationship network: gatherer, construction, demolition).
Visual Purple, the Next Generation Crisis Management Decision Training Tool
2001-09-01
[Only fragments of this record survive extraction:] The simulations draw on the talents of professional Hollywood screenwriters during the scripting and writing process, along with cinematic techniques learned…cultural, and language experts for research development. GTA provides country-specific support in script writing and cinematic resources as…The result is an entirely new dimension of realism that traditional exercises often fail to capture. The scenario requires the participant to make the…
Occipitoparietal alpha-band responses to the graded allocation of top-down spatial attention.
Dombrowe, Isabel; Hilgetag, Claus C
2014-09-15
The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over the occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants either to shift their attention completely into one hemifield, to balance their attention equally across the entire visual field, or to allocate more attention to one half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. We found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over the occipital and parietal cortices. Copyright © 2014 the American Physiological Society.
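The contralateral-versus-ipsilateral comparison above is often summarized with a lateralization index over posterior alpha-band amplitudes. The formula is a standard normalized difference, but the sign convention here is a hypothetical choice:

```python
def alpha_lateralization(contra_amp, ipsi_amp):
    """Lateralization index over posterior alpha-band amplitudes:
    (ipsi - contra) / (ipsi + contra).  Positive values reflect the
    stronger contralateral alpha decrease seen for complete attention
    shifts; the sign convention is a hypothetical choice."""
    return (ipsi_amp - contra_amp) / (ipsi_amp + contra_amp)

# complete shift: contralateral alpha drops more -> positive index
print(alpha_lateralization(contra_amp=2.0, ipsi_amp=3.0))
```

Under this convention, a complete shift yields a clearly positive index, equal balancing yields zero, and the study's finding is that graded allocation fails to produce intermediate values.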
Manage "Human Capital" Strategically
ERIC Educational Resources Information Center
Odden, Allan
2011-01-01
To strategically manage human capital in education means restructuring the entire human resource system so that schools not only recruit and retain smart and capable individuals, but also manage them in ways that support the strategic directions of the organization. These management practices must be aligned with a district's education improvement…
Toward statistical modeling of saccadic eye-movement and visual saliency.
Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming
2014-11-01
In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Beyond simulating human saccadic behavior, we demonstrate the model's effectiveness and robustness against state-of-the-art methods through extensive experiments on synthetic patterns and human eye fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
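The fixation-selection idea in this abstract (find a super-Gaussian projection by projection pursuit, then fixate the location with maximum response) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the random-search projection pursuit, kurtosis as the super-Gaussianity index, the toy patch data, and all names are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def kurtosis(x):
    """Excess kurtosis: a common index of super-Gaussianity."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    return np.mean(x**4) - 3.0

def find_sgc(patches, n_candidates=200):
    """Crude projection pursuit: among random unit projections, keep the
    one whose responses over all patches are most super-Gaussian."""
    best_w, best_k = None, -np.inf
    for _ in range(n_candidates):
        w = rng.standard_normal(patches.shape[1])
        w /= np.linalg.norm(w)
        k = kurtosis(patches @ w)
        if k > best_k:
            best_w, best_k = w, k
    return best_w

def next_fixation(patches, positions):
    """Fixate the patch location with the maximum absolute SGC response."""
    w = find_sgc(patches)
    responses = np.abs(patches @ w)
    return positions[int(np.argmax(responses))]

# Toy data: mostly Gaussian image patches plus one high-energy outlier,
# standing in for a location with 'abundant structural information'.
patches = rng.standard_normal((100, 16))
patches[42] *= 8.0
positions = [(i % 10, i // 10) for i in range(100)]
print(next_fixation(patches, positions))
```

A full model would re-estimate components sequentially after inhibiting each fixated region; this sketch shows only a single fixation step.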
Bioelectronic nose and its application to smell visualization.
Ko, Hwi Jin; Park, Tai Hyun
2016-01-01
There have been many attempts to visualize smell with various techniques in order to express odors objectively, because information obtained from the human sense of smell is highly subjective. So far, well-trained experts such as perfumers, complex and large-scale equipment such as GC-MS, and electronic noses have played major roles in objectively detecting and recognizing odors. Recently, an optoelectronic nose was developed for this purpose, but limitations regarding sensitivity and the number of smells that can be visualized persist. Since the elucidation of the olfactory mechanism, extensive research has been conducted toward sensing devices that mimic the human olfactory system. Engineered olfactory cells were constructed to mimic the human olfactory system, and their use for smell visualization has been attempted with methods such as calcium imaging, CRE reporter assays, BRET, and membrane potential assays; however, it is difficult to control the condition of the cells consistently, and low odorant concentrations cannot be detected. Recently, the bioelectronic nose was developed and has improved substantially alongside advances in nanobiotechnology. The bioelectronic nose consists of two parts: a primary transducer and a secondary transducer. Biological materials serving as the primary transducer improve the selectivity of the sensor, and nanomaterials serving as the secondary transducer increase its sensitivity. In particular, bioelectronic noses that combine nanomaterials with human olfactory receptors, or with nanovesicles derived from engineered olfactory cells, can potentially detect almost all smells recognized by humans, because an engineered olfactory cell can in principle express any human olfactory receptor and thereby mimic the human olfactory system.
Therefore, the bioelectronic nose will be a potent tool for smell visualization, provided two technologies are completed. First, a multi-channel array-sensing system must be developed to integrate all of the olfactory receptors into a single chip, mimicking the performance of the human nose. Second, a processing technique for the multi-channel signals must be established to convert those signals into visual images. With this latest sensing technology, a practical smell-visualization technology is expected in the near future.
Visualization of the Eastern Renewable Generation Integration Study: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gruchalla, Kenny; Novacheck, Joshua; Bloom, Aaron
The Eastern Renewable Generation Integration Study (ERGIS) explores the operational impacts of the widespread adoption of wind and solar photovoltaic (PV) resources in the U.S. Eastern Interconnection and Quebec Interconnection (collectively, EI). In order to understand some of the economic and reliability challenges of managing hundreds of gigawatts of wind and PV generation, we developed state-of-the-art tools, data, and models for simulating power system operations using hourly unit commitment and 5-minute economic dispatch over an entire year. Using NREL's high-performance computing capabilities and new methodologies to model operations, we found that the EI, as simulated with evolutionary change in 2026, could balance the variability and uncertainty of wind and PV at a 5-minute level under a variety of conditions. A large-scale display and a combination of multiple coordinated views and small multiples were used to visually analyze the four large, highly multivariate scenarios with high spatial and temporal resolutions.
Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk
2016-01-01
To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to visualize these properties, but with the increasing number of structures available in databases, and of structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper, a web server and its underlying method for generating graphical sequence representations of molecular structures are presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or for structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, the underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question.
Color encoded sequences generated by SequenceCEROSENE can aid to quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), thus supporting the researcher in the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users are allowed to process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
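The core idea, coloring each residue so that color similarity tracks spatial proximity, can be illustrated with a much simpler stand-in: min-max normalizing each residue's 3D centroid into RGB. The published method uses spatial neighborhood embedding, so this direct coordinate-to-channel mapping is only a simplified sketch, and the function name is an assumption.

```python
import numpy as np

def color_encode(coords):
    """Map each residue's 3D position to an RGB triple by min-max
    normalizing x, y, z into [0, 255]. Spatially close residues thus
    receive similar colors. NOTE: this is an illustrative stand-in for
    SequenceCEROSENE's neighborhood embedding, not the actual method."""
    c = np.asarray(coords, dtype=float)
    lo, hi = c.min(axis=0), c.max(axis=0)
    # Guard against zero extent along any axis to avoid division by zero.
    rgb = (c - lo) / np.where(hi > lo, hi - lo, 1.0) * 255.0
    return rgb.round().astype(int)

# Three residues along a line: colors vary only along the red channel.
print(color_encode([[0, 0, 0], [5, 0, 0], [10, 0, 0]]))
```

Rendering these triples behind the one-letter sequence then yields a 1D view in which chain-chain contacts show up as shared hues.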
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Brachtel, Elena F.; Johnson, Nicole B.; Huck, Amelia E.; Rice-Stitt, Travis L.; Vangel, Mark G.; Smith, Barbara L.; Tearney, Guillermo J.; Kang, DongKyun
2016-03-01
An unacceptably large percentage (20-40%) of breast cancer lumpectomy patients must undergo multiple surgeries when positive margins are found on post-operative histologic assessment. If margin status can be determined during surgery, the surgeon can resect additional tissue to achieve tumor-free margins, reducing the need for further surgeries. Spectrally encoded confocal microscopy (SECM) is a high-speed reflectance confocal microscopy technology with the potential to image the entire surgical margin within a short procedural time. Previously, SECM was shown to rapidly image a large area (10 mm by 10 mm) of human esophageal tissue within a short procedural time (15 seconds). When used in lumpectomy, SECM will be able to image the entire margin surface of ~30 cm2 in around 7.5 minutes. SECM images will then be used to determine margin status intra-operatively. In this paper, we present results from a study testing the accuracy of SECM for diagnosing malignant breast tissues. We imaged freshly-excised breast specimens (N=46) with SECM. SECM images clearly visualized histomorphologic features associated with normal/benign and malignant breast tissues in a manner similar to histologic images. Diagnostic accuracy was tested by comparing SECM diagnoses made by three junior pathologists with corresponding histologic diagnoses made by a senior pathologist. SECM sensitivity and specificity were high, 0.91 and 0.93, respectively. Intra-observer and inter-observer agreement were also high, 0.87 and 0.84, respectively. Results from this study showed that SECM has the potential to accurately determine margin status during breast cancer lumpectomy.
Perceptual Completion in Newborn Human Infants
ERIC Educational Resources Information Center
Valenza, Eloisa; Leo, Irene; Gava, Lucia; Simion, Francesca
2006-01-01
Despite decades of studies of human infants, a still open question concerns the role of visual experience in the development of the ability to perceive complete shapes over partial occlusion. Previous studies show that newborns fail to manifest this ability, either because they lack the visual experience required for perceptual completion or…
Animate and Inanimate Objects in Human Visual Cortex: Evidence for Task-Independent Category Effects
ERIC Educational Resources Information Center
Wiggett, Alison J.; Pritchard, Iwan C.; Downing, Paul E.
2009-01-01
Evidence from neuropsychology suggests that the distinction between animate and inanimate kinds is fundamental to human cognition. Previous neuroimaging studies have reported that viewing animate objects activates ventrolateral visual brain regions, whereas inanimate objects activate ventromedial regions. However, these studies have typically…
DOT National Transportation Integrated Search
2004-03-20
A means of quantifying the cluttering effects of symbols is needed to evaluate the impact of displaying an increasing volume of information on aviation displays such as head-up displays. Human visual perception has been successfully modeled by algori...
Bio-inspired display of polarization information using selected visual cues
NASA Astrophysics Data System (ADS)
Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader
2003-12-01
For imaging systems, the polarization of electromagnetic waves carries much potentially useful information about such features of the world as surface shape, material contents, and the local curvature of objects, as well as about the relative locations of the source, object and imaging system. The imaging system of the human eye, however, is "polarization-blind", and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed to represent polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring are the use of coherently moving dots superimposed on an image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of the image with differing polarization, modulating the luminance and/or color contrast of scenes in terms of certain aspects of polarization values, and fusing polarization images into intensity-only images. In this talk, we will present samples of our findings in this area.
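One of the cue families above, modulating luminance by polarization content, can be sketched from the standard Stokes-parameter definitions. The degree of linear polarization is DoLP = sqrt(Q² + U²)/I and the angle of polarization is AoP = ½·atan2(U, Q); those formulas are standard, but the particular flicker mapping below is an illustrative assumption, not the authors' mapping.

```python
import math

def dolp_aop(I, Q, U):
    """Degree and angle of linear polarization from the first three
    Stokes parameters (standard definitions)."""
    dolp = math.sqrt(Q * Q + U * U) / I
    aop = 0.5 * math.atan2(U, Q)
    return dolp, aop

def modulate_luminance(intensity, dolp, t, f=2.0):
    """Hypothetical cue mapping: flicker a pixel's luminance at f Hz with
    modulation depth set by DoLP, normalized so mean brightness is kept.
    Unpolarized pixels (dolp = 0) are left untouched."""
    return intensity * (1.0 + dolp * math.sin(2 * math.pi * f * t)) / (1.0 + dolp)

d, a = dolp_aop(I=1.0, Q=0.6, U=0.0)
print(d, a)  # 60% linearly polarized, horizontal axis
```

AoP could analogously drive a hue or texture-orientation channel, keeping intensity information intact as the abstract requires.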
Model of rhythmic ball bouncing using a visually controlled neural oscillator.
Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro
2017-10-01
The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tendick, Frank; Stark, Lawrence
1989-01-01
A teleoperation simulator was constructed with a vector display system, joysticks, and a simulated cylindrical manipulator, in order to quantitatively evaluate various display conditions. The first of two experiments investigated the effects of perspective parameter variations on human operators' pick-and-place performance, using a monoscopic perspective display. The second experiment compared visual enhancements of the monoscopic perspective display, achieved by adding a grid and reference lines, with visual enhancements of a stereoscopic display. Results indicate that stereoscopy generally permits superior pick-and-place performance, but that monoscopy nevertheless allows equivalent performance when defined with appropriate perspective parameter values and adequate visual enhancements.
The visual white matter: The application of diffusion MRI and fiber tractography to vision science
Rokem, Ariel; Takemura, Hiromasa; Bock, Andrew S.; Scherf, K. Suzanne; Behrmann, Marlene; Wandell, Brian A.; Fine, Ione; Bridge, Holly; Pestilli, Franco
2017-01-01
Visual neuroscience has traditionally focused much of its attention on understanding the response properties of single neurons or neuronal ensembles. The visual white matter and the long-range neuronal connections it supports are fundamental in establishing such neuronal response properties and visual function. This review article provides an introduction to measurements and methods to study the human visual white matter using diffusion MRI. These methods allow us to measure the microstructural and macrostructural properties of the white matter in living human individuals; they allow us to trace long-range connections between neurons in different parts of the visual system and to measure the biophysical properties of these connections. We also review a range of findings from recent studies on connections between different visual field maps, the effects of visual impairment on the white matter, and the properties underlying networks that process visual information supporting visual face recognition. Finally, we discuss a few promising directions for future studies. These include new methods for analysis of MRI data, open datasets that are becoming available to study brain connectivity and white matter properties, and open source software for the analysis of these data. PMID:28196374
ERIC Educational Resources Information Center
Odden, Allan R.
2011-01-01
"Strategic Management of Human Capital in Education" offers a comprehensive and strategic approach to address what has become labeled as "talent and human capital." Grounded in extensive research and examples of leading edge districts, this book shows how the entire human resource system in schools--from recruitment, to selection/placement,…
Subramani, Suresh; Kalpana, Raja; Monickaraj, Pankaj Moses; Natarajan, Jeyakumar
2015-04-01
Knowledge of protein-protein interactions (PPI) and their related pathways is equally important for understanding the biological functions of the living cell. Such information on human proteins is highly desirable for understanding the mechanisms of diseases such as cancer, diabetes, and Alzheimer's disease. Because much of that information is buried in the biomedical literature, an automated text mining system for visualizing human PPI and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from the biomedical literature and visualizes their associated interactions, networks and pathways using two curated databases, HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from literature as well as curated databases. Further, new interactions mined only from the literature and not previously reported in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. Copyright © 2015 Elsevier Inc. All rights reserved.
Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.
Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh
2012-07-19
Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.
Visualizing Human Migration Through Space and Time
NASA Astrophysics Data System (ADS)
Zambotti, G.; Guan, W.; Gest, J.
2015-07-01
Human migration has been an important activity in human societies since antiquity. Since 1890, approximately three percent of the world's population has lived outside of their country of origin. As globalization intensifies in the modern era, human migration persists even as governments seek to more stringently regulate flows. Understanding this phenomenon, its causes, processes and impacts often starts from measuring and visualizing its spatiotemporal patterns. This study builds a generic online platform for users to interactively visualize human migration through space and time. This entails quickly ingesting human migration data in plain text or tabular format; matching the records with pre-established geographic features such as administrative polygons; symbolizing the migration flow by circular arcs of varying color and weight based on the flow attributes; connecting the centroids of the origin and destination polygons; and allowing the user to select either an origin or a destination feature to display all flows in or out of that feature through time. The method was first developed using ArcGIS Server for world-wide cross-country migration, and later applied to visualizing domestic migration patterns within China between provinces, and between states in the United States, all through multiple years. The technical challenges of this study include simplifying the shapes of features to enhance user interaction, rendering performance and application scalability; enabling the temporal renderers to provide time-based rendering of features and the flow among them; and developing a responsive web design (RWD) application to provide an optimal viewing experience. The platform is available online for the public to use, and the methodology is easily adoptable to visualizing any flow, not only human migration but also the flow of goods, capital, disease, ideology, etc., between multiple origins and destinations across space and time.
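The core rendering step described above (select all flows into or out of a chosen feature, connect origin and destination centroids, and weight the arc by flow volume) can be sketched in a few lines. The centroid coordinates and flow volumes below are rough, illustrative values, and all names are assumptions; the actual platform works against full administrative polygons in ArcGIS Server.

```python
# Illustrative centroids (lon, lat) and toy flow records (origin, dest, volume).
centroids = {"CN": (104.2, 35.9), "US": (-98.6, 39.8), "MX": (-102.5, 23.6)}
flows = [("MX", "US", 11_700_000), ("CN", "US", 2_100_000)]

def arcs_for(feature, flows, centroids):
    """Select every flow touching the chosen feature and return drawable
    arc specs: endpoint coordinates plus a stroke weight scaled by volume
    into the range [1, 5], so the largest flow draws thickest."""
    sel = [f for f in flows if feature in (f[0], f[1])]
    max_v = max(v for _, _, v in sel)
    return [
        {"from": centroids[o], "to": centroids[d], "weight": 1 + 4 * v / max_v}
        for o, d, v in sel
    ]

for arc in arcs_for("US", flows, centroids):
    print(arc)
```

A web front end would render each spec as a curved great-circle arc and re-run the selection when the user picks a different feature or year.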
Xu, Jingjiang; Song, Shaozhen; Wei, Wei; Wang, Ruikang K
2017-01-01
Wide-field vascular visualization in bulk tissue of uneven surface is challenging due to the relatively short ranging distance and significant sensitivity fall-off of most current optical coherence tomography angiography (OCTA) systems. We report a long-ranging and ultra-wide-field OCTA (UW-OCTA) system based on an akinetic swept laser. The narrow instantaneous linewidth of the swept source, with its high phase stability, combined with high-speed detection enables us to achieve long ranging (up to 46 mm) and almost negligible system sensitivity fall-off. To illustrate these advantages, we compare the basic system performance of conventional spectral domain OCTA and UW-OCTA systems and their functional imaging of microvascular networks in living tissues. In addition, we show that UW-OCTA is capable of depth-ranging of cerebral blood flow within the entire brain in mice, and provides an unprecedented blood perfusion map of the human finger in vivo. We believe that the UW-OCTA system holds promise for augmenting existing clinical practice and exploring new biomedical applications for OCT imaging.
Quantifying torso deformity in scoliosis
NASA Astrophysics Data System (ADS)
Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James
2006-03-01
Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.
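The "orthogonal transform" described above, mapping a cylindrical coordinate system to a Cartesian one, amounts to unrolling the torso surface: each surface point (x, y, z) becomes a map cell indexed by (angle, height) holding the radial distance. The sketch below, with its coarse angular binning and toy data, is an illustrative assumption about that construction, not the authors' exact pipeline.

```python
import math

def orthogonal_map(points, n_theta=36):
    """Unroll a closed 3D surface into a 2D 'orthogonal map': each point
    (x, y, z) is binned by its angle theta into one of n_theta columns of
    the row for height z, storing the radial distance r as the map value."""
    rows = {}
    for x, y, z in points:
        r = math.hypot(x, y)
        theta = math.atan2(y, x)  # in [-pi, pi]
        col = int((theta + math.pi) / (2 * math.pi) * n_theta) % n_theta
        rows.setdefault(z, [0.0] * n_theta)[col] = r
    return rows

# A perfectly circular cross-section maps to a constant row; asymmetry
# (e.g. scoliotic deformity) would appear as deviation from constancy.
pts = [(math.cos(t), math.sin(t), 0.0)
       for t in (math.radians(a) for a in (45, 135, 225, 315))]
row = orthogonal_map(pts, n_theta=4)[0.0]
print(row)
```

Deformation indices can then be computed as statistics of each row's departure from a fitted symmetric profile.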
Visualization of vacuum cleaner-induced flow in a carpet by using magnetic resonance velocimetry
NASA Astrophysics Data System (ADS)
Lee, Jeesoo; Song, Simon
2016-11-01
Understanding the characteristics of in-carpet flow induced by a vacuum cleaner nozzle is important for improving the design and performance of the nozzle. However, optical visualization techniques like PIV are limited in uncovering the flow details because a carpet is an opaque porous medium. We visualized the mean flow field in a cut-pile carpet using magnetic resonance velocimetry. The flow was generated by a static vacuum cleaner nozzle, and the working fluid was a copper sulfate aqueous solution. Three-dimensional, three-component velocity vectors were obtained in a measurement domain of 336 x 128 x 14 mm3, covering the entire nozzle span and a 7-mm thick carpet below the nozzle. The voxel size was 1 x 1 x 0.5 (depthwise) mm3. Based on the visualization data, the permeability, the Forchheimer coefficient and the pressure distribution were calculated for the carpet. This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (No. 2016R1A2B3009541).
TMS of the occipital cortex induces tactile sensations in the fingers of blind Braille readers.
Ptito, M; Fumal, A; de Noordhout, A Martens; Schoenen, J; Gjedde, A; Kupers, R
2008-01-01
Various non-visual inputs produce cross-modal responses in the visual cortex of early blind subjects. In order to determine the qualitative experience associated with these occipital activations, we systematically stimulated the entire occipital cortex using single pulse transcranial magnetic stimulation (TMS) in early blind subjects and in blindfolded seeing controls. Whereas blindfolded seeing controls reported only phosphenes following occipital cortex stimulation, some of the blind subjects reported tactile sensations in the fingers that were somatotopically organized onto the visual cortex. The number of cortical sites inducing tactile sensations appeared to be related to the number of hours of Braille reading per day, Braille reading speed and dexterity. These data, taken in conjunction with previous anatomical, behavioural and functional imaging results, suggest the presence of a polysynaptic cortical pathway between the somatosensory cortex and the visual cortex in early blind subjects. These results also add new evidence that the activity of the occipital lobe in the blind takes its qualitative expression from the character of its new input source, therefore supporting the cortical deference hypothesis.
Children of War and Peace: A Human Development Perspective
ERIC Educational Resources Information Center
Sagi-Schwartz, Abraham
2012-01-01
Political conflicts and intractable wars can be conceived as disasters of human activities and they affect the entire life of children and their families. An ecological-transactional perspective of human development is adopted in order to identify multilevel developmental and contextual trajectories that might facilitate or impede the willingness…
A computational model of spatial visualization capacity.
Lyon, Don R; Gunzelmann, Glenn; Gluck, Kevin A
2008-09-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to perform it. In this model, developed within the Adaptive Control of Thought-Rational (ACT-R) architecture, visualization capacity is limited by three mechanisms. Two of these (associative interference and decay) are longstanding characteristics of ACT-R's declarative memory. A third (spatial interference) is a new mechanism motivated by spatial proximity effects in our data. We tested the model in two experiments, one with parameter-value fitting, and a replication without further fitting. Correspondence between model and data was close in both experiments, suggesting that the model may be useful for understanding why visualizing new, complex spatial material is so difficult.
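One of the two longstanding ACT-R mechanisms named above, decay, follows the architecture's base-level learning equation, B_i = ln(sum_j t_j^(-d)), where the t_j are the times since each past use of a memory chunk and d is the decay rate (conventionally 0.5). A minimal sketch of that standard equation, not the paper's full model:

```python
import math

def base_level_activation(times_since_use, d=0.5):
    """ACT-R base-level learning: activation grows with use and
    decays with time since each use (decay rate d, default 0.5)."""
    return math.log(sum(t ** -d for t in times_since_use))
```

Activation falls as the single retention interval grows, and rises with repeated presentations, which is how decay limits visualization of long verbal path descriptions.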
Visual analytics for aviation safety: A collaborative approach to sensemaking
NASA Astrophysics Data System (ADS)
Wade, Andrew
Visual analytics, the "science of analytical reasoning facilitated by interactive visual interfaces", is more than just visualization. Understanding the human reasoning process is essential for designing effective visualization tools and providing correct analyses. This thesis describes the evolution, application and evaluation of a new method for studying analytical reasoning that we have labeled paired analysis. Paired analysis combines subject matter experts (SMEs) and tool experts (TEs) in an analytic dyad, here used to investigate aircraft maintenance and safety data. The method was developed and evaluated using interviews, pilot studies and analytic sessions during an internship at the Boeing Company. By enabling a collaborative approach to sensemaking that can be captured by researchers, paired analysis yielded rich data on human analytical reasoning that can be used to support analytic tool development and analyst training. Keywords: visual analytics, paired analysis, sensemaking, Boeing, collaborative analysis.
The human urothelium consists of multiple clonal units, each maintained by a stem cell.
Gaisa, Nadine T; Graham, Trevor A; McDonald, Stuart A C; Cañadillas-Lopez, Sagrario; Poulsom, Richard; Heidenreich, Axel; Jakse, Gerhard; Tadrous, Paul J; Knuechel, Ruth; Wright, Nicholas A
2011-10-01
Little is known about the clonal architecture of human urothelium. It is likely that urothelial stem cells reside within the basal epithelial layer, yet lineage tracing from a single stem cell as a means to show the presence of a urothelial stem cell has never been performed. Here, we identify clonally related cell areas within human bladder mucosa in order to visualize epithelial fields maintained by a single founder/stem cell. Sixteen frozen cystectomy specimens were serially sectioned. Patches of cells deficient for the mitochondrially encoded enzyme cytochrome c oxidase (CCO) were identified using dual-colour enzyme histochemistry. To show that these patches represent clonal proliferations, small CCO-proficient and -deficient areas were individually laser-capture microdissected and the entire mitochondrial genome (mtDNA) in each area was PCR amplified and sequenced to identify mtDNA mutations. Immunohistochemistry was performed for the different cell layers of the urothelium and adjacent mesenchyme. CCO-deficient patches could be observed in normal urothelium of all cystectomy specimens. The two-dimensional length of these negative patches varied from 2-3 cells (about 30 µm) to 4.7 mm. Each cell area within a CCO-deficient patch contained an identical somatic mtDNA mutation, indicating that the patch was a clonal unit. Patches contained all the mature cell differentiation stages present in the urothelium, suggesting the presence of a stem cell. Our results demonstrate that the normal mucosa of human bladder contains stem cell-derived clonal units that actively replenish the urothelium during ageing. The size of the clonal unit attributable to each stem cell was broadly distributed, suggesting replacement of one stem cell clone by another. Copyright © 2011 Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Paloyelis, Yannis; Doyle, Orla M; Zelaya, Fernando O; Maltezos, Stefanos; Williams, Steven C; Fotopoulou, Aikaterini; Howard, Matthew A
2016-04-15
Animal and human studies highlight the role of oxytocin in social cognition and behavior and the potential of intranasal oxytocin (IN-OT) to treat social impairment in individuals with neuropsychiatric disorders such as autism. However, extensive efforts to evaluate the central actions and therapeutic efficacy of IN-OT may be marred by the absence of data regarding its temporal dynamics and sites of action in the living human brain. In a placebo-controlled study, we used arterial spin labeling to measure IN-OT-induced changes in resting regional cerebral blood flow (rCBF) in 32 healthy men. Volunteers were blinded regarding the nature of the compound they received. The rCBF data were acquired 15 min before and up to 78 min after treatment onset (40 IU of IN-OT or placebo). The data were analyzed using mass univariate and multivariate pattern recognition techniques. We obtained robust evidence delineating an oxytocinergic network comprising regions expected to express oxytocin receptors, based on histologic evidence, and including core regions of the brain circuitry underpinning social cognition and emotion processing. Pattern recognition on rCBF maps indicated that IN-OT-induced changes were sustained over the entire posttreatment observation interval (25-78 min) and consistent with a pharmacodynamic profile showing a peak response at 39-51 min. Our study provides the first visualization and quantification of IN-OT-induced changes in rCBF in the living human brain unaffected by cognitive, affective, or social manipulations. Our findings can inform theoretical and mechanistic models regarding IN-OT effects on typical and atypical social behavior and guide future experiments (e.g., regarding the timing of experimental manipulations). Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Liu, Haisong; Yang, Huan; Zhu, Dicong; Sui, Xin; Li, Juan; Liang, Zhen; Xu, Lei; Chen, Zeyu; Yao, Anzhi; Zhang, Long; Zhang, Xi; Yi, Xing; Liu, Meng; Xu, Shiqing; Zhang, Wenjian; Lin, Hua; Xie, Lan; Lou, Jinning; Zhang, Yong; Xi, Jianzhong; Deng, Hongkui
2014-10-01
The application of human pluripotent stem cell (hPSC)-derived cells in regenerative medicine has encountered a long-standing challenge: how can we efficiently obtain mature cell types from hPSCs? Attempts to address this problem are hindered by the complexity of controlling cell fate commitment and the lack of sufficient developmental knowledge for guiding hPSC differentiation. Here, we developed a systematic strategy to study hPSC differentiation by labeling sequential developmental genes to encompass the major developmental stages, using the directed differentiation of pancreatic β cells from hPSCs as a model. We therefore generated a large panel of pancreas-specific mono- and dual-reporter cell lines. With this unique platform, we visualized the kinetics of the entire differentiation process in real time for the first time by monitoring the expression dynamics of the reporter genes, identified desired cell populations at each differentiation stage and demonstrated the ability to isolate these cell populations for further characterization. We further revealed the expression profiles of isolated NGN3-eGFP(+) cells by RNA sequencing and identified sushi domain-containing 2 (SUSD2) as a novel surface protein that enriches for pancreatic endocrine progenitors and early endocrine cells both in human embryonic stem cell (hESC)-derived pancreatic cells and in the developing human pancreas. Moreover, using our dual-reporter hESC lines, we captured a series of cell fate transition events in real time, identified multiple cell subpopulations among heterogeneous progenitors and unveiled their distinct gene expression profiles. The exploration of this platform and our new findings will pave the way to obtain mature β cells in vitro.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.
Real-world systems change continuously, and in domains such as traffic monitoring and cyber security these changes occur within short time scales. This leads to a streaming data problem and produces unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. In this paper, our goal is to study how the state-of-the-art in streaming data visualization handles these challenges and to reflect on the gaps and opportunities. To this end, we make three contributions: i) a problem characterization that identifies domain-specific goals and challenges for handling streaming data, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space, and iii) reflections on the perceptually motivated design challenges and potential research directions for addressing them.
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
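The "simple intersection operation" that combines the three property maps (Feature-Prior, Position-Prior, and Feature-Distribution) can be illustrated as element-wise multiplication of normalized maps. This is a generic sketch of that style of combination; the function and map names are illustrative, not the authors' code:

```python
import numpy as np

def combine_saliency(feature_prior, position_prior, feature_dist):
    """Combine three per-pixel property maps into one saliency map
    by an intersection (element-wise product) of normalized maps."""
    def norm(m):
        m = m - m.min()
        rng = m.max()
        return m / rng if rng > 0 else m
    return norm(norm(feature_prior) * norm(position_prior) * norm(feature_dist))
```

A pixel is salient only where all three properties agree, which mirrors the intersection semantics described in the abstract.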
Aversive learning shapes neuronal orientation tuning in human visual cortex.
McTeague, Lisa M; Gruss, L Forest; Keil, Andreas
2015-07-28
The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.
Visual Working Memory Capacity and Proactive Interference
Hartshorne, Joshua K.
2008-01-01
Background Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Methodology/Principal Findings Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. Conclusions/Significance This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals. PMID:18648493
Visual working memory capacity and proactive interference.
Hartshorne, Joshua K
2008-07-23
Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals.
The Application of Current User Interface Technology to Interactive Wargaming Systems.
1987-09-01
components is essential to the Macintosh interface. Apple states that "Consistent visual communication is very powerful in delivering complex messages...interface. A visual interface uses visual objects as the basis of communication. "A visual communication object is some combination of text and...graphics used for communication under a system of interpretation, or visual language." The benefit of visual communication is "When humans are faced