ERIC Educational Resources Information Center
Cunnington, Marisol; Kantrowitz, Andrea; Harnett, Susanne; Hill-Ries, Aline
2014-01-01
The "Framing Student Success: Connecting Rigorous Visual Arts, Math and Literacy Learning" experimental demonstration project was designed to develop and test an instructional program integrating high-quality, standards-based instruction in the visual arts, math, and literacy. Developed and implemented by arts-in-education organization…
Nissi, Mikko J; Toth, Ferenc; Zhang, Jinjin; Schmitter, Sebastian; Benson, Michael; Carlson, Cathy S; Ellermann, Jutta M
2014-06-01
High-resolution visualization of cartilage canals has been restricted to histological methods and contrast-enhanced imaging. In this study, the feasibility of non-contrast-enhanced susceptibility weighted imaging (SWI) for visualization of the cartilage canals was investigated ex vivo at 9.4 T, further explored at 7 and 3 T and demonstrated in vivo at 7 T, using a porcine animal model. SWI scans of specimens of distal femur and humerus from 1 to 8 week-old piglets were conducted at 9.4 T using 3D-GRE sequence and SWI post-processing. The stifle joints of a 2-week old piglet were scanned ex vivo at 7 and 3 T. Finally, the same sites of a 3-week-old piglet were scanned, in vivo, at 7 T under general anesthesia using the vendor-provided sequences. High-contrast visualization of the cartilage canals was obtained ex vivo, especially at higher field strengths; the results were confirmed histologically. In vivo feasibility was demonstrated at 7 T and comparison of ex vivo scans at 3 and 7 T indicated feasibility of using SWI at 3 T. High-resolution 3D visualization of cartilage canals was demonstrated using SWI. This demonstration of fully noninvasive visualization opens new avenues to explore skeletal maturation and the role of vascular supply for diseases such as osteochondrosis. Copyright © 2013 Wiley Periodicals, Inc.
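As a point of reference for how SWI contrast is typically produced, the sketch below illustrates the standard phase-mask weighting of magnitude and phase data; the filter choice and mask power are assumptions for illustration, not the parameters used in this study.

# Illustrative sketch of conventional SWI post-processing (not the study's exact
# pipeline): high-pass filter the phase, build a negative phase mask, and weight
# the magnitude image by that mask raised to a small power.
import numpy as np
from scipy.ndimage import uniform_filter

def swi_process(magnitude, phase, filter_size=32, mask_power=4):
    """Return a susceptibility-weighted image from magnitude and phase volumes."""
    # Remove slowly varying background phase (a stand-in for homodyne filtering).
    phase_hp = phase - uniform_filter(phase, size=filter_size)
    # Negative phase mask: phase in [-pi, 0] maps linearly to [0, 1]; >= 0 -> 1.
    mask = np.clip((phase_hp + np.pi) / np.pi, 0.0, 1.0)
    mask[phase_hp >= 0] = 1.0
    return magnitude * mask ** mask_power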
Visual acuity in young elite motorsport athletes: a preliminary report.
Schneiders, Anthony G; Sullivan, S John; Rathbone, Emma J; Louise Thayer, A; Wallis, Laura M; Wilson, Alexandra E
2010-05-01
To determine whether elite motorsport athletes demonstrate superior levels of visual acuity compared with age- and sex-matched controls. A cross-sectional observational study. A university vision and balance laboratory. Young male motorsport athletes from the New Zealand Elite Motorsport Academy and healthy age- and sex-matched controls. Vision performance tests comprised: Static Visual Acuity (SVA), Dynamic Visual Acuity (DVA), the Gaze Stabilization Test (GST), and the Perception Time Test (PTT). Motorsport athletes demonstrated superior visual acuity compared to age- and sex-matched controls for all measures, and while this was not statistically significant for SVA, GST and DVA, it reached statistical significance for the PTT (p
Communicating Science Concepts to Individuals with Visual Impairments Using Short Learning Modules
ERIC Educational Resources Information Center
Stender, Anthony S.; Newell, Ryan; Villarreal, Eduardo; Swearer, Dayne F.; Bianco, Elisabeth; Ringe, Emilie
2016-01-01
Of the 6.7 million individuals in the United States who are visually impaired, 63% are unemployed, and 59% have not attained an education beyond a high school diploma. Providing a basic science education to children and adults with visual disabilities can be challenging because most scientific learning relies on visual demonstrations. Creating…
Novel Visualization of Large Health Related Data Sets
2014-03-01
demonstration of the visualization techniques and results from our earliest visualization, which used counts of the various data elements queried using...locations (e.g. areas with high pollen that increases the need for more intensive health care for people with asthma) and save millions of dollars
Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.
2006-01-01
The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.
Familiarity enhances visual working memory for faces.
Jackson, Margaret C; Raymond, Jane E
2008-06-01
Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.
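For context, visual WM capacity in change-detection designs is commonly summarized with Cowan's K; the small sketch below shows that estimator under the assumption that it matches the capacity measure reported above, with hypothetical numbers.

# Cowan's K, a standard capacity estimate for change-detection tasks
# (assumed here; the authors' exact estimator may differ).
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """K = set size * (hit rate - false alarm rate)."""
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical example: 85% hits, 10% false alarms, 4 faces per display.
print(cowan_k(0.85, 0.10, 4))  # -> 3.0 items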
‘Visual’ parsing can be taught quickly without visual experience during critical periods
Reich, Lior; Amedi, Amir
2015-01-01
Cases of invasive sight restoration in congenitally blind adults demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning visual-unique concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory-substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly impaired in sight-restored patients – can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours), SSD-‘vision’ training. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with SSD are intuitive, the blind participants' success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD-users' abilities to those reported for sight-restored patients who performed similar tasks visually, and who had months of eyesight. Intriguingly, the SSD-users outperformed the patients on most criteria tested. These results suggest that with adequate training and technologies, key high-order visual features can be quickly acquired in adulthood, and that lack of visual experience during critical periods can be somewhat compensated for. Practically, these findings highlight the potential of SSDs as standalone aids or combined with invasive restoration approaches. PMID:26482105
Rarefied flow diagnostics using pulsed high-current electron beams
NASA Technical Reports Server (NTRS)
Wojcik, Radoslaw M.; Schilling, John H.; Erwin, Daniel A.
1990-01-01
The use of high-current short-pulse electron beams in low-density gas flow diagnostics is introduced. Efficient beam propagation is demonstrated for pressure up to 300 microns. The beams, generated by low-pressure pseudospark discharges in helium, provide extremely high fluorescence levels, allowing time-resolved visualization in high-background environments. The fluorescence signal frequency is species-dependent, allowing instantaneous visualization of mixing flowfields.
Voytek, Bradley; Canolty, Ryan T.; Shestyuk, Avgusta; Crone, Nathan E.; Parvizi, Josef; Knight, Robert T.
2010-01-01
The phase of ongoing theta (4–8 Hz) and alpha (8–12 Hz) electrophysiological oscillations is coupled to high gamma (80–150 Hz) amplitude, which suggests that low-frequency oscillations modulate local cortical activity. While this phase–amplitude coupling (PAC) has been demonstrated in a variety of tasks and cortical regions, it has not been shown whether task demands differentially affect the regional distribution of the preferred low-frequency coupling to high gamma. To address this issue we investigated multiple-rhythm theta/alpha to high gamma PAC in two subjects with implanted subdural electrocorticographic grids. We show that high gamma amplitude couples to the theta and alpha troughs and demonstrate that, during visual tasks, alpha/high gamma coupling preferentially increases in visual cortical regions. These results suggest that low-frequency phase to high-frequency amplitude coupling is modulated by behavioral task and may reflect a mechanism for selection between communicating neuronal networks. PMID:21060716
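A common way to quantify the theta/alpha-to-high-gamma coupling described here is the mean-vector-length PAC measure; the sketch below is a generic single-channel version with assumed frequency bands and filter settings, not the authors' full multi-rhythm pipeline.

# Generic phase-amplitude coupling (mean vector length) for one channel;
# band edges and filter order are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(80, 150)):
    """Couple low-frequency phase to high-gamma amplitude in signal x."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))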
Kahn, Itamar; Wig, Gagan S.; Schacter, Daniel L.
2012-01-01
Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes. PMID:21968568
Stevens, W Dale; Kahn, Itamar; Wig, Gagan S; Schacter, Daniel L
2012-08-01
Asymmetrical specialization of cognitive processes across the cerebral hemispheres is a hallmark of healthy brain development and an important evolutionary trait underlying higher cognition in humans. While previous research, including studies of priming, divided visual field presentation, and split-brain patients, demonstrates a general pattern of right/left asymmetry of form-specific versus form-abstract visual processing, little is known about brain organization underlying this dissociation. Here, using repetition priming of complex visual scenes and high-resolution functional magnetic resonance imaging (MRI), we demonstrate asymmetrical form specificity of visual processing between the right and left hemispheres within a region known to be critical for processing of visual spatial scenes (parahippocampal place area [PPA]). Next, we use resting-state functional connectivity MRI analyses to demonstrate that this functional asymmetry is associated with differential intrinsic activity correlations of the right versus left PPA with regions critically involved in perceptual versus conceptual processing, respectively. Our results demonstrate that the PPA comprises lateralized subregions across the cerebral hemispheres that are engaged in functionally dissociable yet complementary components of visual scene analysis. Furthermore, this functional asymmetry is associated with differential intrinsic functional connectivity of the PPA with distinct brain areas known to mediate dissociable cognitive processes.
High-frequency neural oscillations and visual processing deficits in schizophrenia
Tan, Heng-Ru May; Lana, Luiz; Uhlhaas, Peter J.
2013-01-01
Visual information is fundamental to how we understand our environment, make predictions, and interact with others. Recent research has underscored the importance of visuo-perceptual dysfunctions for cognitive deficits and pathophysiological processes in schizophrenia. In the current paper, we review evidence for the relevance of high frequency (beta/gamma) oscillations towards visuo-perceptual dysfunctions in schizophrenia. In the first part of the paper, we examine the relationship between beta/gamma band oscillations and visual processing during normal brain functioning. We then summarize EEG/MEG-studies which demonstrate reduced amplitude and synchrony of high-frequency activity during visual stimulation in schizophrenia. In the final part of the paper, we identify neurobiological correlates as well as offer perspectives for future research to stimulate further inquiry into the role of high-frequency oscillations in visual processing impairments in the disorder. PMID:24130535
The Effect of Visual Perceptual Load on Auditory Awareness in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Tillmann, Julian; Olguin, Andrea; Tuomainen, Jyrki; Swettenham, John
2015-01-01
Recent work on visual selective attention has shown that individuals with Autism Spectrum Disorder (ASD) demonstrate an increased perceptual capacity. The current study examined whether increasing visual perceptual load also has less of an effect on auditory awareness in children with ASD. Participants performed either a high- or low load version…
BioMon: A Google Earth Based Continuous Biomass Monitoring System (Demo Paper)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vatsavai, Raju
2009-01-01
We demonstrate a novel Google Earth-based visualization system for continuous monitoring of biomass at regional and global scales. This system is integrated with a back-end spatiotemporal data mining system that continuously detects changes using high temporal resolution MODIS images. In addition to the visualization, we demonstrate novel query features of the system that provide insights into the current conditions of the landscape.
DOT National Transportation Integrated Search
2015-02-01
Utilizing enhanced visualization in transportation planning and design gained popularity in the last decade. This work aimed at demonstrating the concept of utilizing a highly immersive, virtual reality simulation engine for creating dynamic, inter...
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Photovoltaic restoration of sight with high visual acuity
Lorach, Henri; Goetz, Georges; Smith, Richard; Lei, Xin; Mandel, Yossi; Kamins, Theodore; Mathieson, Keith; Huie, Philip; Harris, James; Sher, Alexander; Palanker, Daniel
2015-01-01
Patients with retinal degeneration lose sight due to gradual demise of photoreceptors. Electrical stimulation of the surviving retinal neurons provides an alternative route for delivery of visual information. We demonstrate that subretinal arrays with 70 μm photovoltaic pixels provide highly localized stimulation, with electrical and visual receptive fields of comparable sizes in rat retinal ganglion cells. Similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies, adaptation to static images and non-linear spatial summation. In rats with retinal degeneration, these photovoltaic arrays provide spatial resolution of 64 ± 11 μm, corresponding to half of the normal visual acuity in pigmented rats. Ease of implantation of these wireless and modular arrays, combined with their high resolution opens the door to functional restoration of sight. PMID:25915832
Murray, Nicholas P; Hunfalvay, Melissa
2017-02-01
Considerable research has documented that successful performance in interceptive tasks (such as return of serve in tennis) is based on the performers' capability to capture appropriate anticipatory information prior to the flight path of the approaching object. Athletes of higher skill tend to fixate on different locations in the playing environment prior to initiation of a skill than their lesser skilled counterparts. The purpose of this study was to examine visual search behaviour strategies of elite (world ranked) tennis players and non-ranked competitive tennis players (n = 43) utilising cluster analysis. The results of hierarchical (Ward's method) and nonhierarchical (k-means) cluster analyses revealed three different clusters. The clustering method distinguished visual behaviour of high-, middle- and low-ranked players. Specifically, high-ranked players demonstrated longer mean fixation duration and lower variation of visual search than middle- and low-ranked players. In conclusion, the results demonstrated that cluster analysis is a useful tool for detecting and analysing the areas of interest for use in experimental analysis of expertise and to distinguish visual search variables among participants.
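The two-stage clustering described above (Ward's hierarchical method followed by k-means) can be sketched as follows; the gaze features and values are illustrative assumptions, not the study's data.

# Ward's hierarchical clustering followed by k-means on per-player gaze features
# (mean fixation duration, fixation variability); data are made up for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering, KMeans

gaze = np.array([[420, 1.2], [395, 1.5], [250, 3.8], [310, 2.9], [260, 4.1]])
z = StandardScaler().fit_transform(gaze)

ward_labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(z)
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(ward_labels, kmeans_labels)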
Multimode intravascular RF coil for MRI-guided interventions.
Kurpad, Krishna N; Unal, Orhan
2011-04-01
To demonstrate the feasibility of using a single intravascular radiofrequency (RF) probe connected to the external magnetic resonance imaging (MRI) system via a single coaxial cable to perform active tip tracking and catheter visualization and high signal-to-noise ratio (SNR) intravascular imaging. A multimode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. The multimode coil behaves as an inductively coupled transmit coil. The forward-looking capability of 6 mm was measured. A greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil was demonstrated. Simultaneous active tip tracking and catheter visualization was demonstrated. It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multimode intravascular RF coil that is connected to the external system via a single coaxial cable. Copyright © 2011 Wiley-Liss, Inc.
Multi-mode Intravascular RF Coil for MRI-guided Interventions
Kurpad, Krishna N.; Unal, Orhan
2011-01-01
Purpose To demonstrate the feasibility of using a single intravascular RF probe connected to the external MRI system via a single coaxial cable to perform active tip tracking and catheter visualization, and high SNR intravascular imaging. Materials and Methods A multi-mode intravascular RF coil was constructed on a 6F balloon catheter and interfaced to a 1.5T MRI scanner via a decoupling circuit. Bench measurements of coil impedances were followed by imaging experiments in saline and phantoms. Results The multi-mode coil behaves as an inductively-coupled transmit coil. Forward looking capability of 6mm is measured. Greater than 3-fold increase in SNR compared to conventional imaging using optimized external coil is demonstrated. Simultaneous active tip tracking and catheter visualization is demonstrated. Conclusions It is feasible to perform 1) active tip tracking, 2) catheter visualization, and 3) high SNR imaging using a single multi-mode intravascular RF coil that is connected to the external system via a single coaxial cable. PMID:21448969
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
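At the core of any DSSE platform is a state estimator that reconciles redundant, noisy measurements with a system model; the toy sketch below shows a weighted-least-squares step on a linear measurement model and is only a conceptual stand-in for the platform's estimator.

# Weighted least squares state estimation on a linear measurement model
# (toy values; a real feeder uses a nonlinear power-flow model solved iteratively).
import numpy as np

def wls_state_estimate(H, z, weights):
    """Minimize (z - Hx)' W (z - Hx) and return the state estimate x."""
    W = np.diag(weights)
    return np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])  # 3 measurements, 2 states
z = np.array([1.02, 0.98, 0.05])                     # e.g. per-unit voltage readings
print(wls_state_estimate(H, z, weights=[100, 100, 50]))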
ERIC Educational Resources Information Center
Gilbert, George L., Ed.
1986-01-01
Describes two demonstrations designed to help chemistry students visualize certain chemical properties. One experiment uses balloons to illustrate the behavior of gases under varying temperatures and pressures. The other uses a makeshift pea shooter and a commercial model to demonstrate atomic structure and the behavior of high-speed particles.…
Hoffmann, M B; Kaule, F; Grzeschik, R; Behrens-Baumann, W; Wolynski, B
2011-07-01
Since its initial introduction in the mid-1990s, retinotopic mapping of the human visual cortex, based on functional magnetic resonance imaging (fMRI), has contributed greatly to our understanding of the human visual system. Multiple cortical visual field representations have been demonstrated and thus numerous visual areas identified. The organisation of specific areas has been detailed and the impact of pathophysiologies of the visual system on the cortical organisation uncovered. These results are based on investigations at a magnetic field strength of 3 Tesla or less. In a field-strength comparison between 3 and 7 Tesla, it was demonstrated that retinotopic mapping benefits from a magnetic field strength of 7 Tesla. Specifically, the visual areas can be mapped with high spatial resolution for a detailed analysis of the visual field maps. Applications of fMRI-based retinotopic mapping in ophthalmological research hold promise to further our understanding of plasticity in the human visual cortex. This is highlighted by pioneering studies in patients with macular dysfunction or misrouted optic nerves. © Georg Thieme Verlag KG Stuttgart · New York.
Visual Learning Alters the Spontaneous Activity of the Resting Human Brain: An fNIRS Study
Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan
2014-01-01
Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning. PMID:25243168
Visual learning alters the spontaneous activity of the resting human brain: an fNIRS study.
Niu, Haijing; Li, Hao; Sun, Li; Su, Yongming; Huang, Jing; Song, Yan
2014-01-01
Resting-state functional connectivity (RSFC) has been widely used to investigate spontaneous brain activity that exhibits correlated fluctuations. RSFC has been found to be changed along the developmental course and after learning. Here, we investigated whether and how visual learning modified the resting oxygenated hemoglobin (HbO) functional brain connectivity by using functional near-infrared spectroscopy (fNIRS). We demonstrate that after five days of training on an orientation discrimination task constrained to the right visual field, resting HbO functional connectivity and directed mutual interaction between high-level visual cortex and frontal/central areas involved in the top-down control were significantly modified. Moreover, these changes, which correlated with the degree of perceptual learning, were not limited to the trained left visual cortex. We conclude that the resting oxygenated hemoglobin functional connectivity could be used as a predictor of visual learning, supporting the involvement of high-level visual cortex and the involvement of frontal/central cortex during visual perceptual learning.
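Resting-state HbO connectivity of the kind analyzed above reduces, at its simplest, to pairwise correlations between channel time series; the sketch below assumes synthetic data and omits the filtering and directed-interaction analyses used in the study.

# Channel-by-channel resting-state functional connectivity from HbO time series
# (synthetic data; preprocessing and directed measures are omitted).
import numpy as np

def rsfc_matrix(hbo):
    """hbo: (n_channels, n_timepoints) array; returns the channel x channel r matrix."""
    return np.corrcoef(hbo)

rng = np.random.default_rng(0)
hbo = rng.standard_normal((46, 3000))  # e.g. 46 channels, 5 min at 10 Hz
print(rsfc_matrix(hbo).shape)          # (46, 46)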
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-09-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
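The deviation-based utility idea can be illustrated with a small sketch: normalize a candidate view's grouped aggregates on the queried subset and on the reference data, then score their distance. The earth mover's distance used here is one plausible distance function; the data are made up for illustration.

# Deviation-based utility of a candidate visualization: distance between the
# normalized grouped aggregates of the target subset and the reference data.
import numpy as np
from scipy.stats import wasserstein_distance

def view_utility(target_agg, reference_agg):
    """Both inputs are per-category aggregates in the same category order."""
    t = np.asarray(target_agg, float)
    r = np.asarray(reference_agg, float)
    t, r = t / t.sum(), r / r.sum()
    support = np.arange(len(t))
    return wasserstein_distance(support, support, t, r)

# Hypothetical example: average sales by region, subset vs. whole table.
print(view_utility([10, 40, 5, 45], [25, 25, 25, 25]))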
Jeste, Shafali S; Kirkham, Natasha; Senturk, Damla; Hasenstab, Kyle; Sugar, Catherine; Kupelian, Chloe; Baker, Elizabeth; Sanders, Andrew J; Shimizu, Christina; Norona, Amanda; Paparella, Tanya; Freeman, Stephanny F N; Johnson, Scott P
2015-01-01
Statistical learning is characterized by detection of regularities in one's environment without an awareness or intention to learn, and it may play a critical role in language and social behavior. Accordingly, in this study we investigated the electrophysiological correlates of visual statistical learning in young children with autism spectrum disorder (ASD) using an event-related potential shape learning paradigm, and we examined the relation between visual statistical learning and cognitive function. Compared to typically developing (TD) controls, the ASD group as a whole showed reduced evidence of learning as defined by N1 (early visual discrimination) and P300 (attention to novelty) components. Upon further analysis, in the ASD group there was a positive correlation between N1 amplitude difference and non-verbal IQ, and a positive correlation between P300 amplitude difference and adaptive social function. Children with ASD and a high non-verbal IQ and high adaptive social function demonstrated a distinctive pattern of learning. This is the first study to identify electrophysiological markers of visual statistical learning in children with ASD. Through this work we have demonstrated heterogeneity in statistical learning in ASD that maps onto non-verbal cognition and adaptive social function. © 2014 John Wiley & Sons Ltd.
Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M
2012-04-05
The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002) to demonstrate the rapid manner in which mis-annotations can be found and explored in VESPA using either proteomics data alone, or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.
2012-01-01
Background The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. Results VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002) to demonstrate the rapid manner in which mis-annotations can be found and explored in VESPA using either proteomics data alone, or in combination with transcriptomic data. Conclusions VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php. PMID:22480257
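One of the checks such a tool supports, flagging proteomic evidence that falls outside annotated coding regions, can be sketched as a simple interval-overlap test; the coordinates and data structures below are illustrative assumptions, not VESPA's internal API.

# Flag peptide-evidence regions with no overlapping annotated gene as candidate
# mis-annotations (illustrative coordinates; not VESPA's data model).
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def unannotated_evidence(peptide_regions, gene_regions):
    """Both arguments are lists of (start, end) genome coordinates."""
    return [p for p in peptide_regions
            if not any(overlaps(*p, *g) for g in gene_regions)]

genes = [(100, 400), (900, 1500)]
peptides = [(150, 180), (620, 650), (1200, 1230)]
print(unannotated_evidence(peptides, genes))  # [(620, 650)] -> possible missed ORF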
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076
Campbell, Julia; Sharma, Anu
2016-01-01
Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5-7, 8-10, and 11-15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance.
Campbell, Julia; Sharma, Anu
2016-01-01
Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5–7, 8–10, and 11–15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
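The decoding analysis referred to above can be sketched as cross-validated classification of single-trial responses; the classifier, feature window, and dimensions below are assumptions rather than the authors' exact approach.

# Cross-validated decoding of speech-sound identity from single-trial responses
# (random data; classifier and feature choices are illustrative).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64 * 50))  # 200 trials, 64 channels x 50 samples
y = rng.integers(0, 4, size=200)         # 4 speech-sound categories

clf = make_pipeline(StandardScaler(), LinearSVC(C=0.01, max_iter=5000))
print(cross_val_score(clf, X, y, cv=5).mean())  # near chance (0.25) for random data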
Human microbiome visualization using 3D technology.
Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C
2011-01-01
High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
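The 3D heat map concept, species by patient with relative abundance as height, can be approximated outside a game engine with a standard 3D bar plot; the data below are illustrative, and matplotlib stands in for the interactive engine described above.

# Species x patient grid with relative abundance as bar height (illustrative data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
abundance = rng.dirichlet(np.ones(8), size=5)       # 5 patients x 8 species

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
xpos, ypos = np.meshgrid(np.arange(8), np.arange(5))
ax.bar3d(xpos.ravel(), ypos.ravel(), 0, 0.8, 0.8, abundance.ravel(), shade=True)
ax.set_xlabel("species")
ax.set_ylabel("patient")
ax.set_zlabel("relative abundance")
plt.show()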
Woodhead, Zoe Victoria Joan; Wise, Richard James Surtees; Sereno, Marty; Leech, Robert
2011-10-01
Different cortical regions within the ventral occipitotemporal junction have been reported to show preferential responses to particular objects. Thus, it is argued that there is evidence for a left-lateralized visual word form area and a right-lateralized fusiform face area, but the unique specialization of these areas remains controversial. Words are characterized by greater power in the high spatial frequency (SF) range, whereas faces comprise a broader range of high and low frequencies. We investigated how these high-order visual association areas respond to simple sine-wave gratings that varied in SF. Using functional magnetic resonance imaging, we demonstrated lateralization of activity that was concordant with the low-level visual property of words and faces; left occipitotemporal cortex is more strongly activated by high than by low SF gratings, whereas the right occipitotemporal cortex responded more to low than high spatial frequencies. Therefore, the SF of a visual stimulus may bias the lateralization of processing irrespective of its higher order properties.
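The stimuli used above, sine-wave gratings differing only in spatial frequency, are straightforward to generate; the sketch below uses illustrative sizes and frequencies.

# Sine-wave gratings parameterized by spatial frequency (cycles per image width);
# sizes and frequencies are illustrative.
import numpy as np

def grating(size=256, cycles=8, orientation_deg=0.0):
    """Return a 2D sine-wave grating with values in [-1, 1]."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    ramp = x * np.cos(theta) + y * np.sin(theta)
    return np.sin(2 * np.pi * cycles * ramp)

low_sf, high_sf = grating(cycles=2), grating(cycles=32)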
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark
2010-07-01
This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
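One soft-assignment variant in this line of work, the kernel codebook, replaces the single nearest-word vote with Gaussian-weighted votes to all words; the sketch below is a generic version with an assumed bandwidth, not the paper's exact formulation.

# Kernel-codebook style soft assignment: each descriptor casts Gaussian-weighted
# votes to every visual word instead of a single hard vote (bandwidth assumed).
import numpy as np

def soft_assign_histogram(descriptors, codebook, sigma=1.0):
    """descriptors: (n, d); codebook: (k, d); returns a normalized (k,) histogram."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (n, k)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # each descriptor's votes sum to one
    hist = w.sum(axis=0)
    return hist / hist.sum()

rng = np.random.default_rng(0)
hist = soft_assign_histogram(rng.standard_normal((500, 128)),
                             rng.standard_normal((64, 128)))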
Quality metrics in high-dimensional data visualization: an overview and systematization.
Bertini, Enrico; Tatu, Andrada; Keim, Daniel
2011-12-01
In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE
Matsuzaki, Naoyuki; Schwarzlose, Rebecca F.; Nishida, Masaaki; Ofen, Noa; Asano, Eishi
2015-01-01
Behavioral studies demonstrate that a face presented in the upright orientation attracts attention more rapidly than an inverted face. Saccades toward an upright face take place in 100-140 ms following presentation. The present study using electrocorticography determined whether upright face-preferential neural activation, as reflected by augmentation of high-gamma activity at 80-150 Hz, involved the lower-order visual cortex within the first 100 ms post-stimulus presentation. Sampled lower-order visual areas were verified by the induction of phosphenes upon electrical stimulation. These areas resided in the lateral-occipital, lingual, and cuneus gyri along the calcarine sulcus, roughly corresponding to V1 and V2. Measurement of high-gamma augmentation during central (circular) and peripheral (annular) checkerboard reversal pattern stimulation indicated that central-field stimuli were processed by the more polar surface whereas peripheral-field stimuli by the more anterior medial surface. Upright face stimuli, compared to inverted ones, elicited up to 23% larger augmentation of high-gamma activity in the lower-order visual regions at 40-90 ms. Upright face-preferential high-gamma augmentation was more highly correlated with high-gamma augmentation for central than peripheral stimuli. Our observations are consistent with the hypothesis that lower-order visual regions, especially those for the central field, are involved in visual cues for rapid detection of upright face stimuli. PMID:25579446
High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search
ERIC Educational Resources Information Center
Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.
2010-01-01
Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…
Nonretinotopic visual processing in the brain.
Melcher, David; Morrone, Maria Concetta
2015-01-01
A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
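The integration step can be sketched as building separate SIFT and SURF vocabularies, histogramming each image against both, and concatenating the two histograms; vocabulary sizes are assumptions, and SURF requires an opencv-contrib (non-free) build.

# Visual-words integration sketch: separate SIFT and SURF vocabularies, with the
# two per-image histograms concatenated into one descriptor.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_list, k):
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(np.vstack(descriptor_list))

def word_histogram(descriptors, vocab):
    words = vocab.predict(descriptors)
    h = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return h / h.sum()

sift = cv2.SIFT_create()
surf = cv2.xfeatures2d.SURF_create()  # opencv-contrib (non-free) build required

def integrated_descriptor(gray_img, sift_vocab, surf_vocab):
    _, d_sift = sift.detectAndCompute(gray_img, None)
    _, d_surf = surf.detectAndCompute(gray_img, None)
    return np.concatenate([word_histogram(d_sift, sift_vocab),
                           word_histogram(d_surf, surf_vocab)])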
StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.
Li, Chenhui; Baciu, George; Han, Yu
2018-03-01
Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
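The density-aggregation and blending ideas can be sketched with a fixed Gaussian kernel on a binned grid; the adaptive ("super") kernel and the morphing algorithm proper are replaced by simpler stand-ins, and grid size and bandwidth are assumptions.

# Aggregate streaming 2D points into a smooth density grid, then linearly blend
# two frames as a crude stand-in for the paper's density morphing step.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_frame(points, grid=256, bandwidth=3.0):
    """points: (n, 2) array with coordinates in [0, 1); returns a grid x grid density."""
    hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                bins=grid, range=[[0, 1], [0, 1]])
    return gaussian_filter(hist, sigma=bandwidth)

def blend(frame_a, frame_b, t):
    return (1 - t) * frame_a + t * frame_b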
Amblyopia in Astigmatic Children: Patterns of Deficits
Harvey, Erin M.; Dobson, Velma; Miller, Joseph M.; Clifford-Donaldson, Candice E.
2007-01-01
Neural changes that result from disruption of normal visual experience during development are termed amblyopia. To characterize visual deficits specific to astigmatism-related amblyopia, we compared best-corrected visual performance in 330 astigmatic and 475 non-astigmatic kindergarten through 6th grade children. Astigmatism was associated with deficits in letter, grating and vernier acuity, high and middle spatial frequency contrast sensitivity, and stereoacuity. Although grating acuity, vernier acuity, and contrast sensitivity were reduced across stimulus orientation, astigmats demonstrated orientation-dependent deficits (meridional amblyopia) only for grating acuity. Astigmatic children are at risk for deficits across a range of visual functions. PMID:17184807
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework which focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. In addition, a global multi-resolution virtual terrain environment is needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized and the video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
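The following is a minimal CPU sketch of the general idea behind ray-casting volume rendering with a transfer function, as mentioned above for the Argo data. It is illustrative only: the volume is synthetic, and the GPU implementation, the uneven multi-section handling, and the globe projection are all omitted.

import numpy as np

def transfer_function(values):
    """Map scalar values in [0, 1] to RGBA; warm values are redder and more opaque."""
    rgba = np.zeros(values.shape + (4,))
    rgba[..., 0] = values                                # red tracks the scalar
    rgba[..., 2] = 1.0 - values                          # blue for low values
    rgba[..., 3] = np.clip(values - 0.5, 0, 1) * 0.2     # opacity ramp
    return rgba

def raycast(volume):
    """Front-to-back compositing along the z axis of a (Z, Y, X) scalar volume."""
    image = np.zeros(volume.shape[1:] + (3,))
    alpha = np.zeros(volume.shape[1:])
    for z_slice in volume:                               # march along each ray
        rgba = transfer_function(z_slice)
        weight = (1.0 - alpha)[..., None] * rgba[..., 3:4]
        image += weight * rgba[..., :3]
        alpha += weight[..., 0]
    return image

volume = np.random.rand(64, 128, 128)                    # stand-in for gridded Argo data
picture = raycast(volume)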
Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century
NASA Astrophysics Data System (ADS)
Bartsch, H.; Erlebacher, G.
2003-12-01
amira (www.amiravis.com) is a general purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be visually assembled in an interactive manner to create complex visual displays. These modules and their associated user interfaces are controlled either through a mouse, or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a java-applet interface to Amira that allows script editing via the web, and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529--532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.
Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming
2018-02-28
The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
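A generic encoding-model sketch in the spirit of the approach described above (not the study's code): features from a pretrained deep residual network are regressed onto a voxel's response with ridge regression. The network (torchvision's ResNet-18), the synthetic frames, and the synthetic voxel signal are all stand-ins, and a recent torchvision with the weights API is assumed.

import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)        # stand-in network
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])   # drop the classifier
feature_extractor.eval()

frames = torch.rand(120, 3, 224, 224)                    # synthetic "movie" frames
with torch.no_grad():
    feats = feature_extractor(frames).flatten(1).numpy() # (120, 512) feature matrix

rng = np.random.default_rng(0)
voxel = feats @ rng.normal(size=feats.shape[1]) + rng.normal(scale=0.1, size=len(feats))  # synthetic voxel response

encoder = Ridge(alpha=10.0).fit(feats[:90], voxel[:90])          # train on the first 90 frames
print("held-out R^2:", encoder.score(feats[90:], voxel[90:]))    # predict the remainder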
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Harvey, Ben M; Dumoulin, Serge O
2016-02-15
Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
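For readers unfamiliar with pRF modeling, the sketch below shows the standard forward model that such measurements typically build on (a hedged illustration, not the authors' analysis code): a 2D Gaussian pRF is overlapped with a binary stimulus aperture at each time point and convolved with a canonical HRF. The grid, the sweeping-bar stimulus, and the HRF parameters are arbitrary examples.

import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Simple two-gamma canonical hemodynamic response function."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def prf_prediction(x0, y0, sigma, apertures, xs, ys, tr=1.0):
    """apertures: (T, H, W) binary masks; xs, ys: (H, W) visual-field coordinates."""
    prf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    neural = apertures.reshape(len(apertures), -1).astype(float) @ prf.ravel()
    h = hrf(np.arange(0, 30, tr))
    bold = np.convolve(neural, h)[: len(neural)]
    return bold / np.max(np.abs(bold))

# toy example: a vertical bar sweeping across a 101 x 101 visual-field grid
xs, ys = np.meshgrid(np.linspace(-10, 10, 101), np.linspace(-10, 10, 101))
apertures = np.stack([(np.abs(xs - pos) < 1.0) for pos in np.linspace(-10, 10, 60)])
pred = prf_prediction(x0=2.0, y0=-1.0, sigma=1.5, apertures=apertures, xs=xs, ys=ys)

Fitting then amounts to searching over x0, y0, and sigma for the prediction that best matches each voxel's measured time course.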
Functional neuronal processing of body odors differs from that of similar common odors.
Lundström, Johan N; Boyle, Julie A; Zatorre, Robert J; Jones-Gotman, Marilyn
2008-06-01
Visual and auditory stimuli of high social and ecological importance are processed in the brain by specialized neuronal networks. To date, this has not been demonstrated for olfactory stimuli. By means of positron emission tomography, we sought to elucidate the neuronal substrates behind body odor perception to answer the question of whether the central processing of body odors differs from perceptually similar nonbody odors. Body odors were processed by a network that was distinctly separate from common odors, indicating a separation in the processing of odors based on their source. Smelling a friend's body odor activated regions previously seen for familiar stimuli, whereas smelling a stranger activated amygdala and insular regions akin to what has previously been demonstrated for fearful stimuli. The results provide evidence that social olfactory stimuli of high ecological relevance are processed by specialized neuronal networks similar to what has previously been demonstrated for auditory and visual stimuli.
The rapid terrain visualization interferometric synthetic aperture radar sensor
NASA Astrophysics Data System (ADS)
Graham, Robert H.; Bickel, Douglas L.; Hensley, William H.
2003-11-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real-time. The system is being flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
Correction of Refractive Errors in Rhesus Macaques (Macaca mulatta) Involved in Visual Research
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-01-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals. PMID:25427343
Handwriting generates variable visual output to facilitate symbol learning.
Li, Julia X; James, Karin H
2016-03-01
Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change though brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Brain processing of visual information during fast eye movements maintains motor performance.
Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis
2013-01-01
Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.
Oscillatory encoding of visual stimulus familiarity.
Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A
2018-06-18
Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.
A Summary of Proceedings for the Advanced Deployable Day/Night Simulation Symposium
2009-07-01
The Advanced Deployable Day/Night Simulation (ADDNS) Technology Demonstration Project was initiated to design, develop, and deliver transportable visual simulations that jointly provide night-vision and high-resolution daylight capability. Among those named in the proceedings were Dr. Richard Wildes (York University), Mr. Vitaly Zholudev (Department of Computer Science, York University), and Mr. X. Zhu (Neptec Design Group).
An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.
Magen, Hagit
2017-03-01
Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.
Crewther, David P; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A; Crewther, Sheila G
2015-12-01
Saccadic suppression-the reduction of visual sensitivity during rapid eye movements-has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35-45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80-160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement.
Xie, Jun; Xu, Guanghua; Luo, Ailing; Li, Min; Zhang, Sicong; Han, Chengcheng; Yan, Wenqiang
2017-08-14
As a spatial selective attention-based brain-computer interface (BCI) paradigm, the steady-state visual evoked potential (SSVEP) BCI has the advantages of a high information transfer rate, high tolerance to artifacts, and robust performance across users. However, its benefits come at the cost of the mental load and fatigue that accompany sustained concentration on the visual stimuli. Noise, as a ubiquitous random perturbation, may be exploited by the human visual system to enhance higher-level brain functions. In this study, a novel steady-state motion visual evoked potential (SSMVEP, i.e., one kind of SSVEP)-based BCI paradigm with spatiotemporal visual noise was used to investigate the influence of noise on the compensation of mental load and fatigue deterioration during prolonged attention tasks. Changes in α, θ, and θ + α powers, the θ/α ratio, and electroencephalography (EEG) properties of amplitude, signal-to-noise ratio (SNR), and online accuracy were used to evaluate mental load and fatigue. We showed that presenting moderate visual noise to participants could reliably alleviate mental load and fatigue during online operation of a visual BCI that places demands on attentional processes. This demonstrates that noise could provide a superior solution for implementing visual attention-controlled BCI applications.
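As an illustration of the kind of SNR measure commonly used to evaluate SSVEP/SSMVEP responses (a hedged sketch, not the authors' pipeline), the snippet below compares spectral power at the stimulus frequency with the mean power of neighboring frequency bins; the sampling rate, stimulus frequency, and synthetic signal are arbitrary.

import numpy as np

def ssvep_snr(eeg, fs, stim_freq, n_neighbors=10):
    """eeg: 1D signal (one channel, one trial); returns SNR at stim_freq."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    target = np.argmin(np.abs(freqs - stim_freq))
    neighbors = np.r_[target - n_neighbors:target, target + 1:target + n_neighbors + 1]
    return spectrum[target] / spectrum[neighbors].mean()

# toy check: a 15 Hz oscillation buried in noise, 4 s at 250 Hz sampling
fs, f0 = 250, 15.0
t = np.arange(0, 4, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * f0 * t) + np.random.randn(t.size)
print(ssvep_snr(eeg, fs, f0))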
Inattentional Deafness: Visual Load Leads to Time-Specific Suppression of Auditory Evoked Responses
Molloy, Katharine; Griffiths, Timothy D.; Lavie, Nilli
2015-01-01
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”—the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness. SIGNIFICANCE STATEMENT The present work clarifies the neural underpinning of inattentional deafness under high visual load. The findings of near-simultaneous load effects on both visual and auditory evoked responses suggest shared audiovisual processing capacity. Temporary depletion of shared capacity in perceptually demanding visual tasks leads to a momentary reduction in sensory processing of auditory stimuli, resulting in inattentional deafness. The dynamic “push–pull” pattern of load effects on visual and auditory processing furthers our understanding of both the neural mechanisms of attention and of cross-modal effects across visual and auditory processing. These results also offer an explanation for many previous failures to find cross-modal effects in experiments where the visual load effects may not have coincided directly with auditory sensory processing. PMID:26658858
Sci-Fin: Visual Mining Spatial and Temporal Behavior Features from Social Media
Pu, Jiansu; Teng, Zhiyao; Gong, Rui; Wen, Changjiang; Xu, Yang
2016-01-01
Check-in records are usually available in social services, which offer us the opportunity to capture and analyze users’ spatial and temporal behaviors. Mining such behavior features is essential to social analysis and business intelligence. However, the complexity and incompleteness of check-in records bring challenges to achieve such a task. Different from the previous work on social behavior analysis, in this paper, we present a visual analytics system, Social Check-in Fingerprinting (Sci-Fin), to facilitate the analysis and visualization of social check-in data. We focus on three major components of user check-in data: location, activity, and profile. Visual fingerprints for location, activity, and profile are designed to intuitively represent the high-dimensional attributes. To visually mine and demonstrate the behavior features, we integrate WorldMapper and Voronoi Treemap into our glyph-like designs. Such visual fingerprint designs offer us the opportunity to summarize the interesting features and patterns from different check-in locations, activities and users (groups). We demonstrate the effectiveness and usability of our system by conducting extensive case studies on real check-in data collected from a popular microblogging service. Finally, interesting findings are reported and discussed. PMID:27999398
Arakawa, Takahiro; Ando, Eri; Wang, Xin; Miyajima, Kumiko; Kudo, Hiroyuki; Saito, Hirokazu; Mitani, Tomoyo; Takahashi, Mitsuo; Mitsubayashi, Kohji
2012-01-01
A two-dimensional gaseous ethanol visualization system has been developed and demonstrated using a horseradish peroxidase-luminol-hydrogen peroxide system with high-purity luminol solution and a chemiluminescence (CL) enhancer. This system measures ethanol concentrations as intensities of CL via the luminol reaction. CL was emitted when the gaseous ethanol was injected onto an enzyme-immobilized membrane, which was employed as a screen for two-dimensional gas visualization. The average intensity of CL on the substrate was linearly related to the concentration of standard ethanol gas. These results were compared with the CL intensity of the CCD camera recording image in the visualization system. The system allows not only spatial but also temporal analysis of gas components in real time. A high-purity sodium salt HG solution (L-HG), instead of standard luminol solution, and an enhancer, eosin Y (EY) solution, were adopted to improve the CL intensity of the system. The visualization of gaseous ethanol was achieved at a detection limit of 3 ppm at optimized concentrations of L-HG solution and EY. Copyright © 2011 John Wiley & Sons, Ltd.
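The reported linear relation between mean CL intensity and ethanol concentration amounts to a simple calibration fit; the snippet below shows such a fit with entirely made-up numbers (not the paper's data), purely to illustrate how a concentration would be read back from a measured intensity.

import numpy as np

ethanol_ppm = np.array([3, 10, 30, 60, 100])            # standard gas concentrations (illustrative)
cl_intensity = np.array([2.1, 6.8, 20.5, 41.0, 68.9])   # mean CCD counts (synthetic)

slope, intercept = np.polyfit(ethanol_ppm, cl_intensity, 1)
predicted = slope * ethanol_ppm + intercept
r_squared = 1 - np.sum((cl_intensity - predicted) ** 2) / np.sum((cl_intensity - cl_intensity.mean()) ** 2)
print(f"concentration ~ (intensity - {intercept:.2f}) / {slope:.3f}, R^2 = {r_squared:.3f}")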
Time reversal focusing of elastic waves in plates for an educational demonstration.
Heaton, Christopher; Anderson, Brian E; Young, Sarah M
2017-02-01
The purpose of this research is to develop a visual demonstration of time reversal focusing of vibrations in a thin plate. Various plate materials are tested to provide optimal conditions for time reversal focusing. Specifically, the reverberation time in each plate and the vibration coupling efficiency from a shaker to the plate are quantified to illustrate why a given plate provides the best spatially confined focus as well as the highest focal amplitude possible. A single vibration speaker and a scanning laser Doppler vibrometer (SLDV) are used to provide the time reversal focusing. Table salt is sprinkled onto the plate surface to allow visualization of the high amplitude, spatially localized time reversal focus; the salt is thrown upward only at the focal position. Spatial mapping of the vibration focusing on the plate using the SLDV is correlated to the visual salt jumping demonstration. The time reversal focusing is also used to knock over an object when the object is placed at the focal position; some discussion of optimal objects to use for this demonstration is given.
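The focusing principle can be illustrated numerically (a conceptual sketch under simplified assumptions, not the experimental code): a long, decaying random impulse response stands in for the reverberant plate, and re-emitting its time-reversed version compresses the energy into a single large peak at the focal point, unlike an arbitrary drive signal.

import numpy as np

rng = np.random.default_rng(1)
impulse_response = rng.normal(size=4000) * np.exp(-np.arange(4000) / 1500.0)   # reverberant "plate"

tr_signal = impulse_response[::-1]                  # time-reversed recording used as the drive
focused = np.convolve(tr_signal, impulse_response)  # re-propagation to the focal point
ordinary = np.convolve(rng.normal(size=4000), impulse_response)  # unfocused control drive

print("focal peak / background (time reversal):",
      np.max(np.abs(focused)) / np.std(focused))
print("focal peak / background (random drive): ",
      np.max(np.abs(ordinary)) / np.std(ordinary))

The longer the reverberation (slower decay), the larger the compressed peak relative to the background, which is consistent with the role the abstract assigns to reverberation time.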
Studying the Frictional Force Directions via Bristles
ERIC Educational Resources Information Center
Prasitpong, S.; Chitaree, R.; Rakkapao, S.
2010-01-01
We present simple apparatus designed to help Thai high school students visualize the directions of frictional forces. Bristles of toothbrushes, paintbrushes and scrubbing brushes are used to demonstrate the frictional forces acting in a variety of situations. These demonstrations, when followed by discussion of free-body diagrams, were found to be…
ERIC Educational Resources Information Center
Akakandelwa, Akakandelwa; Munsanje, Joseph
2012-01-01
The aim of this study was to determine the provision of learning and teaching materials for pupils with visual impairment in basic and high schools of Zambia. A survey approach utilizing a questionnaire, interviews and a review of the literature was adopted for the study. The findings demonstrated that most schools in Zambia did not provide…
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
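A toy numerical illustration of the ensemble-coding idea referenced above (not a model from the paper): summary statistics computed over per-member feature values stand in for perceived group-level properties such as diversity and hierarchy; the features and group size are invented.

import numpy as np

rng = np.random.default_rng(7)
group = {
    "race_code": rng.integers(0, 3, size=12),     # categorical feature per group member
    "dominance": rng.normal(0.0, 1.0, size=12),   # continuous dominance cue per member
}

diversity = len(np.unique(group["race_code"])) / 3.0   # proportion of categories present in the group
hierarchy = group["dominance"].std()                   # spread of dominance cues across members
print(f"group diversity ~ {diversity:.2f}, group hierarchy ~ {hierarchy:.2f}")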
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results: Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions: Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Square or sine: finding a waveform with high success rate of eliciting SSVEP.
Teng, Fei; Chen, Yixin; Choong, Aik Min; Gustafson, Scott; Reichley, Christopher; Lawhead, Pamela; Waddell, Dwight
2011-01-01
Steady state visual evoked potential (SSVEP) is the brain's natural electrical potential response for visual stimuli at specific frequencies. Using a visual stimulus flashing at some given frequency will entrain the SSVEP at the same frequency, thereby allowing determination of the subject's visual focus. The faster an SSVEP is identified, the higher information transmission rate the system achieves. Thus, an effective stimulus, defined as one with high success rate of eliciting SSVEP and high signal-noise ratio, is desired. Also, researchers observed that harmonic frequencies often appear in the SSVEP at a reduced magnitude. Are the harmonics in the SSVEP elicited by the fundamental stimulating frequency or by the artifacts of the stimuli? In this paper, we compare the SSVEP responses of three periodic stimuli: square wave (with different duty cycles), triangle wave, and sine wave to find an effective stimulus. We also demonstrate the connection between the strength of the harmonics in SSVEP and the type of stimulus.
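One point relevant to the harmonics question is that a square-wave stimulus itself carries odd harmonics of the flicker frequency, while a sine-wave stimulus does not. The short snippet below (with arbitrary example parameters, not the study's stimuli) makes that explicit by inspecting the spectra of the two waveforms.

import numpy as np
from scipy import signal

fs, f0, T = 1000, 10.0, 4.0                 # sampling rate, flicker frequency, duration
t = np.arange(0, T, 1.0 / fs)
square = signal.square(2 * np.pi * f0 * t, duty=0.5)
sine = np.sin(2 * np.pi * f0 * t)

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
for name, wave in [("square", square), ("sine", sine)]:
    spectrum = np.abs(np.fft.rfft(wave)) / t.size
    harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))] for k in (1, 2, 3)]
    print(name, "amplitude at 1f/2f/3f:", np.round(harmonics, 3))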
High Resolution Visualization Applied to Future Heavy Airlift Concept Development and Evaluation
NASA Technical Reports Server (NTRS)
FordCook, A. B.; King, T.
2012-01-01
This paper explores the use of high resolution 3D visualization tools for exploring the feasibility and advantages of future military cargo airlift concepts and evaluating compatibility with existing and future payload requirements. Realistic 3D graphic representations of future airlifters are immersed in rich, supporting environments to demonstrate concepts of operations to key personnel for evaluation, feedback, and development of critical joint support. Accurate concept visualizations are reviewed by commanders, platform developers, loadmasters, soldiers, scientists, engineers, and key principal decision makers at various stages of development. The insight gained through the review of these physically and operationally realistic visualizations is essential to refining design concepts to meet competing requirements in a fiscally conservative defense finance environment. In addition, highly accurate 3D geometric models of existing and evolving large military vehicles are loaded into existing and proposed aircraft cargo bays. In this virtual aircraft test-loading environment, materiel developers, engineers, managers, and soldiers can realistically evaluate the compatibility of current and next-generation airlifters with proposed cargo.
Visualizing Dynamic Bitcoin Transaction Patterns.
McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J
2016-06-01
This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.
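A tiny stand-in for the force-directed transaction view described above (the authors' system is a large-scale real-time facility; this only shows the basic idea): addresses and transactions become graph nodes, value flows become weighted edges, and networkx's spring layout provides the force-directed placement. The transaction data below are invented.

import networkx as nx
import matplotlib.pyplot as plt

edges = [                      # (input node, output node, BTC value) -- made-up data
    ("addr_A", "tx_1", 0.5), ("tx_1", "addr_B", 0.3), ("tx_1", "addr_C", 0.2),
    ("addr_B", "tx_2", 0.3), ("tx_2", "addr_D", 0.29),
]
G = nx.DiGraph()
for src, dst, btc in edges:
    G.add_edge(src, dst, weight=btc)

pos = nx.spring_layout(G, weight="weight", seed=42)   # force-directed node placement
nx.draw_networkx(G, pos, node_size=300,
                 width=[5 * d["weight"] for _, _, d in G.edges(data=True)])
plt.axis("off")
plt.show()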
Top-down influence on the visual cortex of the blind during sensory substitution
Murphy, Matthew C.; Nau, Amy C.; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S.; Chan, Kevin C.
2017-01-01
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. PMID:26584776
Slit-lamp photography and videography with high magnifications
Yuan, Jin; Jiang, Hong; Mao, Xinjie; Ke, Bilian; Yan, Wentao; Liu, Che; Cintrón-Colón, Hector R; Perez, Victor L; Wang, Jianhua
2015-01-01
Purpose: To demonstrate the use of slit-lamp photography and videography with extremely high magnifications for visualizing structures of the anterior segment of the eye. Methods: A Canon 60D digital camera with Movie Crop Function was adapted to a Nikon FS-2 slit-lamp to capture still images and video clips of the structures of the anterior segment of the eye. Images obtained using the slit-lamp were tested for spatial resolution. The cornea of human eyes was imaged with the slit-lamp and the structures were compared with the pictures captured using ultra-high resolution optical coherence tomography (UHR-OCT). The central thickness of the corneal epithelium and total cornea was obtained using the slit-lamp and the results were compared with the thickness obtained using UHR-OCT. Results: High-quality ocular images and higher spatial resolutions were obtained by using the slit-lamp with extremely high magnifications and Movie Crop Function, compared with the traditional slit-lamp. The structures and characteristics of the cornea, such as the normal epithelium, abnormal epithelium of corneal intraepithelial neoplasia, LASIK interface, and contact lenses, were clearly visualized using this device. These features were confirmed by comparing the obtained images with those acquired using UHR-OCT. Moreover, the tear film debris on the ocular surface and the corneal nerve in the anterior corneal stroma were also visualized. The thicknesses of the corneal epithelium and total cornea were similar to those measured using UHR-OCT (P < 0.05). Conclusions: We demonstrated that slit-lamp photography and videography with extremely high magnifications allows better visualization of the anterior segment structures of the eye, especially of the epithelium, when compared with the traditional slit-lamp. PMID:26020484
Wang, Sheng; Ding, Miao; Chen, Xuanze; Chang, Lei; Sun, Yujie
2017-01-01
Direct visualization of protein-protein interactions (PPIs) at high spatial and temporal resolution in live cells is crucial for understanding the intricate and dynamic behaviors of signaling protein complexes. Recently, bimolecular fluorescence complementation (BiFC) assays have been combined with super-resolution imaging techniques including PALM and SOFI to visualize PPIs at the nanometer spatial resolution. RESOLFT nanoscopy has been proven as a powerful live-cell super-resolution imaging technique. With regard to the detection and visualization of PPIs in live cells with high temporal and spatial resolution, here we developed a BiFC assay using split rsEGFP2, a highly photostable and reversibly photoswitchable fluorescent protein previously developed for RESOLFT nanoscopy. Combined with parallelized RESOLFT microscopy, we demonstrated the high spatiotemporal resolving capability of a rsEGFP2-based BiFC assay by detecting and visualizing specifically the heterodimerization interactions between Bcl-xL and Bak as well as the dynamics of the complex on mitochondria membrane in live cells. PMID:28663931
Perceptual and cognitive biases in individuals with body dysmorphic disorder symptoms.
Clerkin, Elise M; Teachman, Bethany A
2008-01-01
Given the extreme focus on perceived physical defects in body dysmorphic disorder (BDD), we expected that perceptual and cognitive biases related to physical appearance would be associated with BDD symptomology. To examine these hypotheses, participants (N = 70) high and low in BDD symptoms completed tasks assessing visual perception and cognition. As expected, there were significant group differences in self-, but not other-, relevant cognitive biases. Perceptual bias results were mixed, with some evidence indicating that individuals high (versus low) in BDD symptoms literally see themselves in a less positive light. Further, individuals high in BDD symptoms failed to demonstrate a normative self-enhancement bias. Overall, this research points to the importance of assessing both cognitive and perceptual biases associated with BDD symptoms, and suggests that visual perception may be influenced by non-visual factors.
Activity in early visual areas predicts interindividual differences in binocular rivalry dynamics
Yamashiro, Hiroyuki; Mano, Hiroaki; Umeda, Masahiro; Higuchi, Toshihiro; Saiki, Jun
2013-01-01
When dissimilar images are presented to the two eyes, binocular rivalry (BR) occurs, and perception alternates spontaneously between the images. Although neural correlates of the oscillating perception during BR have been found in multiple sites along the visual pathway, the source of BR dynamics is unclear. Psychophysical and modeling studies suggest that both low- and high-level cortical processes underlie BR dynamics. Previous neuroimaging studies have demonstrated the involvement of high-level regions by showing that frontal and parietal cortices responded time locked to spontaneous perceptual alternation in BR. However, a potential contribution of early visual areas to BR dynamics has been overlooked, because these areas also responded to the physical stimulus alternation mimicking BR. In the present study, instead of focusing on activity during perceptual switches, we highlighted brain activity during suppression periods to investigate a potential link between activity in human early visual areas and BR dynamics. We used a strong interocular suppression paradigm called continuous flash suppression to suppress and fluctuate the visibility of a probe stimulus and measured retinotopic responses to the onset of the invisible probe using functional MRI. There were ∼130-fold differences in the median suppression durations across 12 subjects. The individual differences in suppression durations could be predicted by the amplitudes of the retinotopic activity in extrastriate visual areas (V3 and V4v) evoked by the invisible probe. Weaker responses were associated with longer suppression durations. These results demonstrate that retinotopic representations in early visual areas play a role in the dynamics of perceptual alternations during BR. PMID:24353304
Impaired Driving Performance as Evidence of a Magnocellular Deficit in Dyslexia and Visual Stress.
Fisher, Carri; Chekaluk, Eugene; Irwin, Julia
2015-11-01
High comorbidity and an overlap in symptomology have been demonstrated between dyslexia and visual stress. Several researchers have hypothesized an underlying or causal influence that may account for this relationship. The magnocellular theory of dyslexia proposes that a deficit in visuo-temporal processing can explain symptomology for both disorders. If the magnocellular theory holds true, individuals who experience symptomology for these disorders should show impairment on a visuo-temporal task, such as driving. Eighteen male participants formed the sample for this study. Self-report measures assessed dyslexia and visual stress symptomology as well as participant IQ. Participants completed a drive simulation in which errors in response to road signs were measured. Bivariate correlations revealed significant associations between scores on measures of dyslexia and visual stress. Results also demonstrated that self-reported symptomology predicts magnocellular impairment as measured by performance on a driving task. Results from this study suggest that a magnocellular deficit offers a likely explanation for individuals who report high symptomology across both conditions. While conclusions about the impact of these disorders on driving performance should not be derived from this research alone, this study provides a platform for the development of future research, utilizing a clinical population and on-road driving assessment techniques. Copyright © 2015 John Wiley & Sons, Ltd.
Alsenaidy, Mohammad A.; Kim, Jae Hyun; Majumdar, Ranajoy; Weis, David D.; Joshi, Sangeeta B.; Tolbert, Thomas J.; Middaugh, C. Russell; Volkin, David B.
2013-01-01
The structural integrity and conformational stability of an IgG1 monoclonal antibody (mAb), after partial and complete enzymatic removal of the N-linked Fc glycan, were compared to the untreated mAb over a wide range of temperature (10°C to 90°C) and solution pH (3 to 8) using circular dichroism, fluorescence spectroscopy, and static light scattering combined with data visualization employing empirical phase diagrams (EPDs). Subtle to larger stability differences between the different glycoforms were observed. Improved detection of physical stability differences was then demonstrated over a narrower pH range (4.0-6.0) using smaller temperature increments, especially when combined with an alternative data visualization method (radar plots). Differential scanning calorimetry and differential scanning fluorimetry were then utilized and also showed an improved ability to detect differences in mAb glycoform physical stability. Based on these results, a two-step methodology was used in which mAb glycoform conformational stability is first screened with a wide variety of instruments and environmental stresses, followed by a second evaluation with optimally sensitive experimental conditions, analytical techniques and data visualization methods. With this approach, high-throughput biophysical analysis to assess relatively subtle conformational stability differences in protein glycoforms is demonstrated. PMID:24114789
Dye-enhanced visualization of rat whiskers for behavioral studies.
Rigosa, Jacopo; Lucantonio, Alessandro; Noselli, Giovanni; Fassihi, Arash; Zorzin, Erik; Manzino, Fabrizio; Pulecchi, Francesca; Diamond, Mathew E
2017-06-14
Visualization and tracking of the facial whiskers is required in an increasing number of rodent studies. Although many approaches have been employed, only high-speed videography has proven adequate for measuring whisker motion and deformation during interaction with an object. However, whisker visualization and tracking is challenging for multiple reasons, primary among them the low contrast of the whisker against its background. Here, we demonstrate a fluorescent dye method suitable for visualization of one or more rat whiskers. The process makes the dyed whisker(s) easily visible against a dark background. The coloring does not influence the behavioral performance of rats trained on a vibrissal vibrotactile discrimination task, nor does it affect the whiskers' mechanical properties.
Peng, Huamao; Gao, Yue; Mao, Xiaofei
2017-02-01
To explore the roles of visual function and cognitive load in aging of inhibition, the present study adopted a 2 (visual perceptual stress: noise, nonnoise) × 2 (cognitive load: low, high) × 2 (age: young, old) mixed design. The Stroop task was adopted to measure inhibition. The task presentation was masked with Gaussian noise according to the visual function of each individual in order to match visual perceptual stress between age groups. The results indicated that age differences in the Stroop effect were influenced by visual function and cognitive load. When the cognitive load was low, older adults exhibited a larger Stroop effect than did younger adults in the nonnoise condition, and this age difference disappeared when the visual noise of the 2 age groups was matched. Conversely, in the high cognitive load condition, we observed significant age differences in the Stroop effect in both the nonnoise and noise conditions. The additional cognitive load made the age differences in the Stroop task reappear even when visual perceptual stress was equivalent. These results demonstrate that visual function plays an important role in the aging of inhibition and its role is moderated by cognitive load. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Efficient Unsteady Flow Visualization with High-Order Access Dependencies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru
We present a novel high-order access dependencies based model for efficient pathline computation in unsteady flow visualization. By taking longer access sequences into account to model more sophisticated data access patterns in particle tracing, our method greatly improves the accuracy and reliability in data access prediction. In our work, high-order access dependencies are calculated by tracing uniformly-seeded pathlines in both forward and backward directions in a preprocessing stage. The effectiveness of our proposed approach is demonstrated through a parallel particle tracing framework with high-order data prefetching. Results show that our method achieves higher data locality and hence improves the efficiency of pathline computation.
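The access-dependency idea above can be illustrated with a small sketch: treat each data block touched during particle tracing as a symbol, learn k-th order transition statistics from the pre-traced pathlines, and query them to prefetch likely next blocks. This is a minimal illustration under assumed data structures (integer block IDs, traces as lists), not the authors' implementation.

```python
from collections import Counter, defaultdict

def build_dependencies(traces, order=3):
    """Learn high-order access dependencies: map the last `order` block IDs
    visited along a pathline to a histogram of the next block ID."""
    deps = defaultdict(Counter)
    for trace in traces:                      # each trace: list of block IDs along one pathline
        for i in range(order, len(trace)):
            history = tuple(trace[i - order:i])
            deps[history][trace[i]] += 1
    return deps

def predict_next(deps, history):
    """Return block IDs to prefetch, most likely first; empty if the history is unseen."""
    hist = deps.get(tuple(history))
    return [block for block, _ in hist.most_common(2)] if hist else []

# dependencies are learned once from uniformly seeded pathlines, then queried while tracing
deps = build_dependencies([[0, 1, 2, 3, 4], [0, 1, 2, 5, 6]], order=3)
print(predict_next(deps, [1, 2, 3]))          # -> [4]
```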
On the Treatment of Field Quantities and Elemental Continuity in FEM Solutions.
Jallepalli, Ashok; Docampo-Sanchez, Julia; Ryan, Jennifer K; Haimes, Robert; Kirby, Robert M
2018-01-01
As the finite element method (FEM) and the finite volume method (FVM), both traditional and high-order variants, continue their proliferation into various applied engineering disciplines, it is important that the visualization techniques and corresponding data analysis tools that act on the results produced by these methods faithfully represent the underlying data. To state this another way: the interpretation of data generated by simulation needs to be consistent with the numerical schemes that underpin the specific solver technology. As the verifiable visualization literature has demonstrated, visual artifacts produced by the introduction of either explicit or implicit data transformations, such as data resampling, can sometimes distort or even obfuscate key scientific features in the data. In this paper, we focus on the handling of elemental continuity, which is often only C0 continuous or piecewise discontinuous, when visualizing primary or derived fields from FEM or FVM simulations. We demonstrate that traditional data handling and visualization of these fields introduce visual errors. In addition, we show how the use of the recently proposed line-SIAC filter provides a way of handling elemental continuity issues in an accuracy-conserving manner with the added benefit of casting the data in a smooth context even if the representation is element discontinuous.
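For readers unfamiliar with SIAC filtering, the post-processed field is a convolution of the piecewise (possibly discontinuous) numerical solution with a compactly supported B-spline kernel. The symmetric form below is the standard one from the SIAC literature; the notation is generic, and the line-SIAC variant used in the paper restricts the convolution to a line through the multidimensional domain.

```latex
u^{\star}(\bar{x}) = \frac{1}{H}\int_{\mathbb{R}} K^{(2k+1,\,k+1)}\!\left(\frac{\bar{x}-x}{H}\right) u_h(x)\,\mathrm{d}x ,
\qquad
K^{(2k+1,\,k+1)}(x) = \sum_{\gamma=-k}^{k} c_{\gamma}\, \psi^{(k+1)}(x-\gamma)
```

Here u_h is the numerical solution, H is a scaling on the order of the element size, psi^(k+1) is the B-spline of order k+1, and the 2k+1 coefficients c_gamma are chosen so that the kernel reproduces polynomials up to degree 2k; the convolution therefore smooths the field without degrading the accuracy of the underlying approximation.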
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capturing) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
Fernandez, Nicolas F.; Gundersen, Gregory W.; Rahman, Adeeb; Grimes, Mark L.; Rikova, Klarisa; Hornbeck, Peter; Ma’ayan, Avi
2017-01-01
Most tools developed to visualize hierarchically clustered heatmaps generate static images. Clustergrammer is a web-based visualization tool with interactive features such as zooming, panning, filtering, reordering, sharing, performing enrichment analysis, and providing dynamic gene annotations. Clustergrammer can be used to generate shareable interactive visualizations by uploading a data table to a website, or by embedding Clustergrammer in Jupyter Notebooks. The Clustergrammer core libraries can also be used as a toolkit by developers to generate visualizations within their own applications. Clustergrammer is demonstrated using gene expression data from the cancer cell line encyclopedia (CCLE), original post-translational modification data collected from lung cancer cell lines by a mass spectrometry approach, and original cytometry by time of flight (CyTOF) single-cell proteomics data from blood. Clustergrammer enables producing interactive web-based visualizations for the analysis of diverse biological data. PMID:28994825
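As a usage illustration, the notebook embedding mentioned above looks roughly like the sketch below. The names (Network, clustergrammer_widget, load_file, cluster, widget) follow the clustergrammer_widget package's public documentation as recalled here and should be treated as assumptions; the input file name is a hypothetical placeholder.

```python
# run inside a Jupyter Notebook cell, after `pip install clustergrammer_widget`
from clustergrammer_widget import *          # documented usage exposes Network and clustergrammer_widget

net = Network(clustergrammer_widget)          # network object wired to the notebook widget class
net.load_file('expression_matrix.txt')        # hypothetical tab-separated matrix (rows x samples)
net.cluster()                                 # hierarchically cluster rows and columns
net.widget()                                  # render the interactive, zoomable heatmap inline
```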
Högner, N
2015-08-01
Blind and visually impaired people face particular risks and hazards in road traffic, whether they participate as drivers, bicycle riders or pedestrians. These risks are documented by a review of international research studies and by a study by the author in which 45 people with Usher syndrome were asked about their accident rates and causes as drivers, bicycle riders and pedestrians. In addition, basic legal information is summarized to clarify the visual requirements for participation in road traffic as they apply to people with visual impairment. The research studies show that blind and visually impaired persons are exposed to particularly high risks in traffic. These risks can be reduced through the acquisition of skills and coping strategies, such as training in orientation and mobility. People with visual impairment need special programmes that help to reduce traffic hazards. Georg Thieme Verlag KG Stuttgart · New York.
Zhao, Jisong
2018-05-17
Wall shear stress is an important quantity in fluid mechanics, but its measurement is a challenging task. An approach to measure wall shear stress vector distribution using shear-sensitive liquid crystal coating (SSLCC) is described. The wall shear stress distribution on the test surface beneath a high-speed jet flow is measured using the proposed technique. The flow structures inside the jet flow are captured and the results agree well with the streakline pattern that was visualized using the oil-flow technique. In addition, the shock diamonds inside the supersonic jet flow are visualized clearly using SSLCC and the results are compared with the velocity contour that was measured using the particle image velocimetry (PIV) technique. The work of this paper demonstrates the application of SSLCC to the measurement/visualization of wall shear stress in high-speed flow.
Measurement of Wall Shear Stress in High Speed Air Flow Using Shear-Sensitive Liquid Crystal Coating
Zhao, Jisong
2018-01-01
Wall shear stress is an important quantity in fluid mechanics, but its measurement is a challenging task. An approach to measure wall shear stress vector distribution using shear-sensitive liquid crystal coating (SSLCC) is described. The wall shear stress distribution on the test surface beneath a high-speed jet flow is measured using the proposed technique. The flow structures inside the jet flow are captured and the results agree well with the streakline pattern that was visualized using the oil-flow technique. In addition, the shock diamonds inside the supersonic jet flow are visualized clearly using SSLCC and the results are compared with the velocity contour that was measured using the particle image velocimetry (PIV) technique. The work of this paper demonstrates the application of SSLCC to the measurement/visualization of wall shear stress in high-speed flow. PMID:29772822
Local matrix learning in clustering and applications for manifold visualization.
Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara
2010-05-01
Electronic data sets are increasing rapidly with respect to both the size of the data sets and the data resolution, i.e., dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters with an arbitrary spherical shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to a low-dimensional space. We demonstrate the usefulness of this method for data inspection and manifold visualization. 2009 Elsevier Ltd. All rights reserved.
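To make the idea of clusters with locally adapted matrices concrete, here is a k-means-style sketch in which each cluster learns its own metric matrix from the local covariance (determinant normalized to 1), so clusters can take ellipsoidal shapes. This is a generic illustration of local matrix adaptation, not the neural-gas plus manifold-charting method of the paper; all names and the update schedule are assumptions.

```python
import numpy as np

def matrix_kmeans(X, k, n_iter=50, rng=np.random.default_rng(0)):
    """Clustering with a per-cluster metric: distance to prototype j is the quadratic
    form (x - w_j)^T Lambda_j (x - w_j), with Lambda_j adapted to the local covariance."""
    n, d = X.shape
    W = X[rng.choice(n, k, replace=False)].copy()        # prototypes
    L = np.stack([np.eye(d)] * k)                        # per-cluster metric matrices
    for _ in range(n_iter):
        # assign each point to the prototype with the smallest adapted distance
        dists = np.stack([np.einsum("nd,de,ne->n", X - W[j], L[j], X - W[j])
                          for j in range(k)], axis=1)
        z = dists.argmin(axis=1)
        for j in range(k):
            pts = X[z == j]
            if len(pts) <= d:
                continue                                  # too few points to adapt the matrix
            W[j] = pts.mean(axis=0)
            C = np.cov(pts.T) + 1e-6 * np.eye(d)          # regularized local covariance
            Lj = np.linalg.inv(C)
            L[j] = Lj / np.linalg.det(Lj) ** (1.0 / d)    # normalize so det(Lambda_j) = 1
    return W, L, z
```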
A portfolio of products from the rapid terrain visualization interferometric SAR
NASA Astrophysics Data System (ADS)
Bickel, Douglas L.; Doerry, Armin W.
2007-04-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor was built by Sandia National Laboratories for the Joint Programs Sustainment and Development (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieved better than HRTe Level IV position accuracy in near real-time. The system was flown on a deHavilland DHC-7 Army aircraft. This paper presents a collection of images and data products from the Rapid Terrain Visualization interferometric synthetic aperture radar. The imagery includes orthorectified images and DEMs from the RTV interferometric SAR radar.
Top-down influence on the visual cortex of the blind during sensory substitution.
Murphy, Matthew C; Nau, Amy C; Fisher, Christopher; Kim, Seong-Gi; Schuman, Joel S; Chan, Kevin C
2016-01-15
Visual sensory substitution devices provide a non-surgical and flexible approach to vision rehabilitation in the blind. These devices convert images taken by a camera into cross-modal sensory signals that are presented as a surrogate for direct visual input. While previous work has demonstrated that the visual cortex of blind subjects is recruited during sensory substitution, the cognitive basis of this activation remains incompletely understood. To test the hypothesis that top-down input provides a significant contribution to this activation, we performed functional MRI scanning in 11 blind (7 acquired and 4 congenital) and 11 sighted subjects under two conditions: passive listening of image-encoded soundscapes before sensory substitution training and active interpretation of the same auditory sensory substitution signals after a 10-minute training session. We found that the modulation of visual cortex activity due to active interpretation was significantly stronger in the blind over sighted subjects. In addition, congenitally blind subjects showed stronger task-induced modulation in the visual cortex than acquired blind subjects. In a parallel experiment, we scanned 18 blind (11 acquired and 7 congenital) and 18 sighted subjects at rest to investigate alterations in functional connectivity due to visual deprivation. The results demonstrated that visual cortex connectivity of the blind shifted away from sensory networks and toward known areas of top-down input. Taken together, our data support the model of the brain, including the visual system, as a highly flexible task-based and not sensory-based machine. Copyright © 2015 Elsevier Inc. All rights reserved.
An algorithm for encryption of secret images into meaningful images
NASA Astrophysics Data System (ADS)
Kanso, A.; Ghebleh, M.
2017-03-01
Image encryption algorithms typically transform a plain image into a noise-like cipher image, whose appearance is an indication of encrypted content. Bao and Zhou [Image encryption: Generating visually meaningful encrypted images, Information Sciences 324, 2015] propose encrypting the plain image into a visually meaningful cover image. This improves security by masking the existence of encrypted content. Following their approach, we propose a lossless visually meaningful image encryption scheme which improves Bao and Zhou's algorithm by making the encrypted content, i.e. distortions to the cover image, more difficult to detect. Empirical results are presented to show the high quality of the resulting images and the high security of the proposed algorithm. Competence of the proposed scheme is further demonstrated by means of comparison with Bao and Zhou's scheme.
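The general encrypt-then-embed idea behind visually meaningful encryption can be sketched as follows: first turn the plain image into a noise-like cipher image, then hide that cipher in the low-order bits of a larger cover image so the result still looks like an ordinary photograph. The sketch below is a deliberately simplified illustration (XOR keystream plus 2-bit least-significant-bit embedding); neither Bao and Zhou's scheme nor the improved scheme of the paper works this way, and the keystream here is not cryptographically secure. It assumes uint8 arrays and a cover with at least four times as many values as the plain image has pixels.

```python
import numpy as np

def keystream(shape, key):
    """Pseudo-random byte stream derived from an integer key (illustrative only)."""
    rng = np.random.default_rng(key)
    return rng.integers(0, 256, size=shape, dtype=np.uint8)

def encrypt(plain, key):
    """XOR the plain image with a keystream to get a noise-like cipher image.
    XOR is its own inverse, so the same call also decrypts."""
    return plain ^ keystream(plain.shape, key)

def embed(cipher, cover):
    """Hide the cipher image in the 2 least-significant bits of a larger cover image."""
    bits = np.unpackbits(cipher.ravel())                 # cipher as a flat bit array
    pairs = bits.reshape(-1, 2)                          # two bits stored per cover value
    assert pairs.shape[0] <= cover.size, "cover image too small"
    stego = cover.ravel().copy()
    payload = pairs[:, 0] * 2 + pairs[:, 1]              # values 0..3
    stego[:payload.size] = (stego[:payload.size] & 0xFC) | payload
    return stego.reshape(cover.shape)

def extract(stego, cipher_shape):
    """Recover the cipher image from the stego image's low bits."""
    n_bits = int(np.prod(cipher_shape)) * 8
    payload = stego.ravel()[:n_bits // 2] & 0x03
    bits = np.stack([(payload >> 1) & 1, payload & 1], axis=1).ravel().astype(np.uint8)
    return np.packbits(bits).reshape(cipher_shape)

# round trip: encrypt(extract(embed(encrypt(img, key), cover), img.shape), key) == img
```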
Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer.
Vogel, Sven C; Biwer, Chris M; Rogers, David H; Ahrens, James P; Hackenberg, Robert E; Onken, Drew; Zhang, Jianzhong
2018-06-01
A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U-Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download.
Interactive visualization of multi-data-set Rietveld analyses using Cinema:Debye-Scherrer
Biwer, Chris M.; Rogers, David H.; Ahrens, James P.; Hackenberg, Robert E.; Onken, Drew; Zhang, Jianzhong
2018-01-01
A tool named Cinema:Debye-Scherrer to visualize the results of a series of Rietveld analyses is presented. The multi-axis visualization of the high-dimensional data sets resulting from powder diffraction analyses allows identification of analysis problems, prediction of suitable starting values, identification of gaps in the experimental parameter space and acceleration of scientific insight from the experimental data. The tool is demonstrated with analysis results from 59 U–Nb alloy samples with different compositions, annealing times and annealing temperatures as well as with a high-temperature study of the crystal structure of CsPbBr3. A script to extract parameters from a series of Rietveld analyses employing the widely used GSAS Rietveld software is also described. Both software tools are available for download. PMID:29896062
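The multi-axis view of many Rietveld refinements described above can be approximated outside the Cinema tool with an ordinary parallel-coordinates plot, which conveys why such a view exposes outliers and gaps in the experimental parameter space. This is not the Cinema:Debye-Scherrer software; the file and column names below are hypothetical placeholders for any table of per-sample refinement results.

```python
import pandas as pd
import plotly.express as px

# one row per Rietveld analysis; column names are illustrative placeholders
df = pd.read_csv("rietveld_results.csv")     # e.g. exported from a batch of GSAS runs
fig = px.parallel_coordinates(
    df,
    dimensions=["anneal_temp_C", "anneal_time_h", "lattice_a", "phase_fraction", "Rwp"],
    color="Rwp",                             # color by fit quality to spot problem refinements
)
fig.show()
```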
Primacy and Recency Effects for Taste
ERIC Educational Resources Information Center
Daniel, Thomas A.; Katz, Jeffrey S.
2018-01-01
Historically, much of what we know about human memory has been discovered in experiments using visual and verbal stimuli. In two experiments, participants demonstrated reliably high recognition for nonverbal liquids. In Experiment 1, participants showed high accuracy for recognizing tastes (bitter, salty, sour, sweet) over a 30-s delay in a…
Ultrasound visual feedback in articulation therapy following partial glossectomy.
Blyth, Katrina M; Mccabe, Patricia; Madill, Catherine; Ballard, Kirrie J
2016-01-01
Disordered speech is common following treatment for tongue cancer; however, there is insufficient high-quality evidence to guide clinical decision making about treatment. This study investigated the use of ultrasound tongue imaging as a visual feedback tool to guide tongue placement during articulation therapy with two participants following partial glossectomy. A Phase I multiple-baseline design across behaviors was used to investigate the therapeutic effect of ultrasound visual feedback during speech rehabilitation. Percent consonants correct and speech intelligibility at sentence level were used to measure acquisition, generalization and maintenance of speech skills for treated and untreated related phonemes, while unrelated phonemes were tested to demonstrate experimental control. Swallowing and oromotor measures were also taken to monitor change. Sentence intelligibility was not a sensitive measure of speech change, but both participants demonstrated significant change in percent consonants correct for treated phonemes. One participant also demonstrated generalization to non-treated phonemes. Control phonemes along with swallow and oromotor measures remained stable throughout the study. This study establishes the therapeutic benefit of ultrasound visual feedback in speech rehabilitation following partial glossectomy. Readers will be able to explain why and how tongue cancer surgery impacts articulation precision. Readers will also be able to explain the acquisition, generalization and maintenance effects in the study. Copyright © 2016. Published by Elsevier Inc.
Computing and visualizing time-varying merge trees for high-dimensional data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
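A merge tree in the sense used above can be built with a single union-find sweep over the vertices sorted by scalar value. The sketch below computes the superlevel-set merge tree of a scalar field given as per-vertex values and an undirected edge list; the augmentation, simplification, and temporal subtree matching described in the abstract are omitted, and ties in values are ignored for brevity.

```python
import numpy as np

def merge_tree(values, edges):
    """Superlevel-set merge tree. Returns arcs as (maximum-of-merging-branch, saddle)
    vertex pairs: the branch whose maximum is lower ends where two components meet."""
    values = np.asarray(values, dtype=float)
    adj = {v: [] for v in range(len(values))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    parent, rep, arcs = {}, {}, []            # union-find forest, component maxima, tree arcs

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]     # path compression
            v = parent[v]
        return v

    for v in np.argsort(-values):             # sweep vertices from high to low value
        v = int(v)
        parent[v], rep[v] = v, v              # a new component is born at its maximum v
        for u in adj[v]:
            if u not in parent:               # neighbor not yet swept
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            mu, mv = rep[ru], rep[rv]
            child = mu if values[mu] < values[mv] else mv
            arcs.append((child, v))           # branch with the lower maximum ends at saddle v
            parent[ru] = rv                   # merge the two components
            rep[rv] = mu if values[mu] >= values[mv] else mv
    return arcs
```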
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
Dormal, Giulia; Lepore, Franco; Harissi-Dagher, Mona; Albouy, Geneviève; Bertone, Armando; Rossion, Bruno
2014-01-01
Visual deprivation leads to massive reorganization in both the structure and function of the occipital cortex, raising crucial challenges for sight restoration. We tracked the behavioral, structural, and neurofunctional changes occurring in an early and severely visually impaired patient before and 1.5 and 7 mo after sight restoration with magnetic resonance imaging. Robust presurgical auditory responses were found in occipital cortex despite residual preoperative vision. In primary visual cortex, crossmodal auditory responses overlapped with visual responses and remained elevated even 7 mo after surgery. However, these crossmodal responses decreased in extrastriate occipital regions after surgery, together with improved behavioral vision and with increases in both gray matter density and neural activation in low-level visual regions. Selective responses in high-level visual regions involved in motion and face processing were observable even before surgery and did not evolve after surgery. Taken together, these findings demonstrate that structural and functional reorganization of occipital regions are present in an individual with a long-standing history of severe visual impairment and that such reorganizations can be partially reversed by visual restoration in adulthood. PMID:25520432
An optimized web-based approach for collaborative stereoscopic medical visualization
Kaspar, Mathias; Parsad, Nigel M; Silverstein, Jonathan C
2013-01-01
Objective: Medical visualization tools have traditionally been constrained to tethered imaging workstations or proprietary client viewers, typically part of hospital radiology systems. To improve accessibility to real-time, remote, interactive, stereoscopic visualization and to enable collaboration among multiple viewing locations, we developed an open source approach requiring only a standard web browser with no added client-side software. Materials and Methods: Our collaborative, web-based, stereoscopic, visualization system, CoWebViz, has been used successfully for the past 2 years at the University of Chicago to teach immersive virtual anatomy classes. It is a server application that streams server-side visualization applications to client front-ends, comprised solely of a standard web browser with no added software. Results: We describe optimization considerations, usability, and performance results, which make CoWebViz practical for broad clinical use. We clarify technical advances including: enhanced threaded architecture, optimized visualization distribution algorithms, a wide range of supported stereoscopic presentation technologies, and the salient theoretical and empirical network parameters that affect our web-based visualization approach. Discussion: The implementations demonstrate usability and performance benefits of a simple web-based approach for complex clinical visualization scenarios. Using this approach overcomes technical challenges that require third-party web browser plug-ins, resulting in the most lightweight client. Conclusions: Compared to special software and hardware deployments, unmodified web browsers enhance remote user accessibility to interactive medical visualization. Whereas local hardware and software deployments may provide better interactivity than remote applications, our implementation demonstrates that a simplified, stable, client approach using standard web browsers is sufficient for high quality three-dimensional, stereoscopic, collaborative and interactive visualization. PMID:23048008
Conceptual Distinctiveness Supports Detailed Visual Long-Term Memory for Real-World Objects
Konkle, Talia; Brady, Timothy F.; Alvarez, George A.; Oliva, Aude
2012-01-01
Humans have a massive capacity to store detailed information in visual long-term memory. The present studies explored the fidelity of these visual long-term memory representations and examined how conceptual and perceptual features of object categories support this capacity. Observers viewed 2,800 object images with a different number of exemplars presented from each category. At test, observers indicated which of 2 exemplars they had previously studied. Memory performance was high and remained quite high (82% accuracy) with 16 exemplars from a category in memory, demonstrating a large memory capacity for object exemplars. However, memory performance decreased as more exemplars were held in memory, implying systematic categorical interference. Object categories with conceptually distinctive exemplars showed less interference in memory as the number of exemplars increased. Interference in memory was not predicted by the perceptual distinctiveness of exemplars from an object category, though these perceptual measures predicted visual search rates for an object target among exemplars. These data provide evidence that observers’ capacity to remember visual information in long-term memory depends more on conceptual structure than perceptual distinctiveness. PMID:20677899
Integrated genome browser: visual analytics platform for genomics.
Freese, Nowlan H; Norris, David C; Loraine, Ann E
2016-07-15
Genome browsers that support fast navigation through vast datasets and provide interactive visual analytics functions can help scientists achieve deeper insight into biological systems. Toward this end, we developed Integrated Genome Browser (IGB), a highly configurable, interactive and fast open source desktop genome browser. Here we describe multiple updates to IGB, including all-new capabilities to display and interact with data from high-throughput sequencing experiments. To demonstrate, we describe example visualizations and analyses of datasets from RNA-Seq, ChIP-Seq and bisulfite sequencing experiments. Understanding results from genome-scale experiments requires viewing the data in the context of reference genome annotations and other related datasets. To facilitate this, we enhanced IGB's ability to consume data from diverse sources, including Galaxy, Distributed Annotation and IGB-specific Quickload servers. To support future visualization needs as new genome-scale assays enter wide use, we transformed the IGB codebase into a modular, extensible platform for developers to create and deploy all-new visualizations of genomic data. IGB is open source and is freely available from http://bioviz.org/igb (contact: aloraine@uncc.edu). © The Author 2016. Published by Oxford University Press.
Activation of color-selective areas of the visual cortex in a blind synesthete.
Steven, Megan S; Hansen, Peter C; Blakemore, Colin
2006-02-01
Many areas of the visual cortex are activated when blind people are stimulated naturally through other sensory modalities (e.g., haptically; Sadato et al., 1996). While this extraneous activation of visual areas via other senses in normal blind people might have functional value (Kauffman et al., 2002; Lessard et al., 1998), it does not lead to conscious visual experiences. On the other hand, electrical stimulation of the primary visual cortex in the blind does produce illusory visual phosphenes (Brindley and Lewin, 1968). Here we provide the first evidence that high-level visual areas not only retain their specificity for particular visual characteristics in people who have been blind for long periods, but that activation of these areas can lead to visual sensations. We used fMRI to demonstrate activity in visual cortical areas specifically related to illusory colored and spatially located visual percepts in a synesthetic man who has been completely blind for 10 years. No such differential activations were seen in late-blind or sighted non-synesthetic controls; neither were these areas activated during color-imagery in the late-blind synesthete, implying that this subject's synesthesia is truly a perceptual experience.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use the known height of the camera above the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
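The scale-correction step described above (known camera height over an estimated ground plane) reduces to a simple computation once 3D points have been triangulated up to scale: fit a plane to candidate ground points, take the camera-to-plane distance, and rescale so that distance equals the known mounting height. The sketch below uses a plain RANSAC plane fit; the tolerance, iteration count, and function names are illustrative assumptions, and the paper's per-frame cue combination and covariance adaptation are not reproduced.

```python
import numpy as np

def ground_plane_height(points, n_iters=200, tol=0.02, seed=0):
    """RANSAC plane fit to triangulated 3D points expressed in the camera frame
    (camera at the origin). Returns the distance from the camera to the best plane."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    best_inliers, best_h = -1, None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                              # degenerate (collinear) sample
        n /= norm
        d = -n @ p0                               # plane: n.x + d = 0
        inliers = int((np.abs(points @ n + d) < tol).sum())
        if inliers > best_inliers:
            best_inliers, best_h = inliers, abs(d)  # |d| = camera-to-plane distance
    return best_h

def metric_scale(ground_points, known_camera_height):
    """Factor that converts the up-to-scale reconstruction to metric units."""
    return known_camera_height / ground_plane_height(ground_points)
```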
Brain-computer interface for alertness estimation and improving
NASA Astrophysics Data System (ADS)
Hramov, Alexander; Maksimenko, Vladimir; Hramova, Marina
2018-02-01
Using wavelet analysis of signals of electrical brain activity (EEG), we study the processes of neural activity associated with the perception of visual stimuli. We demonstrate that the brain can process visual stimuli in two scenarios: (i) perception is characterized by destruction of the alpha waves and an increase in high-frequency (beta) activity, or (ii) the beta rhythm is not well pronounced, while the alpha-wave energy remains unchanged. Additional experiments show that the motivation factor initiates the first scenario, which is explained by increased alertness. Based on the obtained results, we built a brain-computer interface and demonstrate how the degree of alertness can be estimated and controlled in a real experiment.
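A minimal version of the alpha/beta analysis described above can be written with the PyWavelets continuous wavelet transform: estimate energy in the alpha (8-12 Hz) and beta (15-30 Hz) bands and use their ratio as a crude alertness index. The band edges, the Morlet wavelet, and the ratio itself are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pywt

def band_energy(eeg, fs, f_lo, f_hi, wavelet="morl"):
    """Mean wavelet energy of one EEG channel within a frequency band (Hz)."""
    freqs = np.arange(f_lo, f_hi + 0.5, 0.5)
    fc = pywt.central_frequency(wavelet)
    scales = fc / (freqs / fs)                             # scale that targets each frequency
    coefs, _ = pywt.cwt(eeg, scales, wavelet, sampling_period=1.0 / fs)
    return float(np.mean(np.abs(coefs) ** 2))

def alertness_index(eeg, fs):
    """Crude index: stronger beta relative to alpha is read as the 'alert' scenario (i)."""
    return band_energy(eeg, fs, 15, 30) / band_energy(eeg, fs, 8, 12)
```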
Shih, W. J.; Magoun, S.; Lu, G.
1996-01-01
A Tc-99m DISIDA cholescintigraphic study of a 37-year-old patient with a 20-year history of quadriplegia demonstrated dilation of the common bile duct and delayed gallbladder visualization. A concurrent sonographic study showed an enlarged gallbladder with stones and dilation of the common bile duct. These findings were proved by autopsy. Quadriplegia secondary to a high level of spinal cord injury may result in gallbladder dysfunction. PMID:8776068
A no-reference image and video visual quality metric based on machine learning
NASA Astrophysics Data System (ADS)
Frantc, Vladimir; Voronin, Viacheslav; Semenishchev, Evgenii; Minkin, Maxim; Delov, Aliy
2018-04-01
The paper presents a novel visual quality metric for lossy compressed video quality assessment. A high degree of correlation with subjective quality estimates is achieved by using a convolutional neural network trained on a large number of pairs of video sequences and subjective quality scores. We demonstrate how our predicted no-reference quality metric correlates with qualitative opinion in a human observer study. Results are shown on the EVVQ dataset with comparison to existing approaches.
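The training setup described (a convolutional network regressed onto subjective scores) can be sketched in PyTorch as below. The layer sizes, patch size, and per-frame averaging are placeholders chosen for brevity; the paper's actual architecture and training data are not reproduced.

```python
import torch
import torch.nn as nn

class NRQualityNet(nn.Module):
    """Minimal no-reference quality regressor: a small CNN maps a frame (or patch)
    to a scalar score; trained with MSE against subjective scores."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):                     # x: (batch, 3, H, W) decoded frames
        return self.head(self.features(x).flatten(1)).squeeze(1)

# one training step: per-frame scores are averaged over a sequence and regressed
# onto that sequence's subjective quality score (e.g. a mean opinion score)
model, loss_fn = NRQualityNet(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 64, 64)             # one short sequence (illustrative)
subjective = torch.tensor(3.7)                # hypothetical subjective score
opt.zero_grad()
loss = loss_fn(model(frames).mean(), subjective)
loss.backward()
opt.step()
```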
Effect of fixation positions on perception of lightness
NASA Astrophysics Data System (ADS)
Toscani, Matteo; Valsecchi, Matteo; Gegenfurtner, Karl R.
2015-03-01
Visual acuity, luminance sensitivity, contrast sensitivity, and color sensitivity are maximal in the fovea and decrease with retinal eccentricity. Therefore, every scene is perceived by integrating the small, high-resolution samples collected by moving the eyes around. Moreover, when viewing ambiguous figures the fixated position influences the dominance of the possible percepts. Therefore, fixations could serve as a selection mechanism whose function is not confined to finely resolving the selected detail of the scene. Here this hypothesis is tested in the lightness perception domain. In a first series of experiments we demonstrated that when observers matched the color of natural objects they based their lightness judgments on the objects' brightest parts. During this task the observers tended to fixate points with above-average luminance, suggesting a relationship between perception and fixations that we causally proved using a gaze-contingent display in a subsequent experiment. Simulations with rendered physical lighting show that higher values in an object's luminance distribution are particularly informative about reflectance. In a second series of experiments we considered a high-level strategy that the visual system uses to segment the visual scene into a layered representation. We demonstrated that eye movement sampling mediates between the layer segregation and its effects on lightness perception. Together these studies show that eye fixations are partially responsible for the selection of information from a scene that allows the visual system to estimate the reflectance of a surface.
Majdzadeh, Ali; Lee, Anthony M D; Wang, Hequn; Lui, Harvey; McLean, David I; Crawford, Richard I; Zloty, David; Zeng, Haishan
2015-05-01
Recent advances in biomedical optics have enabled dermal and epidermal components to be visualized at subcellular resolution and assessed noninvasively. Multiphoton microscopy (MPM) and reflectance confocal microscopy (RCM) are noninvasive imaging modalities that have demonstrated promising results in imaging skin micromorphology, and which provide complementary information regarding skin components. This study assesses whether combined MPM/RCM can visualize intracellular and extracellular melanin granules in the epidermis and dermis of normal human skin. We perform MPM and RCM imaging of in vivo and ex vivo skin in the infrared domain. The inherent three-dimensional optical sectioning capability of MPM/RCM is used to image high-contrast granular features across skin depths ranging from 50 to 90 μm. The optical images thus obtained were correlated with conventional histologic examination including melanin-specific staining of ex vivo specimens. MPM revealed highly fluorescent granular structures below the dermal-epidermal junction (DEJ) region. Histochemical staining also demonstrated melanin-containing granules that correlate well in size and location with the granular fluorescent structures observed in MPM. Furthermore, the MPM fluorescence excitation wavelength and RCM reflectance of cell culture-derived melanin were equivalent to those of the granules. This study suggests that MPM can noninvasively visualize and quantify subepidermal melanin in situ. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.
Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara
2017-01-01
Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.
Impact of emerging technologies on future combat aircraft agility
NASA Technical Reports Server (NTRS)
Nguyen, Luat T.; Gilert, William P.
1990-01-01
The foreseeable character of future within-visual-range air combat entails a degree of agility which calls for the integration of high-alpha aerodynamics, thrust vectoring, intimate pilot/vehicle interfaces, and advanced weapons/avionics suites, in prospective configurations. The primary technology-development programs currently contributing to these goals are presently discussed; they encompass the F-15 Short Takeoff and Landing/Maneuver Technology Demonstrator Program, the Enhanced Fighter Maneuverability Program, the High Angle-of-Attack Technology Program, and the X-29 Technology Demonstrator Program.
Experimental Investigation of the Flow Structure over a Delta Wing Via Flow Visualization Methods.
Shen, Lu; Chen, Zong-Nan; Wen, Chihyung
2018-04-23
It is well known that the flow field over a delta wing is dominated by a pair of counter-rotating leading-edge vortices (LEVs). However, the underlying mechanism is not well understood. Flow visualization is a promising non-intrusive technique to illustrate the complex flow field spatially and temporally. A basic flow visualization setup consists of a high-powered laser and optic lenses to generate the laser sheet, a camera, a tracer particle generator, and a data processor. The wind tunnel setup, the specifications of the devices involved, and the corresponding parameter settings depend on the flow features to be obtained. Conventional smoke-wire flow visualization uses a smoke wire to demonstrate the flow streaklines. However, the performance of this method is limited by poor spatial resolution when it is conducted in a complex flow field. Therefore, an improved smoke flow visualization technique has been developed. This technique illustrates the large-scale global LEV flow field and the small-scale shear-layer flow structure at the same time, providing a valuable reference for later detailed particle image velocimetry (PIV) measurement. In this paper, the application of the improved smoke flow visualization and PIV measurement to study the unsteady flow phenomena over a delta wing is demonstrated. The procedure and cautions for conducting the experiment are listed, including wind tunnel setup, data acquisition, and data processing. The representative results show that these two flow visualization methods are effective techniques for investigating the three-dimensional flow field qualitatively and quantitatively.
Science & Art in Motion: Visualizing the X-ray Universe
NASA Astrophysics Data System (ADS)
Hobart, A. J.; Arcand, K. K.; Edmonds, P. D.; Tucker, W. H.
2005-12-01
Since its launch in 1999, the Chandra X-ray Observatory has probed regions around black holes, traced the debris of exploded stars, and helped to elucidate the formation of galaxy clusters, the largest bound structures in the Universe. Conveying Chandra's exciting, though often complicated, high-energy results to the public and the media poses certain visual challenges, such as photon-starved observational data, spectra, and esoteric concepts. This poster will demonstrate some of the techniques developed to present visuals by way of motion graphics in X-ray astronomy. Some tricks of the trade will be highlighted, including establishing texture libraries, using particle or paint effects, and modeling stock objects. Topics unique to animating scientific concepts for public consumption will also be discussed, such as addressing public perception versus scientists' findings, keeping a high standard of accuracy while leaving room for visual excitement, communicating with scientists for revisions, and creative ways to interact with and educate the public. Developed with funding from NASA under Contract NAS8-39073.
NASA Technical Reports Server (NTRS)
Chase, W. D.
1975-01-01
The calligraphic chromatic projector described was developed to improve the perceived realism of visual scene simulation ('out-the-window visuals'). The optical arrangement of the projector is illustrated and discussed. The device permits drawing 2000 vectors in as many as 500 colors, all above critical flicker frequencies, and use of high scene resolution and brightness at an acceptable level to the pilot, with the maximum system capabilities of 1000 lines and 1000 fL. The device for generating the colors is discussed, along with an experiment conducted to demonstrate potential improvements in performance and pilot opinion. Current research work and future research plans are noted.
Applications of Java and Vector Graphics to Astrophysical Visualization
NASA Astrophysics Data System (ADS)
Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.
2002-12-01
We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.
Pohlheim, Hartmut
2006-01-01
Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization methods. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path through the search space of the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs regarding the variables of individuals and multi-criteria objective values (path through the solution space).
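Sammon mapping itself is compact enough to sketch directly: minimize the Sammon stress between pairwise distances in the original space and in a low-dimensional embedding. The version below uses plain gradient descent for brevity; classical implementations use a Newton-style update, and the learning rate and iteration count here are illustrative.

```python
import numpy as np

def sammon(X, dim=2, n_iter=500, lr=0.3, eps=1e-9, seed=0):
    """Minimal Sammon mapping by gradient descent on the stress
        E = (1 / sum_{i<j} d_ij) * sum_{i<j} (d_ij - D_ij)^2 / d_ij,
    with d_ij the input-space and D_ij the embedding-space distances."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n = len(X)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # input-space distances
    np.fill_diagonal(d, 1.0)                               # placeholder; diagonal never used
    c = d[np.triu_indices(n, 1)].sum()
    Y = rng.normal(scale=1e-2, size=(n, dim))              # random low-dimensional start
    for _ in range(n_iter):
        diff = Y[:, None] - Y[None, :]
        D = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(D, 1.0)
        # gradient of E w.r.t. each embedded point (diagonal terms vanish since diff = 0)
        grad = (2.0 / c) * np.einsum("ij,ijk->ik", (D - d) / (d * D + eps), diff)
        Y -= lr * grad
    return Y
```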
Tillmann, Julian; Swettenham, John
2017-02-01
Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Learning and Recognition of Clothing Genres From Full-Body Images.
Hidayati, Shintami C; You, Chuang-Wen; Cheng, Wen-Huang; Hua, Kai-Lung
2018-05-01
According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we proposed a novel approach for automatically classifying clothing genres based on the visually differentiable style elements. A set of style elements that are crucial for recognizing specific visual styles of clothing genres was identified based on clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated with variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, a dataset containing 3250 full-body shots crawled from popular online stores was built. Recognition results show that our proposed algorithms achieved promising overall precision, recall, and F1-score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features on recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition.
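For reference, the third metric reported alongside precision and recall is read here as the F1-score, their harmonic mean; the reported values are consistent with that reading:

```latex
F_1 = \frac{2PR}{P+R}, \qquad
\frac{2 \times 0.8876 \times 0.8853}{0.8876 + 0.8853} \approx 0.8864, \qquad
\frac{2 \times 0.8821 \times 0.8817}{0.8821 + 0.8817} \approx 0.8819
```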
Attentional Blink in Young People with High-Functioning Autism and Asperger's Disorder
ERIC Educational Resources Information Center
Rinehart, Nicole; Tonge, Bruce; Brereton, Avril; Bradshaw, John
2010-01-01
The aim of the study was to examine the temporal characteristics of information processing in individuals with high-functioning autism and Asperger's disorder using a rapid serial visual presentation paradigm. The results clearly showed that such people demonstrate an attentional blink of similar magnitude to comparison groups. This supports the…
Faster than "g", Revisited with High-Speed Imaging
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The introduction of modern high-speed cameras in physics teaching provides a tool not only for easy visualization, but also for quantitative analysis of many simple though fast occurring phenomena. As an example, we present a very well-known demonstration experiment--sometimes also discussed in the context of falling chimneys--which is commonly…
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology
Offerdahl, Erika G.; Arneson, Jessie B.; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow’s scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations—the level of abstraction—as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. PMID:28130273
Optical Histology: High-Resolution Visualization of Tissue Microvasculature
NASA Astrophysics Data System (ADS)
Moy, Austin Jing-Ming
Mammalian tissue requires the delivery of nutrients, growth factors, and the exchange of oxygen and carbon dioxide gases to maintain normal function. These elements are delivered by the blood, which travels through the connected network of blood vessels, known as the vascular system. The vascular system consists of large feeder blood vessels (arteries and veins) that are connected to the small blood vessels (arterioles and venules), which in turn are connected to the capillaries that are directly connected to the tissue and facilitate gas exchange and nutrient delivery. These small blood vessels and capillaries make up an intricate but organized network of blood vessels that exist in all mammalian tissues known as the microvasculature and are very important in maintaining the health and proper function of mammalian tissue. Due to the importance of the microvasculature in tissue survival, disruption of the microvasculature typically leads to tissue dysfunction and tissue death. The most prevalent method to study the microvasculature is visualization. Immunohistochemistry (IHC) is the gold-standard method to visualize tissue microvasculature. IHC is very well-suited for highly detailed interrogation of the tissue microvasculature at the cellular level but is unwieldy and impractical for wide-field visualization of the tissue microvasculature. The objective of my dissertation research was to develop a method to enable wide-field visualization of the microvasculature, while still retaining the high resolution afforded by optical microscopy. My efforts led to the development of a technique dubbed "optical histology" that combines chemical and optical methods to enable high-resolution visualization of the microvasculature. The development of the technique first involved preliminary studies to quantify optical property changes in optically cleared tissues, followed by development and demonstration of the methodology. Using optical histology, I successfully obtained high-resolution, depth-sectioned images of the microvasculature in mouse brain and the coronary microvasculature in mouse heart. Future directions of optical histology include the potential to facilitate visualization of the entire microvascular structure of an organ as well as visualization of other tissue molecular markers of interest.
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Savant Genome Browser 2: visualization and analysis for population-scale genomics.
Fiume, Marc; Smith, Eric J M; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M; Robinson, Mark D; Wodak, Shoshana J; Brudno, Michael
2012-07-01
High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com.
Savant Genome Browser 2: visualization and analysis for population-scale genomics
Smith, Eric J. M.; Brook, Andrew; Strbenac, Dario; Turner, Brian; Mezlini, Aziz M.; Robinson, Mark D.; Wodak, Shoshana J.; Brudno, Michael
2012-01-01
High-throughput sequencing (HTS) technologies are providing an unprecedented capacity for data generation, and there is a corresponding need for efficient data exploration and analysis capabilities. Although most existing tools for HTS data analysis are developed for either automated (e.g. genotyping) or visualization (e.g. genome browsing) purposes, such tools are most powerful when combined. For example, integration of visualization and computation allows users to iteratively refine their analyses by updating computational parameters within the visual framework in real-time. Here we introduce the second version of the Savant Genome Browser, a standalone program for visual and computational analysis of HTS data. Savant substantially improves upon its predecessor and existing tools by introducing innovative visualization modes and navigation interfaces for several genomic datatypes, and synergizing visual and automated analyses in a way that is powerful yet easy even for non-expert users. We also present a number of plugins that were developed by the Savant Community, which demonstrate the power of integrating visual and automated analyses using Savant. The Savant Genome Browser is freely available (open source) at www.savantbrowser.com. PMID:22638571
Perceptual load influences selective attention across development.
Couperus, Jane W
2011-09-01
Research suggests that visual selective attention develops across childhood. However, there is relatively little understanding of the neurological changes that accompany this development, particularly in the context of adult theories of selective attention, such as N. Lavie's (1995) perceptual load theory of attention. This study examined visual selective attention across development from 7 years of age to adulthood. Specifically, the author examined whether changes in processing as a function of selective attention are similarly influenced by perceptual load across development. Participants were asked to complete a task at either low or high perceptual load while processing of an unattended probe stimulus was examined using event-related potentials. Similar to adults, children and teens showed reduced processing of the unattended stimulus as perceptual load increased at the P1 visual component. However, although there were no qualitative differences in changes in processing, there were quantitative differences, with shorter P1 latencies in teens and adults compared with children, suggesting increases in the speed of processing across development. In addition, younger children did not need as high a perceptual load as adults to achieve the same difference in performance between low and high perceptual load. Thus, this study demonstrates that although there are developmental changes in visual selective attention, the mechanisms by which visual selective attention is achieved in children may share similarities with adults.
Douglass, John K; Wehling, Martin F
2016-12-01
A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.
Yang, Xiaojie; Lorenser, Dirk; McLaughlin, Robert A.; Kirk, Rodney W.; Edmond, Matthew; Simpson, M. Cather; Grounds, Miranda D.; Sampson, David D.
2013-01-01
We have developed an extremely miniaturized optical coherence tomography (OCT) needle probe (outer diameter 310 µm) with high sensitivity (108 dB) to enable minimally invasive imaging of cellular structure deep within skeletal muscle. Three-dimensional volumetric images were acquired from ex vivo mouse tissue, examining both healthy and pathological dystrophic muscle. Individual myofibers were visualized as striations in the images. Degradation of cellular structure in necrotic regions was seen as a loss of these striations. Tendon and connective tissue were also visualized. The observed structures were validated against co-registered hematoxylin and eosin (H&E) histology sections. These images of internal cellular structure of skeletal muscle acquired with an OCT needle probe demonstrate the potential of this technique to visualize structure at the microscopic level deep in biological tissue in situ. PMID:24466482
Vabalas, Andrius; Freeth, Megan
2016-01-01
The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking to the social partner overall, nor with reduced looking to the face. However, individuals who were high in autistic traits exhibited reduced visual exploration during the face-to-face interaction overall, as demonstrated by shorter and less frequent saccades. Visual exploration was not related to social anxiety. This study suggests that there are systematic individual differences in visual exploration during social interactions and these are related to amount of autistic traits.
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems remain: 1) how to extract high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; and 3) the common problem of multiple resolutions of satellite images. In this paper, inspired by biological visual mechanisms, we propose a fusion detection framework that combines a top-down visual mechanism (a deep CNN model) with a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to address the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.
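To make the fusion idea concrete, the following is a minimal sketch of one way a top-down detection score map and a bottom-up saliency map could be combined, assuming both have already been computed on a common grid; the weighted geometric fusion, the `alpha` parameter, and the toy maps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_detection_maps(cnn_score_map, saliency_map, alpha=0.6):
    """Fuse a top-down CNN detection score map with a bottom-up (GBVS-style)
    saliency map via a weighted geometric combination.

    Both inputs are 2D arrays on the same grid with values in [0, 1];
    `alpha` weights the top-down evidence (illustrative value).
    """
    cnn = np.clip(cnn_score_map, 1e-6, 1.0)
    sal = np.clip(saliency_map, 1e-6, 1.0)
    fused = cnn ** alpha * sal ** (1.0 - alpha)  # geometric-mean-style fusion
    return fused / fused.max()                   # renormalize to [0, 1]

# Toy 4x4 example in which both cues agree on one candidate cell.
cnn = np.array([[0.1, 0.2, 0.1, 0.1],
                [0.1, 0.9, 0.2, 0.1],
                [0.1, 0.2, 0.1, 0.1],
                [0.1, 0.1, 0.1, 0.1]])
sal = np.array([[0.2, 0.2, 0.2, 0.2],
                [0.2, 0.8, 0.3, 0.2],
                [0.2, 0.3, 0.2, 0.2],
                [0.2, 0.2, 0.2, 0.2]])
fused = fuse_detection_maps(cnn, sal)
print(np.unravel_index(fused.argmax(), fused.shape))  # -> (1, 1)
```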
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-20
... specific projects must demonstrate high artistic ability, excellent interpersonal skills, and be conversant... progress towards outcomes or the results achieved. Examples of outputs include the number of people trained...
Visual information processing of faces in body dysmorphic disorder.
Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan
2007-12-01
Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms. To determine whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. Case-control study. University hospital. Twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement. Intervention: functional magnetic resonance imaging while performing matching tasks of face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. Blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD. The fact that these findings occurred while subjects viewed others' faces suggests differences in visual processing beyond distortions of their own appearance.
WarpIV: In situ visualization and analysis of ion accelerator simulations
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...
2016-05-09
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
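The in situ pattern described here (analysis running on live simulation data rather than on files written to disk) can be illustrated with a generic sketch; the `InSituHook` class and the toy particle state below are illustrative assumptions and do not reflect the WarpIV, Warp, or VisIt APIs.

```python
import numpy as np

class InSituHook:
    """Illustrative in situ analysis hook: the simulation calls `step` every
    iteration, and analysis runs on the live data, avoiding a dump/read-back cycle."""

    def __init__(self, every=10):
        self.every = every
        self.history = []  # (iteration, spectrum) pairs collected in situ

    def step(self, iteration, particles):
        if iteration % self.every:
            return
        # Example analysis: track the evolving kinetic-energy spectrum.
        energies = 0.5 * np.sum(particles["v"] ** 2, axis=1)
        spectrum, _ = np.histogram(energies, bins=32)
        self.history.append((iteration, spectrum))
        print(f"it={iteration:4d}  mean energy={energies.mean():.3f}")

# Toy "simulation" loop standing in for a particle accelerator run.
rng = np.random.default_rng(0)
state = {"v": rng.normal(size=(1000, 3))}
hook = InSituHook(every=10)
for it in range(50):
    state["v"] += 0.01 * rng.normal(size=state["v"].shape)  # fake acceleration
    hook.step(it, state)
```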
Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object.
Persuh, Marjan; Melara, Robert D
2016-01-01
In two experiments, we evaluated whether a perceiver's prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision.
Barack Obama Blindness (BOB): Absence of Visual Awareness to a Single Object
Persuh, Marjan; Melara, Robert D.
2016-01-01
In two experiments, we evaluated whether a perceiver’s prior expectations could alone obliterate his or her awareness of a salient visual stimulus. To establish expectancy, observers first made a demanding visual discrimination on each of three baseline trials. Then, on a fourth, critical trial, a single, salient and highly visible object appeared in full view at the center of the visual field and in the absence of any competing visual input. Surprisingly, fully half of the participants were unaware of the solitary object in front of their eyes. Dramatically, observers were blind even when the only stimulus on display was the face of U.S. President Barack Obama. We term this novel, counterintuitive phenomenon, Barack Obama Blindness (BOB). Employing a method that rules out putative memory effects by probing awareness immediately after presentation of the critical stimulus, we demonstrate that the BOB effect is a true failure of conscious vision. PMID:27047362
Ji, Xiaonan; Machiraju, Raghu; Ritter, Alan; Yen, Po-Yin
2017-01-01
Systematic Reviews (SRs) of biomedical literature summarize evidence from high-quality studies to inform clinical decisions, but are time and labor intensive due to the large number of articles that must be screened. Article similarities established from textual features have been shown to assist in the identification of relevant articles, thus making the article screening process more efficient. In this study, we visualized article similarities to extend their utilization in practical settings for SR researchers, aiming to promote human comprehension of article distributions and hidden patterns. To provide an effective visualization in an interpretable, intuitive, and scalable way, we implemented a graph-based network visualization with three network sparsification approaches and a distance-based map projection via dimensionality reduction. We evaluated and compared the three network sparsification approaches and the visualization types (article network vs. article map). We demonstrated the effectiveness in revealing article distributions and exhibiting clustering patterns of relevant articles with practical meaning for SRs.
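One of the ingredients named above, network sparsification, is straightforward to sketch: keep only each article's strongest similarity edges so the network stays readable at scale. The k-nearest-neighbor rule and the toy similarity matrix below are assumptions for illustration, not necessarily one of the three approaches the study evaluated.

```python
import numpy as np

def knn_sparsify(similarity, k=2):
    """Keep, for each article, only its k highest-similarity edges.

    `similarity` is a symmetric (n, n) matrix with a zero diagonal.
    Returns (i, j, weight) edges of the sparsified article network.
    """
    n = similarity.shape[0]
    kept = set()
    for i in range(n):
        top = np.argsort(similarity[i])[::-1][:k]   # top-k neighbors of article i
        kept.update((min(i, j), max(i, j)) for j in top if similarity[i, j] > 0)
    return [(i, j, float(similarity[i, j])) for i, j in sorted(kept)]

# Toy pairwise similarities for four articles (e.g., from TF-IDF cosine similarity).
S = np.array([[0.0, 0.9, 0.1, 0.2],
              [0.9, 0.0, 0.3, 0.1],
              [0.1, 0.3, 0.0, 0.8],
              [0.2, 0.1, 0.8, 0.0]])
print(knn_sparsify(S, k=1))   # -> [(0, 1, 0.9), (2, 3, 0.8)]
```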
Heyers, Dominik; Manns, Martina; Luksch, Harald; Güntürkün, Onur; Mouritsen, Henrik
2007-09-26
The magnetic compass of migratory birds has been suggested to be light-dependent. Retinal cryptochrome-expressing neurons and a forebrain region, "Cluster N", show high neuronal activity when night-migratory songbirds perform magnetic compass orientation. By combining neuronal tracing with behavioral experiments leading to sensory-driven gene expression of the neuronal activity marker ZENK during magnetic compass orientation, we demonstrate a functional neuronal connection between the retinal neurons and Cluster N via the visual thalamus. Thus, the two areas of the central nervous system being most active during magnetic compass orientation are part of an ascending visual processing stream, the thalamofugal pathway. Furthermore, Cluster N seems to be a specialized part of the visual wulst. These findings strongly support the hypothesis that migratory birds use their visual system to perceive the reference compass direction of the geomagnetic field and that migratory birds "see" the reference compass direction provided by the geomagnetic field.
The effect of spatial attention on invisible stimuli.
Shin, Kilho; Stolte, Moritz; Chong, Sang Chul
2009-10-01
The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.
Beyond simple charts: Design of visualizations for big health data
Ola, Oluwakemi; Sedig, Kamran
2016-01-01
Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of the data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416
Beyond simple charts: Design of visualizations for big health data.
Ola, Oluwakemi; Sedig, Kamran
2016-01-01
Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data's utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that represent only a few facets of the data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is a need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data.
The rainfall plot: its motivation, characteristics and pitfalls.
Domanska, Diana; Vodák, Daniel; Lund-Andersen, Christin; Salvatore, Stefania; Hovig, Eivind; Sandve, Geir Kjetil
2017-05-18
A visualization referred to as rainfall plot has recently gained popularity in genome data analysis. The plot is mostly used for illustrating the distribution of somatic cancer mutations along a reference genome, typically aiming to identify mutation hotspots. In general terms, the rainfall plot can be seen as a scatter plot showing the location of events on the x-axis versus the distance between consecutive events on the y-axis. Despite its frequent use, the motivation for applying this particular visualization and the appropriateness of its usage have never been critically addressed in detail. We show that the rainfall plot allows visual detection even for events occurring at high frequency over very short distances. In addition, event clustering at multiple scales may be detected as distinct horizontal bands in rainfall plots. At the same time, due to the limited size of standard figures, rainfall plots might suffer from inability to distinguish overlapping events, especially when multiple datasets are plotted in the same figure. We demonstrate the consequences of plot congestion, which results in obscured visual data interpretations. This work provides the first comprehensive survey of the characteristics and proper usage of rainfall plots. We find that the rainfall plot is able to convey a large amount of information without any need for parameterization or tuning. However, we also demonstrate how plot congestion and the use of a logarithmic y-axis may result in obscured visual data interpretations. To aid the productive utilization of rainfall plots, we demonstrate their characteristics and potential pitfalls using both simulated and real data, and provide a set of practical guidelines for their proper interpretation and usage.
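The construction itself (event position on the x-axis against distance to the previous event on the y-axis, usually with a logarithmic scale) takes only a few lines; the sketch below uses simulated mutation positions to show how a hotspot appears as a low horizontal band.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Simulated somatic mutation positions: uniform background plus one hotspot.
background = rng.integers(0, 50_000_000, size=400)
hotspot = 30_000_000 + rng.integers(0, 20_000, size=100)
pos = np.sort(np.concatenate([background, hotspot]))

inter_event = np.diff(pos)               # distance to the previous mutation
plt.scatter(pos[1:], inter_event, s=5)
plt.yscale("log")                        # the log axis the authors caution about
plt.xlabel("genomic position")
plt.ylabel("distance to previous mutation (bp)")
plt.title("Rainfall plot (simulated data): hotspot shows up as a low band")
plt.show()
```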
Communication scheme based on evolutionary spatial 2×2 games
NASA Astrophysics Data System (ADS)
Ziaukas, Pranas; Ragulskis, Tautvydas; Ragulskis, Minvydas
2014-06-01
A visual communication scheme based on evolutionary spatial 2×2 games is proposed in this paper. Self-organizing patterns induced by complex interactions between competing individuals are exploited for hiding and transmitting secret visual information. Properties of the proposed communication scheme are discussed in detail. It is shown that the hiding capacity of the system (the minimum size of the detectable primitives and the minimum distance between two primitives) is sufficient for the effective transmission of digital dichotomous images. Also, it is demonstrated that the proposed communication scheme is resilient to time-backwards and plain-image attacks and is highly sensitive to perturbations of private and public keys. Several computational experiments are used to demonstrate the effectiveness of the proposed communication scheme.
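The self-organizing patterns that such a scheme relies on arise from lattice games in which every cell plays a 2×2 game with its neighbors and then imitates the best-scoring neighbor. A minimal spatial prisoner's dilemma of that kind (Nowak-May style payoffs) is sketched below; the parameter values are illustrative, and the steganographic embedding step itself is not shown.

```python
import numpy as np

def step(strategies, b=1.85):
    """One synchronous update of a spatial prisoner's dilemma on a torus.

    strategies: 2D array of 0 (defect) / 1 (cooperate).
    Payoffs: C-C -> 1, D-C -> b, otherwise 0 (a common 2x2 parameterization).
    Each cell then imitates the highest-scoring cell in its neighborhood.
    """
    shifts = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    payoff = np.zeros_like(strategies, dtype=float)
    for dx, dy in shifts:
        neigh = np.roll(np.roll(strategies, dx, axis=0), dy, axis=1)
        payoff += np.where(strategies == 1, neigh, b * neigh)
    new, best = strategies.copy(), payoff.copy()
    for dx, dy in shifts:
        neigh_pay = np.roll(np.roll(payoff, dx, axis=0), dy, axis=1)
        neigh_str = np.roll(np.roll(strategies, dx, axis=0), dy, axis=1)
        better = neigh_pay > best
        new[better] = neigh_str[better]
        best = np.maximum(best, neigh_pay)
    return new

rng = np.random.default_rng(7)
grid = (rng.random((64, 64)) < 0.9).astype(int)   # mostly cooperators at start
for _ in range(20):
    grid = step(grid)                              # clusters of C/D patterns emerge
print("cooperator fraction:", grid.mean())
```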
Takahata, Keisuke; Saito, Fumie; Muramatsu, Taro; Yamada, Makiko; Shirahase, Joichiro; Tabuchi, Hajime; Suhara, Tetsuya; Mimura, Masaru; Kato, Motoichiro
2014-05-01
Over the last two decades, evidence of enhancement of drawing and painting skills due to focal prefrontal damage has accumulated. It is of special interest that most artworks created by such patients were highly realistic, but the mechanism underlying this phenomenon remains to be understood. Our hypothesis is that this enhanced tendency toward realism is associated with the accuracy of visual numerosity representation, which has been shown to be mediated predominantly by right parietal functions. Here, we report a case of left prefrontal stroke in which the patient showed enhancement of artistic skills of realistic painting after the onset of brain damage. We investigated cognitive, functional and esthetic characteristics of the patient's visual artistry and visual numerosity representation. Neuropsychological tests revealed impaired executive function after the stroke. Despite that, the patient's visual artistry related to realism was enhanced across the onset of brain damage, as demonstrated by blind evaluation of the paintings by professional art reviewers. On visual numerical cognition tasks, the patient showed higher performance in comparison with age-matched healthy controls. These results paralleled increased perfusion in the right parietal cortex, including the precuneus and intraparietal sulcus. Our data provide new insight into the mechanisms underlying change in artistic style due to focal prefrontal lesions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Corridor One: An Integrated Distance Visualization Environment for SSI+ASCI Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christopher R. Johnson, Charles D. Hansen
2001-10-29
The goal of Corridor One: An Integrated Distance Visualization Environment for ASCI and SSI Applications was to combine the forces of six leading edge laboratories working in the areas of visualization, distributed computing, and high-performance networking (Argonne National Laboratory, Lawrence Berkeley National Laboratory, Los Alamos National Laboratory, University of Illinois, University of Utah and Princeton University) to develop and deploy the most advanced integrated distance visualization environment for large-scale scientific visualization and demonstrate it on applications relevant to the DOE SSI and ASCI programs. The Corridor One team brought world class expertise in parallel rendering, deep image-based rendering, immersive environment technology, large-format multi-projector wall-based displays, volume and surface visualization algorithms, collaboration tools and streaming media technology, network protocols for image transmission, high-performance networking, quality of service technology and distributed computing middleware. Our strategy was to build on the very successful teams that produced the I-WAY, ''Computational Grids'' and CAVE technology and to add these to the teams that have developed the fastest parallel visualization systems and the most widely used networking infrastructure for multicast and distributed media. Unfortunately, just as we were getting going on the Corridor One project, DOE cut the program after the first year. As such, our final report consists of our progress during year one of the grant.
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
The relationship between level of autistic traits and local bias in the context of the McGurk effect
Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio
2015-01-01
The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705
Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties
Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.
2011-01-01
A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
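A hedged sketch of two of the ingredients named above, per-cluster linear bases estimated with SVD and smooth transitions between the resulting 2D projections, is given below. It assumes cluster membership is already known and interpolates between projection bases by re-orthonormalizing a linear blend; this is an illustration of the idea, not the authors' exact algorithm.

```python
import numpy as np

def cluster_basis(X, dim=2):
    """Top-`dim` principal directions of one subspace cluster (rows are points)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:dim].T                        # (n_features, dim) orthonormal basis

def interpolate_projection(B0, B1, t):
    """Blend two projection bases and re-orthonormalize: one frame of a
    smooth animated transition between two 2D viewpoints."""
    q, _ = np.linalg.qr((1 - t) * B0 + t * B1)
    return q[:, :B0.shape[1]]

rng = np.random.default_rng(3)
# Two toy low-dimensional subspace clusters embedded in a 10-D space.
A = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
B = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10))
BA, BB = cluster_basis(A), cluster_basis(B)

X = np.vstack([A, B])
for t in np.linspace(0, 1, 5):               # five frames of the dynamic projection
    P = interpolate_projection(BA, BB, t)
    Y = X @ P                                 # 2D view of all points at "time" t
    print(f"t={t:.2f}  view extent: {Y.min(axis=0).round(2)} .. {Y.max(axis=0).round(2)}")
```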
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Visual short-term memory load reduces retinotopic cortex response to contrast.
Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli
2012-11-01
Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
The recent introduction of inexpensive high-speed cameras offers a new experimental approach to many simple but fast-occurring events in physics. In this paper, the authors present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects…
Chang, Gregory; Deniz, Cem M; Honig, Stephen; Egol, Kenneth; Regatte, Ravinder R; Zhu, Yudong; Sodickson, Daniel K; Brown, Ryan
2014-06-01
To demonstrate the feasibility of performing bone microarchitecture, high-resolution cartilage, and clinical imaging of the hip at 7T. This study had Institutional Review Board approval. Using an 8-channel coil constructed in-house, we imaged the hips of 15 subjects on a 7T magnetic resonance imaging (MRI) scanner. We applied: 1) a T1-weighted 3D fast low angle shot (3D FLASH) sequence (0.23 × 0.23 × 1-1.5 mm³) for bone microarchitecture imaging; 2) T1-weighted 3D FLASH (water excitation) and volumetric interpolated breath-hold examination (VIBE) sequences (0.23 × 0.23 × 1.5 mm³) with saturation or inversion recovery-based fat suppression for cartilage imaging; 3) 2D intermediate-weighted fast spin-echo (FSE) sequences without and with fat saturation (0.27 × 0.27 × 2 mm) for clinical imaging. Bone microarchitecture images allowed visualization of individual trabeculae within the proximal femur. Cartilage was well visualized and fat was well suppressed on FLASH and VIBE sequences. FSE sequences allowed visualization of cartilage, the labrum (including cartilage and labral pathology), joint capsule, and tendons. This is the first study to demonstrate the feasibility of performing a clinically comprehensive hip MRI protocol at 7T, including high-resolution imaging of bone microarchitecture and cartilage, as well as clinical imaging. Copyright © 2013 Wiley Periodicals, Inc.
Coherence across consciousness levels: Symmetric visual displays spare working memory resources.
Dumitru, Magda L
2015-12-15
Two studies demonstrate that the need for coherence could nudge individuals to use structural similarities between binary visual displays and two concurrent cognitive tasks to unduly solve the latter in similar fashion. In an overt truth-judgement task, participants decided whether symmetric colourful displays matched conjunction or disjunction descriptions (e.g., "the black and/or the orange"). In the simultaneous covert categorisation task, they decided whether a colour name (e.g., "black") described a two-colour object or half of a single-colour object. Two response patterns emerged as follows. Participants either acknowledged or rejected matches between disjunction descriptions and two visual stimuli and, similarly, either acknowledged or rejected matches between single colour names and two-colour objects or between single colour names and half of single-colour objects. These findings confirm the coherence hypothesis, highlight the role of coherence in preserving working-memory resources, and demonstrate an interaction between high-level and low-level consciousness. Copyright © 2015 Elsevier Inc. All rights reserved.
[Working memory and work with memory: visual-spatial and further components of processing].
Velichkovsky, B M; Challis, B H; Pomplun, M
1995-01-01
Empirical and theoretical evidence for the concept of working memory is considered. We argue that the major weakness of this concept is its loose connection with the knowledge about background perceptive and cognitive processes. Results of two relevant experiments are provided. The first study demonstrated the classical chunking effect in a speeded visual search and comparison task, the proper domain of a large-capacity very short term sensory store. Our second study was a kind of extended levels-of-processing experiment. We attempted to manipulate visual, phonological, and (different) executive components of long-term memory in the hope of finding some systematic relationships between these forms of processing. Indeed, the results demonstrated a high degree of systematicity without any apparent need for a concept such as working memory for the explanation. Accordingly, the place for working memory is at all the interfaces where our metacognitive strategies interfere with mostly domain-specific cognitive mechanisms. Working memory is simply our work with memory.
Early visual processing is enhanced in the midluteal phase of the menstrual cycle.
Lusk, Bethany R; Carr, Andrea R; Ranson, Valerie A; Bryant, Richard A; Felmingham, Kim L
2015-12-01
Event-related potential (ERP) studies have revealed an early attentional bias in processing unpleasant emotional images in women. Recent neuroimaging data suggests there are significant differences in cortical emotional processing according to menstrual phase. This study examined the impact of menstrual phase on visual emotional processing in women compared to men. ERPs were recorded from 28 early follicular women, 29 midluteal women, and 27 men while they completed a passive viewing task of neutral and low- and high-arousing pleasant and unpleasant images. There was a significant effect of menstrual phase in early visual processing, as midluteal women displayed significantly greater P1 amplitude at occipital regions to all visual images compared to men. Both midluteal and early follicular women displayed larger N1 amplitudes than men (although this only reached significance for the midluteal group) to the visual images. No sex or menstrual phase differences were apparent in later N2, P3, or LPP. A condition effect demonstrated greater P3 and LPP amplitude to highly arousing unpleasant images relative to all other stimuli conditions. These results indicate that women have greater early automatic visual processing compared to men, and suggest that this effect is particularly strong in women in the midluteal phase at the earliest stage of visual attention processing. Our findings highlight the importance of considering menstrual phase when examining sex differences in the cortical processing of visual stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.
Value-cell bar charts for visualizing large transaction data sets.
Keim, Daniel A; Hao, Ming C; Dayal, Umeshwar; Lyons, Martha
2007-01-01
One of the common problems businesses need to solve is how to use large volumes of sales histories, Web transactions, and other data to understand the behavior of their customers and increase their revenues. Bar charts are widely used for daily analysis, but only show highly aggregated data. Users often need to visualize detailed multidimensional information reflecting the health of their businesses. In this paper, we propose an innovative visualization solution based on the use of value cells within bar charts to represent business metrics. The value of a transaction can be discretized into one or multiple cells: high-value transactions are mapped to multiple value cells, whereas many small-value transactions are combined into one cell. With value-cell bar charts, users can 1) visualize transaction value distributions and correlations, 2) identify high-value transactions and outliers at a glance, and 3) instantly display values at the transaction record level. Value-Cell Bar Charts have been applied with success to different sales and IT service usage applications, demonstrating the benefits of the technique over traditional charting techniques. A comparison with two variants of the well-known Treemap technique and our earlier work on Pixel Bar Charts is also included.
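The cell discretization described above is easy to make concrete: a transaction contributes roughly value / cell_value cells, so one high-value transaction spans several cells while many small transactions pool into shared cells. The cell size and toy sales records below are illustrative assumptions.

```python
import math
from collections import defaultdict

def value_cells(transactions, cell_value=100.0):
    """Map (category, value) transactions to value cells for a value-cell bar chart.

    A transaction worth several multiples of `cell_value` occupies several cells;
    the sub-cell remainders of small transactions accumulate within a category
    and are emitted together as shared cells.
    """
    cells = defaultdict(int)        # category -> number of cells in its bar
    remainder = defaultdict(float)  # category -> pooled value of small remainders
    for category, value in transactions:
        full, rest = divmod(value, cell_value)
        cells[category] += int(full)
        remainder[category] += rest
    for category, rest in remainder.items():
        cells[category] += math.ceil(rest / cell_value)
    return dict(cells)

sales = [("laptops", 950.0), ("laptops", 40.0), ("cables", 15.0),
         ("cables", 22.0), ("cables", 9.0), ("laptops", 310.0)]
print(value_cells(sales, cell_value=100.0))
# -> {'laptops': 13, 'cables': 1}: the high-value sales span many cells,
#    while the small cable sales collapse into a single shared cell.
```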
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
Vertical visual features have a strong influence on cuttlefish camouflage.
Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T
2013-04-01
Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.
Kling-Petersen, T; Pascher, R; Rydmark, M
1999-01-01
Academic and medical imaging are increasingly using computer-based 3D reconstruction and/or visualization. Three-dimensional interactive models play a major role in areas such as preclinical medical education, clinical visualization and medical research. While 3D is comparatively easy to do on high-end workstations, distribution and use of interactive 3D graphics necessitate the use of personal computers and the web. Several new techniques providing interactive 3D via a web browser have been demonstrated, allowing a limited version of VR to be experienced by a much larger group of students, medical practitioners and researchers. These techniques include QuickTimeVR2 (QTVR), VRML2, QuickDraw3D, OpenGL and Java3D. In order to test the usability of the different techniques, Mednet has initiated a number of projects designed to evaluate the potential of 3D techniques for scientific reporting, clinical visualization and medical education. These include datasets created by manual tracing followed by triangulation, smoothing and 3D visualization, MRI, or high-resolution laser scanning. Preliminary results indicate that both VRML and QTVR fulfill most of the requirements of web-based, interactive 3D visualization, whereas QuickDraw3D is too limited. Presently, Java 3D has not yet reached a level where in-depth testing is possible. The use of high-resolution laser scanning is an important addition to 3D digitization.
Visual activity predicts auditory recovery from deafness after adult cochlear implantation.
Strelnikov, Kuzma; Rouger, Julien; Demonet, Jean-François; Lagleyre, Sebastien; Fraysse, Bernard; Deguine, Olivier; Barone, Pascal
2013-12-01
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input and patients must use long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, the visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of the word's auditory counterpart in communicative situations. The link demonstrated between visual activity and auditory speech perception indicates that visuoauditory synergy is crucial for cross-modal plasticity and fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.
Experimenter's laboratory for visualized interactive science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.
1992-01-01
The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.
Experimenter's laboratory for visualized interactive science
NASA Technical Reports Server (NTRS)
Hansen, Elaine R.; Klemp, Marjorie K.; Lasater, Sally W.; Szczur, Marti R.; Klemp, Joseph B.
1993-01-01
The science activities of the 1990's will require the analysis of complex phenomena and large diverse sets of data. In order to meet these needs, we must take advantage of advanced user interaction techniques: modern user interface tools; visualization capabilities; affordable, high performance graphics workstations; and interoperable data standards and translators. To meet these needs, we propose to adopt and upgrade several existing tools and systems to create an experimenter's laboratory for visualized interactive science. Intuitive human-computer interaction techniques have already been developed and demonstrated at the University of Colorado. A Transportable Applications Executive (TAE+), developed at GSFC, is a powerful user interface tool for general purpose applications. A 3D visualization package developed by NCAR provides both color-shaded surface displays and volumetric rendering in either index or true color. The Network Common Data Form (NetCDF) data access library developed by Unidata supports creation, access and sharing of scientific data in a form that is self-describing and network transparent. The combination and enhancement of these packages constitutes a powerful experimenter's laboratory capable of meeting key science needs of the 1990's. This proposal encompasses the work required to build and demonstrate this capability.
Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain
NASA Technical Reports Server (NTRS)
Kao, David; Kramer, Marc; Chaderjian, Neal
2005-01-01
Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as point-dispersal downstream flow paths. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities to allow for rapid emergency response as well as developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massive parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-03-01
Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI.
Using high-resolution displays for high-resolution cardiac data.
Goodyer, Christopher; Hodrien, John; Wood, Jason; Kohl, Peter; Brodlie, Ken
2009-07-13
The ability to perform fast, accurate, high-resolution visualization is fundamental to improving our understanding of anatomical data. As the volumes of data increase with improvements in scanning technology, the methods applied to visualization must evolve. In this paper, we address the interactive display of data from high-resolution magnetic resonance imaging of a rabbit heart and subsequent histological imaging. We describe a visualization environment involving a display wall built from tiled liquid crystal display panels and associated software, which provides an interactive and intuitive user interface. The oView software is an OpenGL application written for the VR Juggler environment. This environment abstracts displays and devices away from the application itself, aiding portability between different systems, from desktop PCs to multi-tiled display walls. Portability between display walls has been demonstrated through its use on walls at the universities of both Leeds and Oxford. We discuss important factors to be considered for interactive two-dimensional display of large three-dimensional datasets, including the use of intuitive input devices and level-of-detail techniques.
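The abstract mentions level-of-detail considerations for two-dimensional browsing of large three-dimensional datasets. The sketch below shows one common level-of-detail rule (pick the coarsest pyramid level whose samples still match the screen pixels); it is a generic illustration under assumed parameters, not the oView implementation.

```python
# Hedged sketch of one common level-of-detail rule for 2-d browsing of a large
# image pyramid: pick the coarsest level whose samples are still at least as
# fine as the screen pixels. Not the oView implementation, just an illustration.
import math

def choose_lod(viewport_px: int, visible_data_samples: int, num_levels: int) -> int:
    """Return a pyramid level (0 = full resolution) for the current zoom."""
    if visible_data_samples <= viewport_px:
        return 0
    level = int(math.floor(math.log2(visible_data_samples / viewport_px)))
    return min(level, num_levels - 1)

# Example: a 2048-pixel-wide wall tile showing a 1,000,000-sample-wide slice
print(choose_lod(2048, 1_000_000, num_levels=10))  # -> 8
```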
NASA Astrophysics Data System (ADS)
Vollmer, Michael; Möllmann, Klaus-Peter
2012-09-01
We present two simple demonstration experiments recorded with high-speed cameras in the fields of gas dynamics and thermal physics. The experiments feature vapour pressure effects as well as adiabatic cooling observed upon opening a bottle of champagne.
Learning semantic and visual similarity for endomicroscopy video retrieval.
Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2012-06-01
Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which are not directly interpretable by physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method yields a statistically significant improvement in the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of the visual signatures and significantly better than that of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much more compact. In our resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
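To make the idea of a semantic signature concrete, the following sketch converts a visual-word histogram into per-concept scores using per-word concept frequencies assumed to be estimated from a labeled training set. It is a simplified stand-in for the Fisher-based method described above, with toy data throughout.

```python
# Hedged sketch: turning a visual-word histogram into a semantic signature by
# weighting each word with the frequency of each binary concept among training
# videos containing that word. Simplified stand-in for the Fisher-based method
# described in the abstract; all arrays here are toy data.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_concepts = 100, 8

# P(concept present | visual word), estimated offline from a labeled ground truth
word_concept_freq = rng.random((n_words, n_concepts))

def semantic_signature(word_histogram: np.ndarray) -> np.ndarray:
    """Average per-concept evidence carried by the visual words of one video."""
    weights = word_histogram / max(word_histogram.sum(), 1)
    return weights @ word_concept_freq        # shape: (n_concepts,)

video_hist = rng.integers(0, 5, size=n_words).astype(float)
print(semantic_signature(video_hist).round(3))
```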
Audio-Visual Temporal Recalibration Can be Constrained by Content Cues Regardless of Spatial Overlap
Roseboom, Warrick; Kawabe, Takahiro; Nishida, Shin’Ya
2013-01-01
It has now been well established that the point of subjective synchrony for audio and visual events can be shifted following exposure to asynchronous audio-visual presentations, an effect often referred to as temporal recalibration. Recently it was further demonstrated that it is possible to concurrently maintain two such recalibrated estimates of audio-visual temporal synchrony. However, it remains unclear precisely what defines a given audio-visual pair such that it is possible to maintain a temporal relationship distinct from other pairs. It has been suggested that spatial separation of the different audio-visual pairs is necessary to achieve multiple distinct audio-visual synchrony estimates. Here we investigated if this is necessarily true. Specifically, we examined whether it is possible to obtain two distinct temporal recalibrations for stimuli that differed only in featural content. Using both complex (audio visual speech; see Experiment 1) and simple stimuli (high and low pitch audio matched with either vertically or horizontally oriented Gabors; see Experiment 2) we found concurrent, and opposite, recalibrations despite there being no spatial difference in presentation location at any point throughout the experiment. This result supports the notion that the content of an audio-visual pair alone can be used to constrain distinct audio-visual synchrony estimates regardless of spatial overlap. PMID:23658549
High-density diffuse optical tomography of term infant visual cortex in the nursery
NASA Astrophysics Data System (ADS)
Liao, Steve M.; Ferradal, Silvina L.; White, Brian R.; Gregg, Nicholas; Inder, Terrie E.; Culver, Joseph P.
2012-08-01
Advancements in antenatal and neonatal medicine over the last few decades have led to significant improvement in the survival rates of sick newborn infants. However, this improvement in survival has not been matched by a reduction in neurodevelopmental morbidities with increasing recognition of the diverse cognitive and behavioral challenges that preterm infants face in childhood. Conventional neuroimaging modalities, such as cranial ultrasound and magnetic resonance imaging, provide an important definition of neuroanatomy with recognition of brain injury. However, they fail to define the functional integrity of the immature brain, particularly during this critical developmental period. Diffuse optical tomography methods have established success in imaging adult brain function; however, few studies exist to demonstrate their feasibility in the neonatal population. We demonstrate the feasibility of using recently developed high-density diffuse optical tomography (HD-DOT) to map functional activation of the visual cortex in healthy term-born infants. The functional images show high contrast-to-noise ratio obtained in seven neonates. These results illustrate the potential for HD-DOT and provide a foundation for investigations of brain function in more vulnerable newborns, such as preterm infants.
An explainable deep machine vision framework for plant stress phenotyping.
Ghosal, Sambuddha; Blystone, David; Singh, Asheesh K; Ganapathysubramanian, Baskar; Singh, Arti; Sarkar, Soumik
2018-05-01
Current approaches for accurate identification, classification, and quantification of biotic and abiotic stresses in crop research and production are predominantly visual and require specialized training. However, such techniques are hindered by subjectivity resulting from inter- and intrarater cognitive variability. This translates to erroneous decisions and a significant waste of resources. Here, we demonstrate a machine learning framework's ability to identify and classify a diverse set of foliar stresses in soybean [Glycine max (L.) Merr.] with remarkable accuracy. We also present an explanation mechanism, using the top-K high-resolution feature maps that isolate the visual symptoms used to make predictions. This unsupervised identification of visual symptoms provides a quantitative measure of stress severity, allowing for identification (type of foliar stress), classification (low, medium, or high stress), and quantification (stress severity) in a single framework without detailed symptom annotation by experts. We reliably identified and classified several biotic (bacterial and fungal diseases) and abiotic (chemical injury and nutrient deficiency) stresses by learning from over 25,000 images. The learned model is robust to input image perturbations, demonstrating viability for high-throughput deployment. We also noticed that the learned model appears to be agnostic to species, suggesting a capacity for transfer learning. The availability of an explainable model that can consistently, rapidly, and accurately identify and quantify foliar stresses would have significant implications in scientific research, plant breeding, and crop production. The trained model could be deployed in mobile platforms (e.g., unmanned air vehicles and automated ground scouts) for rapid, large-scale scouting or as a mobile application for real-time detection of stress by farmers and researchers. Copyright © 2018 the Author(s). Published by PNAS.
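As a rough illustration of selecting top-K feature maps, the sketch below ranks convolutional activation maps by mean activation and keeps the K strongest; the ranking criterion and the toy data are assumptions, not the authors' trained model or exact explanation mechanism.

```python
# Hedged sketch: selecting the top-K most active feature maps from a stack of
# convolutional activations, one simple way to surface candidate visual
# symptoms. Toy data only; the paper's actual ranking may differ.
import numpy as np

def top_k_feature_maps(activations: np.ndarray, k: int = 3):
    """activations: (channels, H, W). Returns indices and maps of the K
    channels with the largest mean activation."""
    scores = activations.mean(axis=(1, 2))
    idx = np.argsort(scores)[::-1][:k]
    return idx, activations[idx]

acts = np.random.default_rng(1).random((64, 32, 32))
idx, maps = top_k_feature_maps(acts, k=3)
print(idx, maps.shape)   # e.g. [..] (3, 32, 32)
```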
Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2014-01-01
Objective: Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than three, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach: To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results: To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance: DataHigh was developed to fulfill a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity. PMID:24216250
NASA Astrophysics Data System (ADS)
Cowley, Benjamin R.; Kaufman, Matthew T.; Butler, Zachary S.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.
2013-12-01
Objective. Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. Approach. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. Main results. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. Significance. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
Cowley, Benjamin R; Kaufman, Matthew T; Butler, Zachary S; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M
2013-12-01
Analyzing and interpreting the activity of a heterogeneous population of neurons can be challenging, especially as the number of neurons, experimental trials, and experimental conditions increases. One approach is to extract a set of latent variables that succinctly captures the prominent co-fluctuation patterns across the neural population. A key problem is that the number of latent variables needed to adequately describe the population activity is often greater than 3, thereby preventing direct visualization of the latent space. By visualizing a small number of 2-d projections of the latent space or each latent variable individually, it is easy to miss salient features of the population activity. To address this limitation, we developed a Matlab graphical user interface (called DataHigh) that allows the user to quickly and smoothly navigate through a continuum of different 2-d projections of the latent space. We also implemented a suite of additional visualization tools (including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses) and an optional tool for performing dimensionality reduction. To demonstrate the utility and versatility of DataHigh, we used it to analyze single-trial spike count and single-trial timecourse population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded using single electrodes. DataHigh was developed to fulfil a need for visualization in exploratory neural data analysis, which can provide intuition that is critical for building scientific hypotheses and models of population activity.
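DataHigh itself is a MATLAB GUI; the sketch below is only a numpy analogue of its core operation, projecting high-dimensional latent points onto an orthonormal 2-d plane, with toy latent data.

```python
# Hedged sketch of the core operation behind browsing 2-d views of a latent
# space: project D-dimensional latent points onto a random orthonormal 2-d
# plane. DataHigh itself is a MATLAB GUI; this is only a numpy analogue.
import numpy as np

def random_2d_projection(latents: np.ndarray, seed: int = 0) -> np.ndarray:
    """latents: (n_points, D). Returns (n_points, 2) coordinates."""
    rng = np.random.default_rng(seed)
    basis, _ = np.linalg.qr(rng.standard_normal((latents.shape[1], 2)))
    return latents @ basis                   # orthonormal columns -> one 2-d view

latents = np.random.default_rng(2).standard_normal((500, 8))  # e.g. 8 latent dims
coords = random_2d_projection(latents)
print(coords.shape)      # (500, 2)
```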
Groen, Iris I A; Silson, Edward H; Baker, Chris I
2017-02-19
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Schiltz, Holly Kristine
Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry and working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect that instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments about the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students while also supporting them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation.
Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen. This work shows how the use of deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems and give the instructor unique insight into students' mentally constructed strategies.
A system verification platform for high-density epiretinal prostheses.
Chen, Kuanfu; Lo, Yi-Kai; Yang, Zhi; Weiland, James D; Humayun, Mark S; Liu, Wentai
2013-06-01
Retinal prostheses have restored light perception to people worldwide who have poor or no vision as a consequence of retinal degeneration. To advance the quality of visual stimulation for retinal implant recipients, a higher number of stimulation channels is expected in next-generation retinal prostheses, which poses a great challenge to system design and verification. This paper presents a system verification platform dedicated to the development of retinal prostheses. The system includes primary processing, dual-band power and data telemetry, a high-density stimulator array, and two methods for output verification. End-to-end system validation and individual functional block characterization can be achieved with this platform through visual inspection and software analysis. Custom-built software running on the computers also provides a convenient way to test new features before they are realized in the ICs. Real-time visual feedback through the video displays makes it easy to monitor and debug the system. The characterization of the wireless telemetry and the demonstration of the visual display are reported in this paper using a 256-channel retinal prosthetic IC as an example.
NASA Astrophysics Data System (ADS)
Sztrókay, A.; Diemoz, P. C.; Schlossbauer, T.; Brun, E.; Bamberg, F.; Mayr, D.; Reiser, M. F.; Bravin, A.; Coan, P.
2012-05-01
Previous studies of phase contrast imaging (PCI) mammography have demonstrated enhanced visualization of breast morphology and cancerous tissue compared to conventional imaging. We show here the first results of PCI analyser-based imaging (ABI) in computed tomography (CT) mode on whole and large (>12 cm) tumour-bearing breast tissues. We demonstrate in this work the capability of the technique to work at high x-ray energies and to produce high-contrast images of large and complex specimens. One entire breast of an 80-year-old woman with invasive ductal cancer was imaged using ABI-CT with monochromatic 70 keV x-rays and an area detector with a 92 × 92 µm² pixel size. Sagittal slices were reconstructed from the acquired data and compared to corresponding histological sections. Comparison with conventional absorption-based CT was also performed. Five blinded radiologists quantitatively evaluated the visual aspects of the ABI-CT images with respect to sharpness, soft tissue contrast, tissue boundaries, and the discrimination of different structures/tissues. ABI-CT excellently depicted the entire 3D architecture of the breast volume by providing high-resolution, high-contrast images of the normal and cancerous breast tissues. These results are an important step in the evolution of PCI-CT towards its clinical implementation.
Landa, Rebecca J.; Haworth, Joshua L.; Nebel, Mary Beth
2016-01-01
Children with autism spectrum disorder (ASD) demonstrate a host of motor impairments that may share a common developmental basis with ASD core symptoms. School-age children with ASD exhibit particular difficulty with hand-eye coordination and appear to be less sensitive to visual feedback during motor learning. Sensorimotor deficits are observable as early as 6 months of age in children who later develop ASD; yet the interplay of early motor, visual and social skill development in ASD is not well understood. Integration of visual input with motor output is vital for the formation of internal models of action. Such integration is necessary not only to master a wide range of motor skills, but also to imitate and interpret the actions of others. Thus, closer examination of the early development of visual-motor deficits is of critical importance to ASD. In the present study of infants at high risk (HR) and low risk (LR) for ASD, we examined visual-motor coupling, or action anticipation, during a dynamic, interactive ball-rolling activity. We hypothesized that, compared to LR infants, HR infants would display decreased anticipatory response (perception-guided predictive action) to the approaching ball. We also examined visual attention before and during ball rolling to determine whether attention engagement contributed to differences in anticipation. Results showed that LR and HR infants demonstrated context appropriate looking behavior, both before and during the ball’s trajectory toward them. However, HR infants were less likely to exhibit context appropriate anticipatory motor response to the approaching ball (moving their arm/hand to intercept the ball) than LR infants. This finding did not appear to be driven by differences in motor skill between risk groups at 6 months of age and was extended to show an atypical predictive relationship between anticipatory behavior at 6 months and preference for looking at faces compared to objects at age 14 months in the HR group. PMID:27252667
Tunnel vision: sharper gradient of spatial attention in autism.
Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I
2013-04-17
Enhanced perception of detail has long been regarded as a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures (visual acuity, contrast discrimination, and flicker detection) is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.
StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.
Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei
2017-10-18
Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.
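StreamExplorer's clustering relies on a GPU-assisted SOM; the sketch below is a minimal CPU self-organizing map update loop on toy feature vectors, intended only to illustrate the clustering idea. Grid size, learning rate, and data are illustrative.

```python
# Hedged sketch: a minimal CPU self-organizing map (SOM) update loop on toy
# feature vectors, illustrating the clustering idea behind StreamExplorer's
# GPU-assisted SOM. Grid size, rates, and data are all illustrative.
import numpy as np

rng = np.random.default_rng(3)
grid_h, grid_w, dim = 8, 8, 16
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def som_step(x, weights, lr=0.1, sigma=1.5):
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)       # best-matching unit
    grid_dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * influence * (x - weights)                   # pull neighborhood toward x
    return bmu

for x in rng.random((1000, dim)):            # toy "tweet" feature vectors
    som_step(x, weights)
```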
Memory of Gender and Gait Direction from Biological Motion: Gender Fades Away but Directions Stay
ERIC Educational Resources Information Center
Poom, Leo
2012-01-01
The delayed discrimination methodology has been used to demonstrate a high-fidelity nondecaying visual short-term memory (VSTM) for so-called preattentive basic features. In the current study, I show that the nondecaying high VSTM precision is not restricted to basic features by using the same method to measure memory precision for gait direction…
Inviscid Limit for Damped and Driven Incompressible Navier-Stokes Equations in $\mathbb{R}^2$
NASA Astrophysics Data System (ADS)
Ramanah, D.; Raghunath, S.; Mee, D. J.; Rösgen, T.; Jacobs, P. A.
2007-08-01
Experiments to demonstrate the use of the background-oriented schlieren (BOS) technique in hypersonic impulse facilities are reported. BOS uses a simple optical set-up consisting of a structured background pattern, an electronic camera with a high shutter speed, and a high-intensity light source. The visualization technique is demonstrated in a small reflected shock tunnel with a Mach 4 conical nozzle, a nozzle supply pressure of 2.2 MPa, and a nozzle supply enthalpy of 1.8 MJ/kg. A 20° sharp circular cone and a model of the MUSES-C re-entry body were tested. The captured images were processed using PIV-style image analysis to visualize variations in the density field. The shock angle on the cone measured from the BOS images agreed with theoretical calculations to within 0.5°. Shock standoff distances could be measured from the BOS image for the re-entry body. Preliminary experiments are also reported in higher enthalpy facilities where flow luminosity can interfere with imaging of the background pattern.
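PIV-style processing of BOS image pairs ultimately reduces to estimating local shifts between interrogation windows. The sketch below estimates an integer-pixel shift by FFT cross-correlation on synthetic windows; it is a generic illustration, not the processing chain used in these experiments.

```python
# Hedged sketch: estimating the integer-pixel shift between two interrogation
# windows by FFT cross-correlation, the basic operation behind PIV-style
# processing of BOS image pairs. Toy synthetic windows only.
import numpy as np

def integer_shift(ref: np.ndarray, img: np.ndarray):
    """Return (dy, dx) by which img is displaced relative to ref (circular)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(4)
ref = rng.random((32, 32))
img = np.roll(ref, shift=(3, -2), axis=(0, 1))   # known displacement
print(integer_shift(ref, img))                   # -> (3, -2)
```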
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc
The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.
Neurosarcoidosis masquerading as a central nervous system tumor.
Elia, Maxwell; Kombo, Ninani; Huang, John
2017-01-01
To report a case of neurosarcoidosis with an isolated brain lesion mimicking a low-grade glioma. A 38-year-old woman presented with 2 weeks of blurry vision in the left eye. Ophthalmic examination, visual field testing, fluorescein angiography, laboratory testing, and MRI of the brain were performed. Ophthalmic examination revealed left-sided optic nerve infiltration, and MRI of the brain demonstrated a solitary lesion in the brain. The visual symptoms and ophthalmic examination improved significantly with initiation of high-dose oral prednisone. Because the MRI appearance was concerning for malignancy, a brain biopsy was performed. Pathology demonstrated gliosis consistent with a low-grade central nervous system (CNS) glioma. One year later, after initial loss to ophthalmic follow-up, the right optic nerve became involved, and the patient was again treated successfully for presumed ocular sarcoidosis. At this time, serial neuroimaging demonstrated enlargement of the CNS lesion, prompting rebiopsy. Rebiopsy demonstrated a noncaseating granuloma, confirming the diagnosis of neurosarcoidosis. The patient was treated with 20 mg of methotrexate weekly and a prednisone taper with improvement in visual and neurologic symptoms. The authors present an unusual case of neurosarcoidosis masquerading as a CNS glioma. In cases of solitary CNS granulomas, radiographically differentiating neurosarcoidosis from a glioma can be challenging. In this case, serial ophthalmic examination identifying sequential involvement of both optic nerves helped to identify the underlying cause of the CNS disease as sarcoidosis.
Background-Oriented Schlieren for Large-Scale and High-Speed Aerodynamic Phenomena
NASA Technical Reports Server (NTRS)
Mizukaki, Toshiharu; Borg, Stephen; Danehy, Paul M.; Murman, Scott M.; Matsumura, Tomoharu; Wakabayashi, Kunihiko; Nakayama, Yoshio
2015-01-01
Visualization of the flow field around a generic re-entry capsule in subsonic flow and shock wave visualization with cylindrical explosives have been conducted to demonstrate the sensitivity and applicability of background-oriented schlieren (BOS) for field experiments. The wind tunnel experiment suggests that BOS with a fine-pixel imaging device has a density-change detection sensitivity on the order of 10⁻⁵ in subsonic flow. In a laboratory setup, the structure of the shock waves generated by explosives has been successfully reconstructed by a computed tomography method combined with BOS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco
High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.
A simple and rapid method for high-resolution visualization of single-ion tracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omichi, Masaaki; Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017; Choi, Wookjin
2014-11-15
Prompt determination of the spatial points of single-ion tracks plays a key role in high-energy particle-induced cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N,N′-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method is exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at the nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of radicals generated along the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for visualization of the penumbra region of a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.
Comprehensive decision tree models in bioinformatics.
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from that of usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics.
Comprehensive Decision Tree Models in Bioinformatics
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
2012-01-01
Purpose: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. Methods: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from that of usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics. PMID:22479449
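A rough analogue of a dimension-constrained tree, using scikit-learn rather than the authors' visual-tuning environment, is sketched below; the dataset and constraint values are illustrative only.

```python
# Hedged sketch: constraining a decision tree purely by its dimensions
# (depth and leaf count), in the spirit of the visually tuned trees described
# above. Uses scikit-learn on a bundled dataset; the authors' environment and
# exact constraints differ.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # binary-class, many-attribute dataset

small_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8, random_state=0)
default_tree = DecisionTreeClassifier(random_state=0)

for name, clf in [("constrained", small_tree), ("default", default_tree)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```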
Gaze shifts during dual-tasking stair descent.
Miyasike-daSilva, Veronica; McIlroy, William E
2016-11-01
To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require minimum adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears a requirement to detect the last transition via gaze shifts or peripheral vision.
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort involved the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
Vatakis, Argiro; Maragos, Petros; Rodomagoulakis, Isidoros; Spence, Charles
2012-01-01
We investigated how the physical differences associated with the articulation of speech affect the temporal aspects of audiovisual speech perception. Video clips of consonants and vowels uttered by three different speakers were presented. The video clips were analyzed using an auditory-visual signal saliency model in order to compare signal saliency and behavioral data. Participants made temporal order judgments (TOJs) regarding which speech stream (auditory or visual) had been presented first. The sensitivity of participants' TOJs and the point of subjective simultaneity (PSS) were analyzed as a function of the place, manner of articulation, and voicing for consonants, and the height/backness of the tongue and lip-roundedness for vowels. We expected that in the case of the place of articulation and roundedness, where the visual speech signal is more salient, temporal perception of speech would be modulated by the visual speech signal. No such effect was expected for the manner of articulation or height. The results demonstrate that for place and manner of articulation, participants' temporal percept was affected (although not always significantly) by highly salient speech signals, with the visual signals requiring smaller visual leads at the PSS. This was not the case when height was evaluated. These findings suggest that in the case of audiovisual speech perception, a highly salient visual speech signal may lead to higher probabilities regarding the identity of the auditory signal that modulate the temporal window of multisensory integration of the speech stimulus. PMID:23060756
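A standard way to obtain the PSS and sensitivity from TOJ data is to fit a cumulative Gaussian to the proportion of "visual first" responses across stimulus onset asynchronies. The sketch below does this on synthetic data; it is not the authors' analysis code, and the SOA values and trial counts are assumptions.

```python
# Hedged sketch: estimating the point of subjective simultaneity (PSS) and the
# slope (sensitivity) of a temporal-order-judgment curve by fitting a
# cumulative Gaussian to synthetic "visual first" response proportions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(soa, pss, sigma):
    return norm.cdf(soa, loc=pss, scale=sigma)

soas = np.array([-240, -120, -60, 0, 60, 120, 240], dtype=float)  # ms, audio lead < 0
true_pss, true_sigma = 35.0, 80.0
rng = np.random.default_rng(5)
p_visual_first = psychometric(soas, true_pss, true_sigma)
observed = rng.binomial(40, p_visual_first) / 40                  # 40 trials per SOA

(pss_hat, sigma_hat), _ = curve_fit(psychometric, soas, observed, p0=[0.0, 50.0])
print(f"PSS ≈ {pss_hat:.1f} ms, sigma ≈ {sigma_hat:.1f} ms")
```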
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
ERIC Educational Resources Information Center
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2011-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Yi; Jiang, Huaiguang; Zhang, Yingchen
In this paper, a big data visualization platform is designed to discover hidden useful knowledge for smart grid (SG) operation, control, and situation awareness. The proliferation of smart sensors on both the grid side and the customer side can provide large volumes of heterogeneous data that collect information across all time spectrums. Extracting useful knowledge from this big-data pool is still challenging. In this paper, Apache Spark, an open source cluster computing framework, is used to process the big data to effectively discover the hidden knowledge. A high-speed communication architecture utilizing the Open System Interconnection (OSI) model is designed to transmit the data to a visualization platform. This visualization platform uses Google Earth, a global geographic information system (GIS), to link the geographical information with the SG knowledge and visualize the information in a user-defined fashion. The University of Denver's campus grid is used as a SG test bench, and several demonstrations are presented for the proposed platform.
Developmental dyslexia and vision
Quercia, Patrick; Feiss, Léonard; Michel, Carine
2013-01-01
Developmental dyslexia affects almost 10% of school-aged children and represents a significant public health problem. Its etiology is unknown. The consistent presence of phonological difficulties, combined with an inability to manipulate language sounds and perform grapheme–phoneme conversion, is widely acknowledged. Numerous scientific studies have also documented the presence of eye movement anomalies and deficits in the perception of low-contrast, low-spatial-frequency, and high-temporal-frequency visual information in dyslexics. Anomalies of visual attention with short visual attention spans have also been demonstrated in a large number of cases. Spatial orientation is also affected in dyslexics, who manifest a preference for spatial attention to the right. This asymmetry may be so pronounced that it leads to a veritable neglect of space on the left side. The evaluation of treatments proposed for dyslexics, whether speech-based or oriented towards the visual anomalies, remains fragmentary. The advent of new explanatory theories, notably cerebellar, magnocellular, or proprioceptive, is an incentive for ophthalmologists to enter the world of multimodal cognition given the importance of the eye's visual input. PMID:23690677
MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy
2016-11-01
We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized, and functionally related images). The visual texture of images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
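The Laplacian of a sampled charge density can be approximated on a regular grid with second-order finite differences, as in the hedged sketch below; the Gaussian "density" is purely illustrative, and the platform's actual feature-detection software is not reproduced.

```python
# Hedged sketch: a second-order finite-difference Laplacian of a scalar field
# sampled on a regular 3-d grid, the quantity volume-rendered in the platform
# described above. The Gaussian "density" here is purely illustrative.
import numpy as np

def laplacian(field: np.ndarray, spacing: float) -> np.ndarray:
    lap = np.zeros_like(field)
    for axis in range(field.ndim):
        lap += np.roll(field, 1, axis) - 2 * field + np.roll(field, -1, axis)
    return lap / spacing ** 2

n, h = 64, 0.1
x = (np.arange(n) - n / 2) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
density = np.exp(-(X**2 + Y**2 + Z**2))       # toy spherical "charge density"
print(laplacian(density, h)[n // 2, n // 2, n // 2])  # strongly negative at the peak
```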
Shi, Yue; Queener, Hope M.; Marsack, Jason D.; Ravikumar, Ayeswarya; Bedell, Harold E.; Applegate, Raymond A.
2013-01-01
Dynamic registration uncertainty of a wavefront-guided correction with respect to underlying wavefront error (WFE) inevitably decreases retinal image quality. A partial correction may improve average retinal image quality and visual acuity in the presence of registration uncertainties. The purpose of this paper is to (a) develop an algorithm to optimize wavefront-guided correction that improves visual acuity given registration uncertainty and (b) test the hypothesis that these corrections provide improved visual performance in the presence of these uncertainties as compared to a full-magnitude correction or a correction by Guirao, Cox, and Williams (2002). A stochastic parallel gradient descent (SPGD) algorithm was used to optimize the partial-magnitude correction for three keratoconic eyes based on measured scleral contact lens movement. Given its high correlation with logMAR acuity, the retinal image quality metric log visual Strehl was used as a predictor of visual acuity. Predicted values of visual acuity with the optimized corrections were validated by regressing measured acuity loss against predicted loss. Measured loss was obtained from normal subjects viewing acuity charts that were degraded by the residual aberrations generated by the movement of the full-magnitude correction, the correction by Guirao, and the optimized SPGD correction. Partial-magnitude corrections optimized with an SPGD algorithm provide at least a one-line improvement in average visual acuity over the full-magnitude correction and the correction by Guirao, given the registration uncertainty. This study demonstrates that it is possible to improve the average visual acuity by optimizing wavefront-guided correction in the presence of registration uncertainty. PMID:23757512
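A generic SPGD loop nudges a correction vector with paired random perturbations and keeps changes that raise a black-box quality metric. In the sketch below a toy quadratic objective stands in for the log visual Strehl metric; the gains, dimensions, and iteration counts are assumptions.

```python
# Hedged sketch: a generic stochastic parallel gradient descent (SPGD) loop
# that nudges a correction vector to maximize a black-box quality metric.
# A toy quadratic objective stands in for the log visual Strehl ratio used in
# the study; gains and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(6)
target = rng.standard_normal(10)              # "ideal" partial correction (unknown)

def metric(c):                                # black-box image-quality surrogate
    return -np.sum((c - target) ** 2)

c = np.zeros(10)                              # start from no correction
gain, amp = 0.2, 0.1
for _ in range(2000):
    delta = amp * rng.choice([-1.0, 1.0], size=c.shape)   # paired perturbation
    dJ = metric(c + delta) - metric(c - delta)
    c += gain * dJ * delta                     # SPGD update
print(f"final metric: {metric(c):.4f}")        # approaches 0 (the maximum)
```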
Crossmodal association of auditory and visual material properties in infants.
Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K
2018-06-18
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time the presence of a mapping between auditory material properties and visual material ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for a property of the "Metal" material later than for the "Wood" material, since infants form the visual property of "Metal" material after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that the material's familiarity might facilitate the development of multisensory processing during the first year of life.
A Study of Visualization for Mathematics Education
NASA Technical Reports Server (NTRS)
Daugherty, Sarah C.
2008-01-01
Graphical representations such as figures, illustrations, and diagrams play a critical role in mathematics and they are equally important in mathematics education. However, graphical representations in mathematics textbooks are static, i.e., they are used to illustrate only a specific example or a limited set of examples. By using computer software to visualize mathematical principles, there is virtually no limit to the number of specific cases and examples that can be demonstrated. However, we have not seen widespread adoption of visualization software in mathematics education. There are currently a number of software packages that provide visualization of mathematics for research and also software packages specifically developed for mathematics education. We conducted a survey of mathematics visualization software packages, summarized their features and user bases, and analyzed their limitations. In this survey, we focused on evaluating the software packages for their use with mathematical subjects adopted by institutions of secondary education in the United States (middle schools and high schools), including algebra, geometry, trigonometry, and calculus. We found that cost, complexity, and lack of flexibility are the major factors that hinder the widespread use of mathematics visualization software in education.
Extraction of skin-friction fields from surface flow visualizations as an inverse problem
NASA Astrophysics Data System (ADS)
Liu, Tianshu
2013-12-01
Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
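Because the governing equations are re-cast in the form of the optical flow equation, the extraction step can be illustrated with a standard variational (Horn-Schunck-type) solver. The sketch below shows that generic approach on synthetic frames; it is not the author's implementation, and the regularization weight, iteration count, and toy images are assumptions.

```python
import numpy as np

def horn_schunck(i1, i2, alpha=10.0, n_iter=200):
    """Horn-Schunck-style variational solver for an optical-flow-type equation
    Ix*u + Iy*v + It = 0 with a smoothness regularizer (weight alpha)."""
    Ix = np.gradient(i1, axis=1)
    Iy = np.gradient(i1, axis=0)
    It = i2 - i1
    u = np.zeros_like(i1)
    v = np.zeros_like(i1)
    neighbor_avg = lambda f: 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                                     + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    for _ in range(n_iter):
        u_bar, v_bar = neighbor_avg(u), neighbor_avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy example: a luminescent blob displaced by one pixel between two frames.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 32.0)**2 + (y - 32.0)**2) / 50.0)
frame2 = np.exp(-((x - 33.0)**2 + (y - 32.0)**2) / 50.0)
u, v = horn_schunck(frame1, frame2)
print("mean horizontal component near the blob:", round(u[28:36, 28:36].mean(), 3))
```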
Ehlers, Justis P.; Tao, Yuankai K.; Farsiu, Sina; Maldonado, Ramiro; Izatt, Joseph A.
2011-01-01
Purpose. To demonstrate an operating microscope-mounted spectral domain optical coherence tomography (MMOCT) system for human retinal and model surgery imaging. Methods. A prototype MMOCT system was developed to interface directly with an ophthalmic surgical microscope, to allow SDOCT imaging during surgical viewing. Nonoperative MMOCT imaging was performed in an Institutional Review Board–approved protocol in four healthy volunteers. The effect of surgical instrument materials on MMOCT imaging was evaluated while performing retinal surface, intraretinal, and subretinal maneuvers in cadaveric porcine eyes. The instruments included forceps, metallic and polyamide subretinal needles, and soft silicone-tipped instruments, with and without diamond dusting. Results. High-resolution images of the human retina were successfully obtained with the MMOCT system. The optical properties of surgical instruments affected the visualization of the instrument and the underlying retina. Metallic instruments (e.g., forceps and needles) showed high reflectivity with total shadowing below the instrument. Polyamide material had a moderate reflectivity with subtotal shadowing. Silicone instrumentation showed moderate reflectivity with minimal shadowing. Summed voxel projection MMOCT images provided clear visualization of the instruments, whereas the B-scans from the volume revealed details of the interactions between the tissues and the instrumentation (e.g., subretinal space cannulation, retinal elevation, or retinal holes). Conclusions. High-quality retinal imaging is feasible with an MMOCT system. Intraoperative imaging with model eyes provides high-resolution depth information including visualization of the instrument and intraoperative tissue manipulation. This study demonstrates a key component of an interactive platform that could provide enhanced information for the vitreoretinal surgeon. PMID:21282565
Ehlers, Justis P; Tao, Yuankai K; Farsiu, Sina; Maldonado, Ramiro; Izatt, Joseph A; Toth, Cynthia A
2011-05-16
To demonstrate an operating microscope-mounted spectral domain optical coherence tomography (MMOCT) system for human retinal and model surgery imaging. A prototype MMOCT system was developed to interface directly with an ophthalmic surgical microscope, to allow SDOCT imaging during surgical viewing. Nonoperative MMOCT imaging was performed in an Institutional Review Board-approved protocol in four healthy volunteers. The effect of surgical instrument materials on MMOCT imaging was evaluated while performing retinal surface, intraretinal, and subretinal maneuvers in cadaveric porcine eyes. The instruments included forceps, metallic and polyamide subretinal needles, and soft silicone-tipped instruments, with and without diamond dusting. High-resolution images of the human retina were successfully obtained with the MMOCT system. The optical properties of surgical instruments affected the visualization of the instrument and the underlying retina. Metallic instruments (e.g., forceps and needles) showed high reflectivity with total shadowing below the instrument. Polyamide material had a moderate reflectivity with subtotal shadowing. Silicone instrumentation showed moderate reflectivity with minimal shadowing. Summed voxel projection MMOCT images provided clear visualization of the instruments, whereas the B-scans from the volume revealed details of the interactions between the tissues and the instrumentation (e.g., subretinal space cannulation, retinal elevation, or retinal holes). High-quality retinal imaging is feasible with an MMOCT system. Intraoperative imaging with model eyes provides high-resolution depth information including visualization of the instrument and intraoperative tissue manipulation. This study demonstrates a key component of an interactive platform that could provide enhanced information for the vitreoretinal surgeon.
Visual phenomena induced by cosmic rays and accelerated particles
NASA Technical Reports Server (NTRS)
Tobias, C. A.; Budinger, T. F.; Leith, J. T.; Mamoon, A.; Chapman, P. K.
1972-01-01
Experiments, conducted at cyclotrons together with observations by Apollo astronauts, suggest with little doubt that cosmic nuclei interacting with the visual apparatus cause the phenomenon of light flashes seen on translunar and transearth coast over the past four Apollo missions. Other experiments with high and low energy neutrons and a helium ion beam suggest that slow protons and helium ions with a stopping power greater than 10^8 eV/(g/cm^2) can cause the phenomenon in the dark-adapted eye. It was demonstrated that charged particles induced by neutrons and helium ions can stimulate the visual apparatus. Some approaches to understanding the long-term mission effects of galactic cosmic nuclei interacting with man and his nervous system are outlined.
Iontophoresis and Flame Photometry: A Hybrid Interdisciplinary Experiment
ERIC Educational Resources Information Center
Sharp, Duncan; Cottam, Linzi; Bradley, Sarah; Brannigan, Jeanie; Davis, James
2010-01-01
The combination of reverse iontophoresis and flame photometry provides an engaging analytical experiment that gives first-year undergraduate students a flavor of modern drug delivery and analyte extraction techniques while reinforcing core analytical concepts. The experiment provides a highly visual demonstration of the iontophoresis technique and…
The incomplete angler: effects created by visual omission.
Findlay, John M
2008-01-01
The Fisherman, a picture painted by Jean-Louis Forain, demonstrates an interesting interaction between low and high level perceptual processing. The isolation and tranquillity of the fisherman in the picture are enhanced by the absence of his reflection, yet perceivers are rarely aware of the omission.
De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T
2012-02-08
Practice-induced improvements in skilled performance reflect "offline" consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists whose robustness is controlled by high-level, contextual factors.
Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.
2017-01-01
The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924
Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske
2018-06-01
External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.
Parker, Jason G; Zalusky, Eric J; Kirbas, Cemil
2014-01-01
Background Accurate mapping of visual function and selective attention using fMRI is important in the study of human performance as well as in presurgical treatment planning of lesions in or near visual centers of the brain. Conjunctive visual search (CVS) is a useful tool for mapping visual function during fMRI because of its greater activation extent compared with high-capacity parallel search processes. Aims The purpose of this work was to develop and evaluate a CVS that was capable of generating consistent activation in the basic and higher level visual areas of the brain by using a high number of distractors as well as an optimized contrast condition. Materials and methods Images from 10 healthy volunteers were analyzed and brain regions of greatest activation and deactivation were determined using a nonbiased decomposition of the results at the hemisphere, lobe, and gyrus levels. The results were quantified in terms of activation and deactivation extent and mean z-statistic. Results The proposed CVS was found to generate robust activation of the occipital lobe, as well as regions in the middle frontal gyrus associated with coordinating eye movements and in regions of the insula associated with task-level control and focal attention. As expected, the task demonstrated deactivation patterns commonly implicated in the default-mode network. Further deactivation was noted in the posterior region of the cerebellum, most likely associated with the formation of optimal search strategy. Conclusion We believe the task will be useful in studies of visual and selective attention in the neuroscience community as well as in mapping visual function in clinical fMRI. PMID:24683515
Sadananda, Monika; Bischof, Hans-Joachim
2006-10-16
Two forebrain areas in the hyperpallium apicale and in the lateral nidopallium of isolated male zebra finches are highly active (2-deoxyglucose technique) on exposure to females for the first time, that is first courtship. These areas also demonstrate enhanced neuronal plasticity when screened with c-fos immunocytochemistry. Both are areas involved in the processing of visual information conveyed by the two major visual pathways in birds, strengthening our hypothesis that courtship in the zebra finch is a visually guided behaviour. First courtship and chased birds show enhanced c-fos induction in the hyperpallial area, which could represent neuronal activity reflecting changes in the immediate environment. The enhanced expression of fos in first courtship birds in lateral nidopallial neurons indicates imminent long-lasting changes at the synaptic level that form the substrate for imprinting, a stable form of learning in birds.
A visual servo-based teleoperation robot system for closed diaphyseal fracture reduction.
Li, Changsheng; Wang, Tianmiao; Hu, Lei; Zhang, Lihai; Du, Hailong; Zhao, Lu; Wang, Lifeng; Tang, Peifu
2015-09-01
Common fracture treatments include open reduction and intramedullary nailing technology. However, these methods have disadvantages such as intraoperative X-ray radiation, delayed union or nonunion and postoperative rotation. Robots provide a novel solution to the aforementioned problems while posing new challenges. Against this scientific background, we develop a visual servo-based teleoperation robot system. In this article, we present a robot system, analyze the visual servo-based control system in detail and develop path planning for fracture reduction, inverse kinematics, and output forces of the reduction mechanism. A series of experimental tests is conducted on a bone model and an animal bone. The experimental results demonstrate the feasibility of the robot system. The robot system uses preoperative computed tomography data to realize high precision and perform minimally invasive teleoperation for fracture reduction via the visual servo-based control system while protecting surgeons from radiation. © IMechE 2015.
Brisson, Benoit; Leblanc, Emilie; Jolicoeur, Pierre
2009-02-01
It has recently been demonstrated that a lateralized distractor that matches the individual's top-down control settings elicits an N2pc wave, an electrophysiological index of the focus of visual-spatial attention, indicating that contingent capture has a visual-spatial locus. Here, we investigated whether contingent capture required capacity-limited central resources by incorporating a contingent capture task as the second task of a psychological refractory period (PRP) dual-task paradigm. The N2pc was used to monitor where observers were attending while they performed concurrent central processing known to cause the PRP effect. The N2pc elicited by the lateralized distractor that matched the top-down control settings was attenuated in high concurrent central load conditions, indicating that although involuntary, the deployment of visual-spatial attention occurring during contingent capture depends on capacity-limited central resources.
VIPER: Virtual Intelligent Planetary Exploration Rover
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Flueckiger, Lorenzo; Nguyen, Laurent; Washington, Richard
2001-01-01
Simulation and visualization of rover behavior are critical capabilities for scientists and rover operators to construct, test, and validate plans for commanding a remote rover. The VIPER system links these capabilities, using a high-fidelity virtual-reality (VR) environment, a kinematically accurate simulator, and a flexible plan executive to allow users to simulate and visualize possible execution outcomes of a plan under development. This work is part of a larger vision of a science-centered rover control environment, where a scientist may inspect and explore the environment via VR tools, specify science goals, and visualize the expected and actual behavior of the remote rover. The VIPER system is constructed from three generic systems, linked together via a minimal amount of customization into the integrated system. The complete system points out the power of combining plan execution, simulation, and visualization for envisioning rover behavior; it also demonstrates the utility of developing generic technologies, which can be combined in novel and useful ways.
Cronly-Dillon, J; Persaud, K; Gregory, R P
1999-01-01
This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process which selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic musical melody which resembles a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, in a manner that allows a subject to patch together and bind the different features of the object into a mental percept of a single recognizable entity. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, natural and urban scenes, etc., translated into sound and presented to the subject in polyphonic musical form. PMID:10643086
Scientific Visualization to Study Flux Transfer Events at the Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Rastatter, Lutz; Kuznetsova, Maria M.; Sibeck, David G.; Berrios, David H.
2011-01-01
In this paper we present results of modeling of reconnection at the dayside magnetopause with subsequent development of flux transfer event signatures. The tools used include new methods that have been added to the suite of visualization methods that are used at the Community Coordinated Modeling Center (CCMC). Flux transfer events result from localized reconnection that connect magnetosheath magnetic field and plasma with magnetospheric fields and plasma and results in flux rope structures that span the dayside magnetopause. The onset of flux rope formation and the three-dimensional structure of flux ropes are studied as they have been modeled by high-resolution magnetohydrodynamic simulations of the dayside magnetosphere of the Earth. We show that flux transfer events are complex three-dimensional structures that require modern visualization and analysis techniques. Two suites of visualization methods are presented and we demonstrate the usefulness of those methods through the CCMC web site to the general science user.
Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.; Long, Kurtis R.
2005-01-01
Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
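A minimal sketch of the two-stage "generic decoding" idea described above: a linear model predicts visual feature values from voxel patterns, and an unseen category is then identified by correlating the predicted features with each category's computed feature vector. All data here are synthetic placeholders, and ridge regression stands in for whatever feature decoder is used in practice.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_voxels, n_features, n_train, n_categories = 200, 50, 300, 20

# Synthetic stand-ins: per-category feature vectors (e.g., CNN features averaged
# per category) and fMRI patterns generated as a noisy linear mapping of them.
category_features = rng.normal(size=(n_categories, n_features))
train_labels = rng.integers(0, n_categories, size=n_train)
true_weights = rng.normal(size=(n_features, n_voxels))
train_fmri = category_features[train_labels] @ true_weights \
             + rng.normal(scale=2.0, size=(n_train, n_voxels))

# Stage 1: learn to predict each feature dimension from voxel patterns.
decoder = Ridge(alpha=10.0).fit(train_fmri, category_features[train_labels])

# Stage 2: identify a "seen" category (possibly outside the training set in the
# original study) by correlating predicted features with each category's features.
test_label = 7
test_fmri = category_features[test_label] @ true_weights \
            + rng.normal(scale=2.0, size=(1, n_voxels))
pred = decoder.predict(test_fmri)[0]
corrs = [np.corrcoef(pred, cf)[0, 1] for cf in category_features]
print("identified category:", int(np.argmax(corrs)), "(true:", test_label, ")")
```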
Addition of visual noise boosts evoked potential-based brain-computer interface.
Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili
2014-05-14
Although noise has a proven beneficial role in brain functions, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise can achieve better offline and online performance due to enhancement of periodic components in brain responses, which was accompanied by suppression of high harmonics. Offline results showed a bell-shaped, resonance-like dependence on noise level, and online performance improvements of 7-36% were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems which commonly possess a low-pass property. Our work demonstrated that noise could boost BCIs in addressing human needs.
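The bell-shaped dependence on noise level is the signature of stochastic resonance, which the toy sketch below reproduces with a subthreshold sinusoid passed through a hard threshold detector. This is only a generic illustration of the effect, not the study's EEG stimulation or analysis; the drive frequency, threshold, and noise levels are arbitrary.

```python
import numpy as np

def power_at_drive(noise_std, freq=10.0, fs=250.0, duration=20.0, threshold=1.0, seed=0):
    """Pass a subthreshold sinusoid plus Gaussian noise through a hard threshold
    and return the output power at the drive frequency (classic SR demonstration)."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration, 1.0 / fs)
    drive = 0.8 * np.sin(2 * np.pi * freq * t)              # subthreshold periodic input
    out = (drive + noise_std * rng.normal(size=t.size)) > threshold
    spectrum = np.abs(np.fft.rfft(out - out.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Power at the drive frequency rises and then falls as noise increases (bell shape).
for sigma in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(f"noise std {sigma:4.2f} -> power at 10 Hz: {power_at_drive(sigma):.1f}")
```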
Flow, affect and visual creativity.
Cseh, Genevieve M; Phillips, Louise H; Pearson, David G
2015-01-01
Flow (being in the zone) is purported to have positive consequences in terms of affect and performance; however, there is no empirical evidence about these links in visual creativity. Positive affect often--but inconsistently--facilitates creativity, and both may be linked to experiencing flow. This study aimed to determine relationships between these variables within visual creativity. Participants performed the creative mental synthesis task to simulate the creative process. Affect change (pre- vs. post-task) and flow were measured via questionnaires. The creativity of synthesis drawings was rated objectively and subjectively by judges. Findings empirically demonstrate that flow is related to affect improvement during visual creativity. Affect change was linked to productivity and self-rated creativity, but no other objective or subjective performance measures. Flow was unrelated to all external performance measures but was highly correlated with self-rated creativity; flow may therefore motivate perseverance towards eventual excellence rather than provide direct cognitive enhancement.
NASA Astrophysics Data System (ADS)
Smuga-Otto, M. J.; Garcia, R. K.; Knuteson, R. O.; Martin, G. D.; Flynn, B. M.; Hackel, D.
2006-12-01
The University of Wisconsin-Madison Space Science and Engineering Center (UW-SSEC) is developing tools to help scientists realize the potential of high spectral resolution instruments for atmospheric science. Upcoming satellite spectrometers like the Cross-track Infrared Sounder (CrIS), experimental instruments like the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) and proposed instruments like the Hyperspectral Environmental Suite (HES) within the GOES-R project will present a challenge in the form of the overwhelmingly large amounts of continuously generated data. Current and near-future workstations will have neither the storage space nor computational capacity to cope with raw spectral data spanning more than a few minutes of observations from these instruments. Schemes exist for processing raw data from hyperspectral instruments currently in testing, that involve distributed computation across clusters. Data, which for an instrument like GIFTS can amount to over 1.5 Terabytes per day, is carefully managed on Storage Area Networks (SANs), with attention paid to proper maintenance of associated metadata. The UW-SSEC is preparing a demonstration integrating these back-end capabilities as part of a larger visualization framework, to assist scientists in developing new products from high spectral data, sourcing data volumes they could not otherwise manage. This demonstration focuses on managing storage so that only the data specifically needed for the desired product are pulled from the SAN, and on running computationally expensive intermediate processing on a back-end cluster, with the final product being sent to a visualization system on the scientist's workstation. Where possible, existing software and solutions are used to reduce cost of development. The heart of the computing component is the GIFTS Information Processing System (GIPS), developed at the UW- SSEC to allow distribution of processing tasks such as conversion of raw GIFTS interferograms into calibrated radiance spectra, and retrieving temperature and water vapor content atmospheric profiles from these spectra. The hope is that by demonstrating the capabilities afforded by a composite system like the one described here, scientists can be convinced to contribute further algorithms in support of this model of computing and visualization.
Audibility and visual biasing in speech perception
NASA Astrophysics Data System (ADS)
Clement, Bart Richard
Although speech perception has been considered a predominantly auditory phenomenon, large benefits from vision in degraded acoustic conditions suggest integration of audition and vision. More direct evidence of this comes from studies of audiovisual disparity that demonstrate vision can bias and even dominate perception (McGurk & MacDonald, 1976). It has been observed that hearing-impaired listeners demonstrate more visual biasing than normally hearing listeners (Walden et al., 1990). It is argued here that stimulus audibility must be equated across groups before true differences can be established. In the present investigation, effects of visual biasing on perception were examined as audibility was degraded for 12 young normally hearing listeners. Biasing was determined by quantifying the degree to which listener identification functions for a single synthetic auditory /ba-da-ga/ continuum changed across two conditions: (1) an auditory-only listening condition; and (2) an auditory-visual condition in which every item of the continuum was synchronized with visual articulations of the consonant-vowel (CV) tokens /ba/ and /ga/, as spoken by each of two talkers. Audibility was altered by presenting the conditions in quiet and in noise at each of three signal-to-noise (S/N) ratios. For the visual-/ba/ context, large effects of audibility were found. As audibility decreased, visual biasing increased. A large talker effect also was found, with one talker eliciting more biasing than the other. An independent lipreading measure demonstrated that this talker was more visually intelligible than the other. For the visual-/ga/ context, audibility and talker effects were less robust, possibly obscured by strong listener effects, which were characterized by marked differences in perceptual processing patterns among participants. Some demonstrated substantial biasing whereas others demonstrated little, indicating a strong reliance on audition even in severely degraded acoustic conditions. Listener effects were not correlated with lipreading performance. The large effect of audibility suggests that conclusions regarding an increased reliance on vision among hearing-impaired listeners were premature, and that accurate comparisons only can be made after equating audibility. Further, if after such control, individual hearing-impaired listeners demonstrate the processing differences that were demonstrated in the present investigation, then these findings have the potential to impact aural rehabilitation strategies.
A visual analysis of multi-attribute data using pixel matrix displays
NASA Astrophysics Data System (ADS)
Hao, Ming C.; Dayal, Umeshwar; Keim, Daniel; Schreck, Tobias
2007-01-01
Charts and tables are commonly used to visually analyze data. These graphics are simple and easy to understand, but charts show only highly aggregated data and present only a limited number of data values while tables often show too many data values. As a consequence, these graphics may either lose or obscure important information, so different techniques are required to monitor complex datasets. Users need more powerful visualization techniques to digest and compare detailed multi-attribute data to analyze the health of their business. This paper proposes an innovative solution based on the use of pixel-matrix displays to represent transaction-level information. With pixel-matrices, users can visualize areas of importance at a glance, a capability not provided by common charting techniques. We present our solutions to use colored pixel-matrices in (1) charts for visualizing data patterns and discovering exceptions, (2) tables for visualizing correlations and finding root-causes, and (3) time series for visualizing the evolution of long-running transactions. The solutions have been applied with success to product sales, Internet network performance analysis, and service contract applications demonstrating the benefits of our method over conventional graphics. The method is especially useful when detailed information is a key part of the analysis.
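A minimal sketch of the pixel-matrix idea: instead of one aggregate value per chart bar, every underlying transaction is drawn as a colored pixel in a small matrix. The layout, ordering rule, and toy sales data below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import matplotlib.pyplot as plt

def pixel_matrix(values, n_cols=20):
    """Lay individual data values out as colored pixels in a small matrix,
    so a 'bar' shows every underlying transaction instead of one aggregate."""
    n_rows = int(np.ceil(len(values) / n_cols))
    grid = np.full(n_rows * n_cols, np.nan)
    grid[:len(values)] = np.sort(values)      # ordering makes patterns/outliers visible
    return grid.reshape(n_rows, n_cols)

rng = np.random.default_rng(1)
transactions = rng.lognormal(mean=3.0, sigma=0.6, size=350)   # toy sales amounts
plt.imshow(pixel_matrix(transactions), cmap="viridis", aspect="auto")
plt.colorbar(label="transaction value")
plt.title("Pixel-matrix view of one chart bar (toy data)")
plt.show()
```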
Semantics by analogy for illustrative volume visualization
Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard
2012-01-01
We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
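The rule-based mapping evaluated with fuzzy logic can be illustrated with a toy rule of the kind such systems use. The membership functions, attribute names, and thresholds below are invented for illustration; in the paper the rules are specified graphically and evaluated inside augmented shaders rather than in Python.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership function."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def evaluate_rule(density, gradient_mag):
    """Toy rule: IF density is 'bone-like' AND gradient is 'high' THEN opacity is high.
    Fuzzy AND is taken as the minimum of the antecedent memberships."""
    bone_like = trapezoid(density, 0.55, 0.65, 0.9, 1.0)
    high_gradient = trapezoid(gradient_mag, 0.3, 0.5, 1.0, 1.1)
    return min(bone_like, high_gradient)   # rule activation drives the visual attribute

for sample in [(0.7, 0.8), (0.7, 0.1), (0.3, 0.8)]:
    print(sample, "-> opacity weight", round(evaluate_rule(*sample), 2))
```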
Visual Saliency Detection Based on Multiscale Deep CNN Features.
Guanbin Li; Yizhou Yu
2016-11-01
Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. The penultimate layer of our neural network has been confirmed to be a discriminative high-level feature vector for saliency detection, which we call deep contrast feature. To generate a more robust feature, we integrate handcrafted low-level features with our deep contrast feature. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving the state-of-the-art performance on all public benchmarks, improving the F-measure by 6.12% and 10%, respectively, on the DUT-OMRON data set and our new data set (HKU-IS), and lowering the mean absolute error by 9% and 35.3%, respectively, on these two data sets.
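The overall architecture, CNN features extracted at three nested scales feeding fully connected layers that output a saliency score, can be sketched as below. The backbone, input sizes, and layer widths are placeholders, not the paper's exact network, and the handcrafted low-level features are omitted.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiscaleSaliency(nn.Module):
    """Toy multiscale saliency scorer: CNN features extracted from three nested
    windows around a region, concatenated and fed to fully connected layers."""
    def __init__(self):
        super().__init__()
        self.backbone = models.vgg16(weights=None).features    # placeholder feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(
            nn.Linear(3 * 512, 300), nn.ReLU(),
            nn.Linear(300, 1), nn.Sigmoid(),                    # saliency score in [0, 1]
        )

    def forward(self, region, context, full_image):
        feats = [self.pool(self.backbone(x)).flatten(1)         # one vector per scale
                 for x in (region, context, full_image)]
        return self.head(torch.cat(feats, dim=1))

model = MultiscaleSaliency()
crops = [torch.randn(1, 3, 64, 64), torch.randn(1, 3, 128, 128), torch.randn(1, 3, 224, 224)]
print(model(*crops).item())   # toy saliency score for one region
```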
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
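For readers unfamiliar with the base technique, the sketch below draws a basic parallel-coordinates plot of simulation runs over several parameters; the nesting of resolutions that defines NPCP is not reproduced, and the convective parameter names and toy data are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

def parallel_coordinates(data, labels):
    """Basic parallel-coordinates plot: one vertical axis per parameter,
    one polyline per simulation run (values normalized per axis)."""
    mins, maxs = data.min(axis=0), data.max(axis=0)
    norm = (data - mins) / (maxs - mins + 1e-12)
    xs = np.arange(data.shape[1])
    fig, ax = plt.subplots()
    for row in norm:
        ax.plot(xs, row, alpha=0.4)
    ax.set_xticks(xs)
    ax.set_xticklabels(labels)
    ax.set_yticks([])
    return ax

rng = np.random.default_rng(2)
runs = rng.normal(size=(40, 4)) @ rng.normal(size=(4, 4))   # toy correlated parameter settings
parallel_coordinates(runs, ["param_A", "param_B", "param_C", "param_D"])
plt.show()
```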
Developmental Trajectory of McGurk Effect Susceptibility in Children and Adults With Amblyopia.
Narinesingh, Cindy; Goltz, Herbert C; Raashid, Rana Arham; Wong, Agnes M F
2015-03-05
The McGurk effect is an audiovisual illusion that involves the concurrent presentation of a phoneme (auditory syllable) and an incongruent viseme (visual syllable). Adults with amblyopia show less susceptibility to this illusion than visually normal controls, even when viewing binocularly. The present study investigated the developmental trajectory of McGurk effect susceptibility in adults, older children (10-17 years), and younger children (4-9 years) with amblyopia. A total of 62 participants with amblyopia (22 adults, 12 older children, 28 younger children) and 66 visually normal controls (25 adults, 17 older children, 24 younger children) viewed videos that combined phonemes and visemes, and were asked to report what they heard. Videos with congruent (auditory and visual matching) and incongruent (auditory and visual not matching) stimuli were presented. Incorrect responses on incongruent trials correspond to high McGurk effect susceptibility, indicating that the viseme influenced the phoneme. Participants with amblyopia (28.0% ± 3.3%) demonstrated a less consistent McGurk effect than visually normal controls (15.2% ± 2.3%) across all age groups (P = 0.0024). Effect susceptibility increased with age (P = 0.0003) for amblyopic participants and controls. Both groups showed a similar response pattern to different speakers and syllables, but amblyopic participants invariably demonstrated a less consistent effect. Amblyopia is associated with reduced McGurk effect susceptibility in children and adults. Our findings indicate that the differences do not simply indicate delayed development in children with amblyopia; rather, they represent permanent alterations that persist into adulthood. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield
Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.
2014-01-01
Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method the time it takes for interocularly suppressed stimuli to breach the threshold of visibility, is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476
4-D Visualization of Seismic and Geodetic Data of the Big Island of Hawai'i
NASA Astrophysics Data System (ADS)
Burstein, J. A.; Smith-Konter, B. R.; Aryal, A.
2017-12-01
For decades Hawai'i has served as a natural laboratory for studying complex interactions between magmatic and seismic processes. Investigating characteristics of these processes, as well as the crustal response to major Hawaiian earthquakes, requires a synthesis of seismic and geodetic data and models. Here, we present a 4-D visualization of the Big Island of Hawai'i that investigates geospatial and temporal relationships of seismicity, seismic velocity structure, and GPS crustal motions to known volcanic and seismically active features. Using the QPS Fledermaus visualization package, we compile 90 m resolution topographic data from NASA's Shuttle Radar Topography Mission (SRTM) and 50 m resolution bathymetric data from the Hawaiian Mapping Research Group (HMRG) with a high-precision earthquake catalog of more than 130,000 events from 1992-2009 [Matoza et al., 2013] and a 3-D seismic velocity model of Hawai'i [Lin et al., 2014] based on seismic data from the Hawaiian Volcano Observatory (HVO). Long-term crustal motion vectors are integrated into the visualization from HVO GPS time-series data. These interactive data sets reveal well-defined seismic structure near the summit areas of Mauna Loa and Kilauea volcanoes, where high Vp and high Vp/Vs anomalies at 5-12 km depth, as well as clusters of low magnitude (M < 3.5) seismicity, are observed. These areas of high Vp and high Vp/Vs are interpreted as mafic dike complexes and the surrounding seismic clusters are associated with shallow magma processes. GPS data are also used to help identify seismic clusters associated with the steady crustal detachment of the south flank of Kilauea's East Rift Zone. We also investigate the fault geometry of the 2006 M6.7 Kiholo Bay earthquake event by analyzing elastic dislocation deformation modeling results [Okada, 1985] and HVO GPS and seismic data of this event. We demonstrate the 3-D fault mechanisms of the Kiholo Bay main shock as a combination of strike-slip and dip-slip components (net slip 0.55 m) delineating a 30 km east-west striking, southward-dipping fault plane, occurring at 39 km depth. This visualization serves as a resource for advancing scientific analyses of Hawaiian seismic processes, as well as an interactive educational tool for demonstrating the geospatial and geophysical structure of the Big Island of Hawai'i.
Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.
2014-01-01
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native-signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091
Cant, Jonathan S.; Xu, Yaoda
2015-01-01
Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. PMID:24964917
Visual Feedback Dominates the Sense of Agency for Brain-Machine Actions
Evans, Nathan; Gale, Steven; Schurger, Aaron; Blanke, Olaf
2015-01-01
Recent advances in neuroscience and engineering have led to the development of technologies that permit the control of external devices through real-time decoding of brain activity (brain-machine interfaces; BMI). Though the feeling of controlling bodily movements (sense of agency; SOA) has been well studied and a number of well-defined sensorimotor and cognitive mechanisms have been put forth, very little is known about the SOA for BMI-actions. Using an on-line BMI, and verifying that our subjects achieved a reasonable level of control, we sought to describe the SOA for BMI-mediated actions. Our results demonstrate that discrepancies between decoded neural activity and its resultant real-time sensory feedback are associated with a decrease in the SOA, similar to SOA mechanisms proposed for bodily actions. However, if the feedback discrepancy serves to correct a poorly controlled BMI-action, then the SOA can be high and can increase with increasing discrepancy, demonstrating the dominance of visual feedback on the SOA. Taken together, our results suggest that bodily and BMI-actions rely on common mechanisms of sensorimotor integration for agency judgments, but that visual feedback dominates the SOA in the absence of overt bodily movements or proprioceptive feedback, however erroneous the visual feedback may be. PMID:26066840
Östling, Robert; Börstell, Carl; Courtaux, Servane
2018-01-01
We use automatic processing of 120,000 sign videos in 31 different sign languages to show a cross-linguistic pattern for two types of iconic form–meaning relationships in the visual modality. First, we demonstrate that the degree of inherent plurality of concepts, based on individual ratings by non-signers, strongly correlates with the number of hands used in the sign forms encoding the same concepts across sign languages. Second, we show that certain concepts are iconically articulated around specific parts of the body, as predicted by the associational intuitions by non-signers. The implications of our results are both theoretical and methodological. With regard to theoretical implications, we corroborate previous research by demonstrating and quantifying, using a much larger material than previously available, the iconic nature of languages in the visual modality. As for the methodological implications, we show how automatic methods are, in fact, useful for performing large-scale analysis of sign language data, to a high level of accuracy, as indicated by our manual error analysis. PMID:29867684
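The first analysis, correlating non-signers' plurality ratings with the number of hands used across sign languages, amounts to a correlation over concepts. The tiny sketch below shows that computation on synthetic placeholder data; the ratings, hand counts, and effect size are not the study's.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_concepts = 100
plurality_rating = rng.uniform(1.0, 7.0, size=n_concepts)       # toy non-signer ratings
# Toy "mean number of hands" per concept, loosely tied to the rating plus noise.
mean_hands = 1.0 + (plurality_rating - 1.0) / 6.0 + rng.normal(scale=0.15, size=n_concepts)

r, p = pearsonr(plurality_rating, mean_hands)
print(f"r = {r:.2f}, p = {p:.1e}")
```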
Introducing the Big Knowledge to Use (BK2U) challenge.
Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan
2017-01-01
The purpose of the Big Data to Knowledge initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use. Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK (rule BK) and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule BK for drug-drug interaction discovery. © 2016 New York Academy of Sciences.
Introducing the Big Knowledge to Use (BK2U) challenge
Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan
2016-01-01
The purpose of the Big Data to Knowledge (BD2K) initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use (BK2U). Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule-based BK for drug–drug interaction discovery. PMID:27750400
The visual discrimination of negative facial expressions by younger and older adults.
Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley
2013-04-05
Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
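Discrimination ability quantified with signal detection measures is typically summarized as d′, the difference between the z-transformed hit and false-alarm rates. The sketch below shows that standard computation with a simple correction for extreme proportions; the trial counts are illustrative, not the study's data.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a small correction
    to keep rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts for one emotion pair at low intensity (not the study's data).
print(round(d_prime(hits=34, misses=14, false_alarms=10, correct_rejections=38), 2))
```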
Asymmetries in visual search for conjunctive targets.
Cohen, A
1993-08-01
Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for 2 levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.
High-fidelity data embedding for image annotation.
He, Shan; Kirovski, Darko; Wu, Min
2009-02-01
High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions.
Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.
2012-01-01
College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087
Infants' prospective control during object manipulation in an uncertain environment.
Gottwald, Janna M; Gredebäck, Gustaf
2015-08-01
This study investigates how infants use visual and sensorimotor information to prospectively control their actions. We gave 14-month-olds two objects of different weight and observed how high they were lifted, using a Qualisys Motion Capture System. In one condition, the two objects were visually distinct (different color condition) in another they were visually identical (same color condition). Lifting amplitudes of the first movement unit were analyzed in order to assess prospective control. Results demonstrate that infants lifted a light object higher than a heavy object, especially when vision could be used to assess weight (different color condition). When being confronted with two visually identical objects of different weight (same color condition), infants showed a different lifting pattern than what could be observed in the different color condition, expressed by a significant interaction effect between object weight and color condition on lifting amplitude. These results indicate that (a) visual information about object weight can be used to prospectively control lifting actions and that (b) infants are able to prospectively control their lifting actions even without visual information about object weight. We argue that infants, in the absence of reliable visual information about object weight, heighten their dependence on non-visual information (tactile, sensorimotor memory) in order to estimate weight and pre-adjust their lifting actions in a prospective manner.
A Simple Model of Hox Genes: Bone Morphology Demonstration
ERIC Educational Resources Information Center
Shmaefsky, Brian
2008-01-01
Visual demonstrations of abstract scientific concepts are effective strategies for enhancing content retention (Shmaefsky 2004). The concepts associated with gene regulation of growth and development are particularly complex and are well suited for teaching with visual models. This demonstration provides a simple and accurate model of Hox gene…
NASA Technical Reports Server (NTRS)
Hasler, A. Fritz
1999-01-01
The Etheater presents visualizations that span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI-Onyx Graphics-Supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran and Linda. These storms have recently been featured on the covers of National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on national and international network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS. The visualizations are produced by the NASA Goddard Visualization & Analysis Laboratory and Scientific Visualization Studio, as well as other Goddard and NASA groups, using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the Earth Science ETheater 1999 recently presented in Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York, and Dallas. The presentation Jan 11-14 at the AMS meeting in Dallas used a 4-CPU SGI/CRAY Onyx Infinite Reality Super Graphics Workstation with 8 GB RAM and a terabyte disk at 3840 x 1024 resolution, with triple synchronized BarcoReality 9200 projectors on a 60-ft-wide screen. Visualizations will also be featured from the new Earth Today Exhibit, which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense HyperImage remote sensing datasets and three-dimensional numerical model results. We call the data from many new Earth-sensing satellites HyperImage datasets because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing HyperImage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS). Use of the DISS as a high-performance Next Generation Internet (NGI) testbed will be illustrated with VisAnalysis of: 1) El Nino SSTs and NDVI response; 2) the latest GOES 10 5-min rapid scans, a 26-day, 5000-frame movie of March and April 1998 weather and tornadic storms; 3) TRMM rainfall and lightning; 4) GOES 9 satellite images/winds and NOAA aircraft radar of Hurricane Luis; 5) lightning detector data merged with GOES image sequences; 6) Japanese GMS, TRMM, & ADEOS data; 7) Chinese FY2 data; 8) Meteosat & ERS/ATSR data; and 9) synchronized manipulation of multiple 3D numerical model views. The Image SpreadSheet has been highly successful in producing Earth science visualizations for public outreach.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
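The magnitude and phase of a signal downconverted to an intermediate frequency can be recovered by mixing with a complex reference and averaging (lock-in style detection). The sketch below illustrates the idea in Python on a synthetic pixel time series; the intermediate frequency, sampling rate, and variable names are assumptions for illustration and this is not the authors' DSP implementation.

```python
import numpy as np

def demodulate_if(pixel_series, fs, f_if):
    """Estimate magnitude and phase of a pixel time series at the
    intermediate frequency f_if (Hz), sampled at fs (Hz)."""
    t = np.arange(len(pixel_series)) / fs
    ref = np.exp(-2j * np.pi * f_if * t)   # complex reference oscillator
    z = np.mean(pixel_series * ref)        # mixing followed by averaging
    return 2 * np.abs(z), np.angle(z)

# Synthetic check: a 5 kHz tone sampled at 50 kHz over an integer number of cycles
fs, f_if = 50_000.0, 5_000.0
t = np.arange(1000) / fs
sig = 0.8 * np.cos(2 * np.pi * f_if * t + 0.3)
mag, phase = demodulate_if(sig, fs, f_if)
print(round(mag, 3), round(phase, 3))      # ~0.8, ~0.3
```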
Detection, prevention, and rehabilitation of amblyopia.
Spiritus, M
1997-10-01
The necessity of visual preschool screening for reducing the prevalence of amblyopia is widely accepted. The beneficial results of large-scale screening programs conducted in Scandinavia are reported. Screening monocular visual acuity at 3.5 to 4 years of age appears to be an excellent basis for detecting and treating amblyopia and an acceptable compromise between the pitfalls encountered in screening younger children and the cost-to-benefit ratio. In this respect, several preschoolers' visual acuity charts have been evaluated. Small-target random stereotests and binocular suppression tests have also recently been developed with the aim of correcting the many false negatives (anisometropic amblyopia or bilateral high ametropia) induced by the usual stereotests. Longitudinal studies demonstrate that correction of high refractive errors decreases the risk of amblyopia and does not impede emmetropization. The validity of various photoscreening and videoscreening procedures for detecting refractive errors in infants prior to the onset of strabismus or amblyopia, as well as alternatives to conventional occlusion therapy, is discussed.
Experimental study of thermoacoustic effects on a single plate Part I: Temperature fields
NASA Astrophysics Data System (ADS)
Wetzel, M.; Herman, C.
The thermal interaction between a heated solid plate and the acoustically driven working fluid was investigated by visualizing and quantifying the temperature fields in the neighbourhood of the solid plate. A combination of holographic interferometry and high-speed cinematography was applied in the measurements. A better knowledge of these temperature fields is essential to develop systematic design methodologies for heat exchangers in oscillatory flows. The difference between heat transfer in oscillatory flows with zero mean velocity and steady-state flows is demonstrated in the paper. Instead of heat transfer from a heated solid surface to the colder bulk fluid, the visualized temperature fields indicated that heat was transferred from the working fluid into the stack plate at the edge of the plate. In the experiments, the thermoacoustic effect was visualized through the temperature measurements. A novel evaluation procedure that accounts for the influence of the acoustic pressure variations on the refractive index was applied to accurately reconstruct the high-speed, two-dimensional oscillating temperature distributions.
Rolinski, Michal; Zokaei, Nahid; Baig, Fahd; Giehl, Kathrin; Quinnell, Timothy; Zaiwalla, Zenobia; Mackay, Clare E; Husain, Masud; Hu, Michele T M
2016-01-01
Individuals with REM sleep behaviour disorder are at significantly higher risk of developing Parkinson's disease. Here we examined visual short-term memory deficits--long associated with Parkinson's disease--in patients with REM sleep behaviour disorder without Parkinson's disease using a novel task that measures recall precision. Visual short-term memory for sequentially presented coloured bars of different orientation was assessed in 21 patients with polysomnography-proven idiopathic REM sleep behaviour disorder, 26 cases with early Parkinson's disease and 26 healthy controls. Three tasks using the same stimuli controlled for attentional filtering ability, sensorimotor and temporal decay factors. Both patients with REM sleep behaviour disorder and Parkinson's disease demonstrated a deficit in visual short-term memory, with recall precision significantly worse than in healthy controls with no deficit observed in any of the control tasks. Importantly, the pattern of memory deficit in both patient groups was specifically explained by an increase in random responses. These results demonstrate that it is possible to detect the signature of memory impairment associated with Parkinson's disease in individuals with REM sleep behaviour disorder, a condition associated with a high risk of developing Parkinson's disease. The pattern of visual short-term memory deficit potentially provides a cognitive marker of 'prodromal' Parkinson's disease that might be useful in tracking disease progression and for disease-modifying intervention trials. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain.
Imaging Carbon Nanotubes in High Performance Polymer Composites via Magnetic Force Microscope
NASA Technical Reports Server (NTRS)
Lillehei, Peter T.; Park, Cheol; Rouse, Jason H.; Siochi, Emilie J.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Application of carbon nanotubes as reinforcement in structural composites is dependent on the efficient dispersion of the nanotubes in a high performance polymer matrix. The characterization of such dispersion is limited by the lack of available tools to visualize the quality of the matrix/carbon nanotube interaction. The work reported herein demonstrates the use of magnetic force microscopy (MFM) as a promising technique for characterizing the dispersion of nanotubes in a high performance polymer matrix.
Aural, visual, and pictorial stimulus formats in false recall.
Beauchamp, Heather M
2002-12-01
The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.
Echocardiographic detection of ventricular septal defects in large animals.
Pipers, F S; Reef, V; Wilson, J
1985-10-15
Ventricular septal defects in a foal, a 2-year-old filly, and 2 calves were demonstrated with M-mode and two-dimensional real-time echocardiography. The studies were performed with the animals unsedated, either standing or in lateral recumbency. Cardiac windows were located between the 4th and 7th intercostal spaces, approximately at the level of the olecranon. In each case, the septal defect was visualized high in the membranous portion of the interventricular septum. Defects were visualized by use of sector scanning or linear-array ultrasonic equipment, with transducer frequencies of 2.25 to 3.5 MHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the increasing number of data dimensions and data objects is presenting tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to integrate their analyses directly with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges, this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams, enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
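Beam-finding queries of the kind described (select particles above a momentum threshold inside a spatial window) can be illustrated with simple boolean masks. The sketch below uses NumPy on hypothetical particle arrays; the real system relies on FastBit bitmap indexes for interactive performance, which this analogue does not reproduce.

```python
import numpy as np

# Hypothetical particle arrays from one simulation time step
rng = np.random.default_rng(1)
px = rng.normal(0.0, 1.0, 1_000_000)    # longitudinal momentum (arbitrary units)
x = rng.uniform(0.0, 30.0, 1_000_000)   # longitudinal position (arbitrary units)

# Select high-momentum "beam" particles inside a spatial window
beam = (px > 4.0) & (x > 10.0) & (x < 20.0)
print(beam.sum(), "candidate beam particles")
```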
Rajjoub, Raneem D; Trimboli-Heidler, Carmelina; Packer, Roger J; Avery, Robert A
2015-01-01
To determine the intra- and intervisit reproducibility of circumpapillary retinal nerve fiber layer (RNFL) thickness measures using eye tracking-assisted spectral-domain optical coherence tomography (SD OCT) in children with nonglaucomatous optic neuropathy. Prospective longitudinal study. Circumpapillary RNFL thickness measures were acquired with SD OCT using the eye-tracking feature at 2 separate study visits. Children with normal and abnormal vision (visual acuity ≥ 0.2 logMAR above normal and/or visual field loss) who demonstrated clinical and radiographic stability were enrolled. Intra- and intervisit reproducibility was calculated for the global average and 9 anatomic sectors by calculating the coefficient of variation and intraclass correlation coefficient. Forty-two subjects (median age 8.6 years, range 3.9-18.2 years) met inclusion criteria and contributed 62 study eyes. Both the abnormal and normal vision cohort demonstrated the lowest intravisit coefficient of variation for the global RNFL thickness. Intervisit reproducibility remained good for those with normal and abnormal vision, although small but statistically significant increases in the coefficient of variation were observed for multiple anatomic sectors in both cohorts. The magnitude of visual acuity loss was significantly associated with the global (β = 0.026, P < .01) and temporal sector coefficient of variation (β = 0.099, P < .01). SD OCT with eye tracking demonstrates highly reproducible RNFL thickness measures. Subjects with vision loss demonstrate greater intra- and intervisit variability than those with normal vision. Copyright © 2015 Elsevier Inc. All rights reserved.
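Reproducibility in such studies is typically summarized by the within-subject coefficient of variation (standard deviation of repeated scans divided by their mean). A minimal sketch follows, with hypothetical RNFL thickness values rather than the study's data.

```python
import numpy as np

def coefficient_of_variation(measures):
    """Within-subject CV (%) for repeated measurements of one eye."""
    measures = np.asarray(measures, dtype=float)
    return 100.0 * measures.std(ddof=1) / measures.mean()

# Example: three repeated global RNFL thickness scans (µm) from one visit
print(round(coefficient_of_variation([104.2, 103.5, 105.1]), 2))
```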
Woolgar, Alexandra; Williams, Mark A; Rich, Anina N
2015-04-01
Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.
Parks, Nathan A; Hilimire, Matthew R; Corballis, Paul M
2011-05-01
The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were "tagged" by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
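Frequency tagging is usually analyzed by estimating spectral amplitude at each tagging frequency. The sketch below shows one way to do that in Python with an FFT, using the tagging frequencies quoted above (2.5, 8.5, and 40 Hz); the synthetic data and sampling rate are assumptions, not the study's recordings.

```python
import numpy as np

def tagged_amplitudes(eeg, fs, freqs=(2.5, 8.5, 40.0)):
    """Relative spectral amplitude at each tagged frequency.
    eeg: 1-D voltage trace; fs: sampling rate in Hz."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) / n  # windowed amplitude spectrum
    bins = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(bins - f))] for f in freqs}

# Synthetic example: 2 s of data at 500 Hz containing an 8.5 Hz component plus noise
fs = 500
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 8.5 * t) + 0.2 * np.random.randn(t.size)
print(tagged_amplitudes(eeg, fs))
```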
The effects of alcohol intoxication on attention and memory for visual scenes.
Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C
2013-01-01
This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.
NASA Astrophysics Data System (ADS)
Lee, Ai Cheng; Ye, Jian-Shan; Ngin Tan, Swee; Poenar, Daniel P.; Sheu, Fwu-Shan; Kiat Heng, Chew; Meng Lim, Tit
2007-11-01
A novel carbon nanotube (CNT) derived label capable of dramatic signal amplification of nucleic acid detection and direct visual detection of target hybridization has been developed. Highly sensitive colorimetric detection of human acute lymphocytic leukemia (ALL) related oncogene sequences amplified by the novel CNT-based label was demonstrated. Atomic force microscope (AFM) images confirmed that a monolayer of horseradish peroxidase and detection probe molecules was immobilized along the carboxylated CNT carrier. The resulting CNT labels significantly enhanced the nucleic acid assay sensitivity by at least 1000 times compared to that of conventional labels used in enzyme-linked oligosorbent assay (ELOSA). An excellent detection limit of 1 × 10⁻¹² M (60 × 10⁻¹⁸ mol in 60 µl) and a four-order wide dynamic range of target concentration were achieved. Hybridizations using these labels were coupled to a concentration-dependent formation of visible dark aggregates. Targets can thus be detected simply with visual inspection, eliminating the need for expensive and sophisticated detection systems. The approach holds promise for ultrasensitive and low cost visual inspection and colorimetric nucleic acid detection in point-of-care and early disease diagnostic application.
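Detection limits for assays like this are often estimated from a linear calibration curve as 3 × (standard deviation of the blank) / slope. A minimal sketch with entirely hypothetical numbers, not the reported calibration data:

```python
import numpy as np

def limit_of_detection(concentrations, signals, blank_signals):
    """LOD = 3 * (SD of blank replicates) / slope of the linear calibration curve."""
    slope, _ = np.polyfit(concentrations, signals, deg=1)
    return 3.0 * np.std(blank_signals, ddof=1) / slope

# Hypothetical calibration in picomolar units
conc = np.array([10, 50, 100, 500, 1000])       # pM
sig = np.array([0.8, 3.9, 8.1, 40.2, 79.5])     # arbitrary fluorescence units
blank = np.array([0.05, 0.03, 0.06, 0.04, 0.05])
print(round(limit_of_detection(conc, sig, blank), 2), "pM")
```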
Multi-modal distraction: insights from children's limited attention.
Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia
2015-03-01
How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.
Brailsford, Richard; Catherwood, Di; Tyson, Philip J; Edgar, Graham
2014-01-01
Attentional biases in anxiety disorders have been assessed primarily using three types of experiment: the emotional Stroop task, the probe-detection task, and variations of the visual search task. It is proposed that the inattentional blindness procedure has the ability to overcome limitations of these paradigms in regard to identifying the components of attentional bias. Three experiments examined attentional responding to spider images in individuals with low and moderate to high spider fear. The results demonstrate that spider fear causes a bias in the engage component of visual attention and this is specific to stimuli presented in the left visual field (i.e., to the right hemisphere). The implications of the results are discussed and recommendations for future research are made.
Fisher, Anna V; Godwin, Karrie E; Seltman, Howard
2014-07-01
A large body of evidence supports the importance of focused attention for encoding and task performance. Yet young children with immature regulation of focused attention are often placed in elementary-school classrooms containing many displays that are not relevant to ongoing instruction. We investigated whether such displays can affect children's ability to maintain focused attention during instruction and to learn the lesson content. We placed kindergarten children in a laboratory classroom for six introductory science lessons, and we experimentally manipulated the visual environment in the classroom. Children were more distracted by the visual environment, spent more time off task, and demonstrated smaller learning gains when the walls were highly decorated than when the decorations were removed. © The Author(s) 2014.
Fear improves mental rotation of low-spatial-frequency visual representation.
Borst, Grégoire
2013-10-01
Previous studies have demonstrated that the brief presentation of a fearful face improves not only low-level visual processing, such as contrast and orientation sensitivity, but also visuospatial processing. In the present study, we investigated whether fear improves mental rotation efficiency (i.e., the mental rotation rate) because of the effect of fear on the sensitivity of magnocellular neurons. We asked 2 groups of participants to perform a mental rotation task with either low-pass or high-pass filtered 3-dimensional objects. Following the presentation of a fearful face, participants mentally rotated objects faster compared with when a neutral face was presented, but only for low-pass filtered objects. The results suggest that fear improves mental rotation efficiency by increasing sensitivity to motion-related visual information within the magnocellular pathway.
Adjustable typography: an approach to enhancing low vision text accessibility.
Arditi, Aries
2004-04-15
Millions of people have low vision, a disability condition caused by uncorrectable or partially correctable disorders of the eye. The primary goal of low vision rehabilitation is increasing access to printed material. This paper describes how adjustable typography, a computer graphic approach to enhancing text accessibility, can play a role in this process, by allowing visually-impaired users to customize fonts to maximize legibility according to their own visual needs. Prototype software and initial testing of the concept is described. The results show that visually-impaired users tend to produce a variety of very distinct fonts, and that the adjustment process results in greatly enhanced legibility. But this initial testing has not yet demonstrated increases in legibility over and above the legibility of highly legible standard fonts such as Times New Roman.
Musical Interfaces: Visualization and Reconstruction of Music with a Microfluidic Two-Phase Flow
Mak, Sze Yi; Li, Zida; Frere, Arnaud; Chan, Tat Chuen; Shum, Ho Cheung
2014-01-01
Detection of sound waves in fluids has rarely been realized because approaches to visualize the very minute sound-induced fluid motion are lacking. In this paper, we demonstrate the first direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interfaces respond to sound of different frequency and amplitude robustly with sufficiently precise time resolution for the recording of musical notes and even subsequent reconstruction with high fidelity. Our work shows the possibility of sensing and transmitting vibrations as tiny as those induced by sound. This robust control of the interfacial dynamics enables a platform for investigating the mechanical properties of microstructures and for studying frequency-dependent phenomena, for example, in biological systems. PMID:25327509
NASA Astrophysics Data System (ADS)
Liao, Steve M.; Gregg, Nick M.; White, Brian R.; Zeff, Benjamin W.; Bjerkaas, Katelin A.; Inder, Terrie E.; Culver, Joseph P.
2010-03-01
The neurodevelopmental outcome of neonatal intensive care unit (NICU) infants is a major clinical concern, with many infants displaying neurobehavioral deficits in childhood. Functional neuroimaging may provide early recognition of neural deficits in high-risk infants. Near-infrared spectroscopy (NIRS) has the advantage of providing functional neuroimaging in infants at the bedside. However, limitations in traditional NIRS have included contamination from superficial vascular dynamics in the scalp. Furthermore, controversy exists over the nature of normal vascular responses in infants. To address these issues, we extend the use of novel high-density NIRS arrays with multiple source-detector distances and a superficial signal regression technique to infants. Evaluations of healthy term-born infants within the first three days of life are performed without sedation using a visual stimulus. We find that the regression technique significantly improves brain activation signal quality. Furthermore, in six out of eight infants, both oxy- and total hemoglobin increase while deoxyhemoglobin decreases, suggesting that, at term, the neurovascular coupling in the visual cortex is similar to that found in healthy adults. These results demonstrate the feasibility of using high-density NIRS arrays in infants to improve signal quality through superficial signal regression, and provide a foundation for further development of high-density NIRS as a clinical tool.
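Superficial (short-separation) signal regression removes the scalp contribution shared between a short source-detector channel and a long channel. A minimal sketch of that regression follows, with simulated signals and hypothetical variable names standing in for real NIRS data.

```python
import numpy as np

def superficial_regression(long_chan, short_chan):
    """Estimate the scalp contribution from a short source-detector channel by
    ordinary least squares and return the residual (brain-weighted) signal."""
    X = np.column_stack([short_chan, np.ones_like(short_chan)])
    beta, *_ = np.linalg.lstsq(X, long_chan, rcond=None)
    return long_chan - X @ beta

# Synthetic example: a small cortical response buried in shared superficial drift
t = np.linspace(0, 60, 600)
scalp = 0.5 * np.sin(2 * np.pi * 0.1 * t)
brain = 0.1 * (np.sin(2 * np.pi * 0.05 * t) > 0)
long_chan = brain + scalp + 0.01 * np.random.randn(t.size)
short_chan = scalp + 0.01 * np.random.randn(t.size)
cleaned = superficial_regression(long_chan, short_chan)
```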
NASA Astrophysics Data System (ADS)
Chen, Xiaochun; Yu, Shaoming; Yang, Liang; Wang, Jianping; Jiang, Changlong
2016-07-01
The instant and on-site detection of trace aqueous fluoride ions is still a challenge for environmental monitoring and protection. This work demonstrates a new analytical method and its utility of a paper sensor for visual detection of F- on the basis of the fluorescence resonance energy transfer (FRET) between photoluminescent graphene oxide (GO) and silver nanoparticles (AgNPs) through the formation of cyclic esters between phenylborinic acid and diol. The fluorescence of GO was quenched by the AgNPs, and trace F- can recover the fluorescence of the quenched photoluminescent GO. The increase in fluorescence intensity is proportional to the concentration of F- in the range of 0.05-0.55 nM, along with a limit of detection (LOD) as low as 9.07 pM. Following the sensing mechanism, a paper-based sensor for the visual detection of aqueous F- has been successfully developed. The paper sensor showed high sensitivity for aqueous F-, and the LOD could reach as low as 0.1 μM as observed by the naked eye. The very simple and effective strategy reported here could be extended to the visual detection of a wide range of analytes in the environment by the construction of highly efficient FRET nanoprobes.
Neural Correlates of Individual Differences in Infant Visual Attention and Recognition Memory
Reynolds, Greg D.; Guy, Maggie W.; Zhang, Dantong
2010-01-01
Past studies have identified individual differences in infant visual attention based upon peak look duration during initial exposure to a stimulus. Colombo and colleagues (e.g., Colombo & Mitchell, 1990) found that infants that demonstrate brief visual fixations (i.e., short lookers) during familiarization are more likely to demonstrate evidence of recognition memory during subsequent stimulus exposure than infants that demonstrate long visual fixations (i.e., long lookers). The current study utilized event-related potentials to examine possible neural mechanisms associated with individual differences in visual attention and recognition memory for 6- and 7.5-month-old infants. Short- and long-looking infants viewed images of familiar and novel objects during ERP testing. There was a stimulus type by looker type interaction at temporal and frontal electrodes on the late slow wave (LSW). Short lookers demonstrated a LSW that was significantly greater in amplitude in response to novel stimulus presentations. No significant differences in LSW amplitude were found based on stimulus type for long lookers. These results indicate deeper processing and recognition memory of the familiar stimulus for short lookers. PMID:21666833
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
We present fascinating simple demonstration experiments recorded with high-speed cameras in the field of fluid dynamics. Examples include oscillations of falling droplets, effects happening upon impact of a liquid droplet into a liquid, the disintegration of extremely large droplets in free fall and the consequences of incompressibility. (Contains…
Working Memory Encoding Delays Top-Down Attention to Visual Cortex
ERIC Educational Resources Information Center
Scalf, Paige E.; Dux, Paul E.; Marois, Rene
2011-01-01
The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. "Cognitive Psychology, 36", 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to…
In-Service Teacher Education: Asking Questions for Higher Order Thinking in Visual Literacy
ERIC Educational Resources Information Center
Moodley, Visvaganthie
2013-01-01
The kinds of questions teachers ask may thwart or promote learner high-order thinking; teachers themselves must have expertise in questioning skills to promote higher order cognition among learners. Drawing on experiential knowledge of assessment, and as an English-teaching professional development programme (PDP) facilitator, I demonstrate that…
Serial functional imaging poststroke reveals visual cortex reorganization.
Brodtmann, Amy; Puce, Aina; Darby, David; Donnan, Geoffrey
2009-02-01
Visual cortical reorganization following injury remains poorly understood. The authors performed serial functional magnetic resonance imaging (fMRI) on patients with visual cortex infarction to evaluate early and late striate, ventral, and dorsal extrastriate cortical activation. Patients were studied with fMRI within 10 days and at 6 months. The authors used a high-level visual activation task designed to activate the ventral extrastriate cortex. These data were compared to those of age-appropriate healthy control participants. The results from 24 healthy control individuals (mean age 65.7 +/- SE 3.6 years, range 32-89) were compared to those from 5 stroke patients (mean age 73.8 +/- SE 7 years, range 49-86). Patients had infarcts involving the striate and ventral extrastriate cortex. Patient activation patterns were markedly different from those of controls. Bilateral striate and ventral extrastriate activation was reduced at both sessions, but dorsal extrastriate activated voxel counts remained comparable to controls. Conversely, mean percent magnetic resonance signal change increased in dorsal sites. These data provide strong evidence of bilateral poststroke functional depression of striate and ventral extrastriate cortices. Possible utilization or surrogacy of the dorsal visual system was demonstrated following stroke. This activity could provide a target for novel visual rehabilitation therapies.
Eye size and visual acuity influence vestibular anatomy in mammals.
Kemp, Addison D; Christopher Kirk, E
2014-04-01
The semicircular canals of the inner ear detect head rotations and trigger compensatory movements that stabilize gaze and help maintain visual fixation. Mammals with large eyes and high visual acuity require precise gaze stabilization mechanisms because they experience diminished visual functionality at low thresholds of uncompensated motion. Because semicircular canal radius of curvature is a primary determinant of canal sensitivity, species with large canal radii are expected to be capable of more precise gaze stabilization than species with small canal radii. Here, we examine the relationship between mean semicircular canal radius of curvature, eye size, and visual acuity in a large sample of mammals. Our results demonstrate that eye size and visual acuity both explain a significant proportion of the variance in mean canal radius of curvature after statistically controlling for the effects of body mass and phylogeny. These findings suggest that variation in mean semicircular canal radius of curvature among mammals is partly the result of selection for improved gaze stabilization in species with large eyes and acute vision. Our results also provide a possible functional explanation for the small semicircular canal radii of fossorial mammals and plesiadapiforms. Copyright © 2014 Wiley Periodicals, Inc.
Visual function at altitude under night vision assisted conditions.
Vecchi, Diego; Morgagni, Fabio; Guadagno, Anton G; Lucertini, Marco
2014-01-01
Hypoxia, even mild, is known to produce negative effects on visual function, including decreased visual acuity and sensitivity to contrast, mostly in low light. This is of special concern when night vision devices (NVDs) are used during flight because they also provide poor images in terms of resolution and contrast. While wearing NVDs in low light conditions, 16 healthy male aviators were exposed to a simulated altitude of 12,500 ft in a hypobaric chamber. Snellen visual acuity decreased in normal light from 28.5 +/- 4.2/20 (normoxia) to 37.2 +/- 7.4/20 (hypoxia) and, in low light, from 33.8 +/- 6.1/20 (normoxia) to 42.2 +/- 8.4/20 (hypoxia), both statistically significant changes. An association between blood oxygen saturation and visual acuity was found, but it did not reach statistical significance. No changes occurred in terms of sensitivity to contrast. Our data demonstrate that mild hypoxia is capable of affecting visual acuity and the photopic/high mesopic range of NVD-aided vision. This may be due to several factors, including the sensitivity to hypoxia of photoreceptors and other retinal cells. Contrast sensitivity is possibly preserved under NVD-aided vision due to its dependency on the goggles' gain.
Recovery of Visual Search following Moderate to Severe Traumatic Brain Injury
Schmitter-Edgecombe, Maureen; Robertson, Kayela
2015-01-01
Introduction Deficits in attentional abilities can significantly impact rehabilitation and recovery from traumatic brain injury (TBI). This study investigated the nature and recovery of pre-attentive (parallel) and attentive (serial) visual search abilities after TBI. Methods Participants were 40 individuals with moderate to severe TBI who were tested following emergence from post-traumatic amnesia and approximately 8 months post-injury, as well as 40 age- and education-matched controls. Pre-attentive (automatic) and attentive (controlled) visual search situations were created by manipulating the saliency of the target item amongst distractor items in visual displays. The relationship between pre-attentive and attentive visual search rates and follow-up community integration were also explored. Results The results revealed intact parallel (automatic) processing skills in the TBI group both post-acutely and at follow-up. In contrast, when attentional demands on visual search were increased by reducing the saliency of the target, the TBI group demonstrated poorer performances compared to the control group both post-acutely and 8 months post-injury. Neither pre-attentive nor attentive visual search slope values correlated with follow-up community integration. Conclusions These results suggest that utilizing intact pre-attentive visual search skills during rehabilitation may help to reduce high mental workload situations, thereby improving the rehabilitation process. For example, making commonly used objects more salient in the environment should increase reliance on more automatic visual search processes and reduce visual search time for individuals with TBI. PMID:25671675
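Pre-attentive versus attentive search is conventionally indexed by the slope of response time against display size (ms/item): near-zero slopes suggest parallel search, steeper slopes suggest serial search. A minimal sketch with hypothetical condition means, not data from the study:

```python
import numpy as np

def search_slope(set_sizes, mean_rts_ms):
    """Slope (ms/item) and intercept (ms) of RT regressed on display size."""
    slope, intercept = np.polyfit(set_sizes, mean_rts_ms, deg=1)
    return slope, intercept

# Hypothetical condition means
print(search_slope([4, 8, 16], [520, 530, 545]))   # shallow slope: ~parallel search
print(search_slope([4, 8, 16], [560, 720, 1050]))  # steep slope: ~serial search
```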
Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F
2007-01-01
Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818
Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F
2007-10-15
Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.
Are all children with visual impairment known to the eye clinic?
Pilling, Rachel F; Outhwaite, Louise
2017-04-01
There is a growing body of evidence that children with special needs are more likely to have visual problems, whether visual impairment, visual processing problems or refractive error. While there is widespread provision of vision screening in mainstream schools, patchy provision exists in special schools. The aim of the study was to determine the unmet need and undiagnosed visual problems of children attending primary special schools in Bradford, England. Children attending special schools who were not currently under the care of the hospital eye service were identified. Assessments of visual function and refractive error were undertaken on site at the schools by an experienced orthoptist and/or paediatric ophthalmologist. A total of 157 children were identified as eligible for the study, with a mean age of 7.8 years (range 4-12 years). Of these, 33% were found to have visual impairment as defined by the WHO, and six children were eligible for severe sight impairment certification. The study demonstrates significant unmet need or undiagnosed visual impairment in a high-risk population. It also highlights the poor uptake of hospital eye care for children identified with significant visual needs and suggests the importance of providing in-school assessment and support, including refractive correction, to fully realise the benefits of a visual assessment programme. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reza Akrami, Seyed Mohammad; Miyata, Kazuki; Asakawa, Hitoshi
High-speed atomic force microscopy has attracted much attention due to its unique capability of visualizing nanoscale dynamic processes at a solid/liquid interface. However, its usability and resolution have yet to be improved. As one of the solutions to this issue, here we present a design for a high-speed Z-tip scanner with a screw holding mechanism. We perform a detailed comparison between designs with different actuator sizes and screw arrangements by finite element analysis. Based on the design giving the best performance, we have developed a Z-tip scanner and measured its performance. The measured frequency response of the scanner shows a flat response up to ∼10 kHz. This high frequency response allows us to achieve wideband tip-sample distance regulation. We demonstrate the applicability of the scanner to high-speed atomic-resolution imaging by visualizing the atomic-scale calcite crystal dissolution process in water at 2 s/frame.
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.
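The core idea of weighting bottom-up saliency by top-down context to pick the most plausibly attended object can be illustrated very simply. The sketch below is a toy scoring rule in Python with hypothetical object names and scores, not the GPU saliency-map implementation described in the paper.

```python
def most_attended(objects):
    """objects: list of dicts with 'name', 'saliency' (bottom-up, 0-1)
    and 'context' (top-down weight from gaze/task history, 0-1)."""
    def score(obj):
        # Multiplicative combination: an object must be both salient and task-relevant
        return obj["saliency"] * obj["context"]
    return max(objects, key=score)

candidates = [
    {"name": "door",   "saliency": 0.7, "context": 0.2},
    {"name": "poster", "saliency": 0.5, "context": 0.9},
]
print(most_attended(candidates)["name"])   # "poster"
```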
Moreira Neto, Carlos Augusto; Moreira Junior, Carlos Augusto
2013-01-01
To evaluate 5 patients with serous macular detachment due to optic disc pit who underwent pars plana vitrectomy and were followed for at least 7 years. Patients underwent pars plana vitrectomy, posterior hyaloid removal, autologous serum injection and gas-fluid exchange, without laser photocoagulation, and were evaluated pre- and post-operatively with visual acuity and Amsler grid testing, retinography, and, recently, with autofluorescence imaging and high resolution OCT. All 5 eyes improved visual acuity significantly following the surgical procedure, maintaining good vision throughout the follow-up period. Mean pre-operative visual acuity was 20/400 and final visual acuity was 20/27, with a mean follow-up time of 13.6 years. No recurrences of serous detachment were observed. OCT examinations demonstrated an attached retina up to the margin of the pit. Serous macular detachments due to optic disc pits were adequately treated with pars plana vitrectomy and gas-fluid exchange, without the need for laser photocoagulation, maintaining excellent visual results for a long period of time.
From optics to attention: visual perception in barn owls.
Harmening, Wolf M; Wagner, Hermann
2011-11-01
Barn owls are nocturnal predators which have evolved specific sensory and morphological adaptations to a life in dim light. Here, some of the most fundamental properties of spatial vision in barn owls are reviewed. The eye, with its tubular shape, is rigidly integrated in the skull so that eye movements are very much restricted. The eyes are oriented frontally, allowing for a large binocular overlap. Accommodation, but not pupil dilation, is coupled between the two eyes. The retina is rod dominated and lacks a visible fovea. Retinal ganglion cells form a marked region of highest density that extends to a horizontally oriented visual streak. Behavioural visual acuity and contrast sensitivity are poor, although the optical quality of the ocular media is excellent. A low f-number allows high image quality at low light levels. Vernier acuity was found to be a hyperacute percept. Owls have global stereopsis with hyperacute stereo acuity thresholds. Neurons of the visual Wulst are sensitive to binocular disparities. Orientation-based saliency was demonstrated in a visual-search experiment, and higher cognitive abilities were shown when the owls were able to use illusory contours for object discrimination.
Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki
2013-01-01
Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, in the typically developed group, the right posterior superior temporal sulcus, a brain region that processes gaze as a social signal, showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.
Tested Demonstrations: Visualization of Buffer Action and the Acidifying Effect of Carbon Dioxide.
ERIC Educational Resources Information Center
Gilbert, George L., Ed.
1985-01-01
Presents a buffer demonstration which features visualization of the effects of carbon dioxide on pH. Background information, list of materials needed, procedures used, and a discussion of results obtained are included. (JN)
Beauty and the beholder: the role of visual sensitivity in visual preference
Spehar, Branka; Wong, Solomon; van de Klundert, Sarah; Lui, Jessie; Clifford, Colin W. G.; Taylor, Richard P.
2015-01-01
For centuries, the essence of aesthetic experience has remained one of the most intriguing mysteries for philosophers, artists, art historians and scientists alike. Recently, views emphasizing the link between aesthetics, perception and brain function have become increasingly prevalent (Ramachandran and Hirstein, 1999; Zeki, 1999; Livingstone, 2002; Ishizu and Zeki, 2013). The link between art and the fractal-like structure of natural images has also been highlighted (Spehar et al., 2003; Graham and Field, 2007; Graham and Redies, 2010). Motivated by these claims and our previous findings that humans display a consistent preference across various images with fractal-like statistics, here we explore the possibility that observers’ preference for visual patterns might be related to their sensitivity for such patterns. We measure sensitivity to simple visual patterns (sine-wave gratings varying in spatial frequency and random textures with varying scaling exponent) and find that they are highly correlated with visual preferences exhibited by the same observers. Although we do not attempt to offer a comprehensive neural model of aesthetic experience, we demonstrate a strong relationship between visual sensitivity and preference for simple visual patterns. Broadly speaking, our results support assertions that there is a close relationship between aesthetic experience and the sensory coding of natural stimuli. PMID:26441611
Nielsen, Simon; Wilms, L Inge
2014-01-01
We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed as compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach to structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.
Predicting prescribed magnification.
Wolffsohn, James S; Eperjesi, Frank
2004-07-01
To determine the best method of estimating the optimum magnification needed by visually impaired patients. The magnification of low vision aids prescribed to 187 presbyopic visually impaired patients for reading newspapers or books was compared with logMAR distance and near acuity (at 25 cm) and with the magnification predicted by +4 D step near additions. Distance letter (r = 0.58) and near word visual acuity (r = 0.67) were strongly correlated with the prescribed magnification, as were predictive formulae based on these measures. Prediction using the effect of proximal magnification resulted in a similar correlation (r = 0.67), and prediction was poorer in those who did not benefit from proximal magnification. The difference between prescribed and predicted magnification was found to be unrelated to the condition causing visual impairment (F = 2.57, p = 0.08), the central visual field status (F = 0.57, p = 0.57) and patient psychology (F = 0.44, p = 0.51), but was higher in those prescribed stand magnifiers than in those prescribed high near additions (F = 5.99, p < 0.01). The magnification necessary to perform normal visual tasks can be predicted in the majority of cases using visual acuity measures, although measuring the effect of proximal magnification demonstrates the effect of stronger glasses and identifies those in whom prescribed magnification is more difficult to predict.
A new metaphor for projection-based visual analysis and data exploration
NASA Astrophysics Data System (ADS)
Schreck, Tobias; Panse, Christian
2007-01-01
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
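As a minimal sketch of the hull metaphor described above (the PCA projection and toy data are assumptions; the paper's approach works with a variety of projection techniques), one can project the data to 2D and draw one convex hull per class instead of individual point symbols:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial import ConvexHull
    from sklearn.decomposition import PCA  # stand-in projection; any projection works

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc, 0.5, size=(40, 5)) for loc in (0.0, 2.0, 4.0)])
    labels = np.repeat([0, 1, 2], 40)

    xy = PCA(n_components=2).fit_transform(X)      # projected space
    fig, ax = plt.subplots()
    for cls in np.unique(labels):
        pts = xy[labels == cls]
        hull = ConvexHull(pts)                     # aggregate the class as one hull
        poly = pts[hull.vertices]
        ax.fill(poly[:, 0], poly[:, 1], alpha=0.3, label='class %d' % cls)
    ax.legend()
    plt.show()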
Inspection of Pole-Like Structures Using a Visual-Inertial Aided VTOL Platform with Shared Autonomy
Sa, Inkyu; Hrabar, Stefan; Corke, Peter
2015-01-01
This paper presents an algorithm and a system for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task that is time-consuming, dangerous and expensive. Recently, micro VTOL platforms (i.e., quad-, hexa- and octa-rotors) have been rapidly gaining interest in research, military and even public domains. The unmanned, low-cost and VTOL properties of these platforms make them ideal for situations where inspection would otherwise be time-consuming and/or hazardous to humans. There are, however, challenges involved with developing such an inspection system, for example, flying in close proximity to a target while maintaining a fixed stand-off distance from it, being immune to wind gusts and exchanging useful information with the remote user. To overcome these challenges, we require accurate and high-update-rate state estimation and high-performance controllers to be implemented onboard the vehicle. Ease of control and a live video feed are required for the human operator. We demonstrate a VTOL platform that can operate at close quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. Two approaches are presented: Position-Based Visual Servoing (PBVS) using an Extended Kalman Filter (EKF) and estimator-free Image-Based Visual Servoing (IBVS). Both use monocular visual, inertial, and sonar data, allowing the approaches to be applied for indoor or GPS-impaired environments. We extensively compare the performances of PBVS and IBVS in terms of accuracy, robustness and computational costs. Results from simulations and indoor/outdoor (day and night) flight experiments demonstrate the system is able to successfully inspect and circumnavigate a vertical pole. PMID:26340631
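For intuition only, the stand-off behaviour that a position-based scheme ultimately drives can be sketched as a proportional controller; the pole position is assumed to come from the estimator (an EKF in the paper), and the gain and stand-off values below are invented:

    import numpy as np

    K_P = 0.8          # proportional gain (assumed)
    STANDOFF_M = 2.0   # desired stand-off distance to the pole (assumed)

    def velocity_command(pole_xy):
        """pole_xy: estimated (x, y) of the pole in the body frame, in metres.
        Returns a body-frame (vx, vy) command that closes or opens the range
        until the vehicle sits at STANDOFF_M from the pole."""
        p = np.asarray(pole_xy, dtype=float)
        rng = np.linalg.norm(p)
        los = p / rng                           # unit line-of-sight vector
        return K_P * (rng - STANDOFF_M) * los   # move along the line of sight

    print(velocity_command((4.0, 0.0)))   # -> [1.6, 0.]: fly towards the pole
    print(velocity_command((1.0, 0.0)))   # -> [-0.8, 0.]: back away to the stand-off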
Emotion based attentional priority for storage in visual short-term memory.
Simione, Luca; Calabrese, Lucia; Marucci, Francesco S; Belardinelli, Marta Olivetti; Raffone, Antonino; Maratos, Frances A
2014-01-01
A plethora of research demonstrates that the processing of emotional faces is prioritised over non-emotive stimuli when cognitive resources are limited (this is known as 'emotional superiority'). However, there is debate as to whether competition for processing resources results in emotional superiority per se, or more specifically, threat superiority. Therefore, to investigate prioritisation of emotional stimuli for storage in visual short-term memory (VSTM), we devised an original VSTM report procedure using schematic (angry, happy, neutral) faces in which processing competition was manipulated. In Experiment 1, display exposure time was manipulated to create competition between stimuli. Participants (n = 20) had to recall a probed stimulus from a set size of four under high (150 ms array exposure duration) and low (400 ms array exposure duration) perceptual processing competition. For the high competition condition (i.e. 150 ms exposure), results revealed an emotional superiority effect per se. In Experiment 2 (n = 20), we increased competition by manipulating set size (three versus five stimuli), whilst maintaining a constrained array exposure duration of 150 ms. Here, for the five-stimulus set size (i.e. maximal competition) only threat superiority emerged. These findings demonstrate attentional prioritisation for storage in VSTM for emotional faces. We argue that task demands modulated the availability of processing resources and consequently the relative magnitude of the emotional/threat superiority effect, with only threatening stimuli prioritised for storage in VSTM under more demanding processing conditions. Our results are discussed in light of models and theories of visual selection, and not only combine the two strands of research (i.e. visual selection and emotion), but highlight a critical factor in the processing of emotional stimuli is availability of processing resources, which is further constrained by task demands.
Steady-state visual evoked potentials as a research tool in social affective neuroscience
Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas
2017-01-01
Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
Style grammars for interactive visualization of architecture.
Aliaga, Daniel G; Rosen, Paul A; Bekins, Daniel R
2007-01-01
Interactive visualization of architecture provides a way to quickly visualize existing or novel buildings and structures. Such applications require both fast rendering and an effortless input regimen for creating and changing architecture using high-level editing operations that automatically fill in the necessary details. Procedural modeling and synthesis is a powerful paradigm that yields high data amplification and can be coupled with fast-rendering techniques to quickly generate plausible details of a scene without much or any user interaction. Previously, forward-generating procedural methods have been proposed where a procedure is explicitly created to generate particular content. In this paper, we present our work in inverse procedural modeling of buildings and describe how to use an extracted repertoire of building grammars to facilitate the visualization and quick modification of architectural structures and buildings. We demonstrate an interactive application where the user draws simple building blocks and, using our system, can automatically complete the building "in the style of" other buildings using view-dependent texture mapping or nonphotorealistic rendering techniques. Our system supports an arbitrary number of building grammars created from user-subdivided building models and captured photographs. Using only edit, copy, and paste metaphors, entire building styles can be altered and transferred from one building to another in a few operations, enhancing the ability to modify an existing architectural structure or to visualize a novel building in the style of the others.
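Purely to illustrate the forward (generative) side of a building grammar, the toy rules below expand a facade symbol into terminal elements; the inverse problem the paper tackles (extracting such grammars from subdivided models and photographs) is much harder, and these rules are invented:

    import random

    RULES = {
        'facade': lambda: ['floor'] * random.randint(2, 4),
        'floor': lambda: ['window', 'window', 'door_or_window'],
        'door_or_window': lambda: [random.choice(['door', 'window'])],
    }

    def expand(symbol):
        """Recursively rewrite a symbol until only terminal elements remain."""
        if symbol not in RULES:
            return [symbol]                 # terminal: window, door, ...
        out = []
        for child in RULES[symbol]():
            out.extend(expand(child))
        return out

    random.seed(1)
    print(expand('facade'))                 # e.g. ['window', 'window', 'door', ...]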
Visual detection of nucleic acids based on Mie scattering and the magnetophoretic effect.
Zhao, Zichen; Chen, Shan; Ho, John Kin Lim; Chieng, Ching-Chang; Chen, Ting-Hsuan
2015-12-07
Visual detection of nucleic acid biomarkers is a simple and convenient approach to point-of-care applications. However, issues of sensitivity and the handling of complex bio-fluids have posed challenges. Here we report on a visual method detecting nucleic acids using Mie scattering of polystyrene microparticles and the magnetophoretic effect. Magnetic microparticles (MMPs) and polystyrene microparticles (PMPs) were surface-functionalised with oligonucleotide probes, which can hybridise with target oligonucleotides in juxtaposition and lead to the formation of MMPs-targets-PMPs sandwich structures. Using an externally applied magnetic field, the magnetophoretic effect attracts the sandwich structure to the sidewall, which reduces the suspended PMPs and leads to a change in the light transmission via the Mie scattering. Based on the high extinction coefficient of the Mie scattering (∼3 orders of magnitude greater than that of the commonly used gold nanoparticles), our results showed the limit of detection to be 4 pM using a UV-Vis spectrometer or 10 pM by direct visual inspection. Meanwhile, we also demonstrated that this method is compatible with multiplex assays and detection in complex bio-fluids, such as whole blood or a pool of nucleic acids, without purification in advance. With a simplified operation procedure, low instrumentation requirement, high sensitivity and compatibility with complex bio-fluids, this method provides an ideal solution for visual detection of nucleic acids in resource-limited settings.
Visual Aid to Demonstrate Change of State and Gas Pressure with Temperature
ERIC Educational Resources Information Center
Ghaffari, Shahrokh
2011-01-01
Demonstrations are used in chemistry lectures to improve conceptual understanding by direct observation. The visual aid described here is designed to demonstrate the change in state of matter with the change of temperature and the change of pressure with temperature. Temperature is presented by the rate of airflow and pressure is presented by…
NASA Astrophysics Data System (ADS)
Herold, Julia; Abouna, Sylvie; Zhou, Luxian; Pelengaris, Stella; Epstein, David B. A.; Khan, Michael; Nattkemper, Tim W.
2009-02-01
In recent years, bioimaging has turned from qualitative measurements towards a high-throughput and high-content modality, providing multiple variables for each biological sample analyzed. We present a system which combines machine learning based semantic image annotation and visual data mining to analyze such new multivariate bioimage data. Machine learning is employed for automatic semantic annotation of regions of interest. The annotation is the prerequisite for a biological object-oriented exploration of the feature space derived from the image variables. With the aid of visual data mining, the obtained data can be explored simultaneously in the image as well as in the feature domain. Especially when little is known of the underlying data, for example in the case of exploring the effects of a drug treatment, visual data mining can greatly aid the process of data evaluation. We demonstrate how our system is used for image evaluation to obtain information relevant to diabetes study and the screening of new anti-diabetes treatments. Cells of the Islet of Langerhans and the whole pancreas in pancreas tissue samples are annotated, and object-specific molecular features are extracted from aligned multichannel fluorescence images. These are interactively evaluated for cell type classification in order to determine the cell number and mass. Only a few parameters need to be specified, which makes the system usable also for non-computer experts and allows for high-throughput analysis.
Direct imaging of isofrequency contours in photonic structures
Regan, E. C.; Igarashi, Y.; Zhen, B.; ...
2016-11-25
The isofrequency contours of a photonic crystal are important for predicting and understanding exotic optical phenomena that are not apparent from high-symmetry band structure visualizations. We demonstrate a method to directly visualize the isofrequency contours of high-quality photonic crystal slabs that show quantitatively good agreement with numerical results throughout the visible spectrum. Our technique relies on resonance-enhanced photon scattering from generic fabrication disorder and surface roughness, so it can be applied to general photonic and plasmonic crystals or even quasi-crystals. We also present an analytical model of the scattering process, which explains the observation of isofrequency contours in our technique. Furthermore, the isofrequency contours provide information about the characteristics of the disorder and therefore serve as a feedback tool to improve fabrication processes.
Plant photonics: application of optical coherence tomography to monitor defects and rots in onion
NASA Astrophysics Data System (ADS)
Meglinski, I. V.; Buranachai, C.; Terry, L. A.
2010-04-01
The incidence of physiological and/or pathological defects in many fresh produce types is still unacceptably high and accounts for a large proportion of waste. With increasing interest in food security, there remains strong demand for reliable and cost-effective technologies for the non-destructive screening of internal defects and rots, these being deemed unacceptable by consumers. It is well recognized that the internal defects and structure of turbid scattering media can be effectively visualized by using optical coherence tomography (OCT). In the present study, the high spatial resolution and advantages of OCT have been demonstrated for imaging the skins and outer laminae (concentric tissue layers) of intact whole onion bulbs, with a view to non-invasively visualizing the potential incidence/severity of internal defects.
Adaptation aftereffects in the perception of gender from biological motion.
Troje, Nikolaus F; Sadr, Javid; Geyer, Henning; Nakayama, Ken
2006-07-28
Human visual perception is highly adaptive. While this has been known and studied for a long time in domains such as color vision, motion perception, or the processing of spatial frequency, a number of more recent studies have shown that adaptation and adaptation aftereffects also occur in high-level visual domains like shape perception and face recognition. Here, we present data that demonstrate a pronounced aftereffect in response to adaptation to the perceived gender of biological motion point-light walkers. A walker that is perceived to be ambiguous in gender under neutral adaptation appears to be male after adaptation with an exaggerated female walker and female after adaptation with an exaggerated male walker. We discuss this adaptation aftereffect as a tool to characterize and probe the mechanisms underlying biological motion perception.
Filming the invisible - time-resolved visualization of compressible flows
NASA Astrophysics Data System (ADS)
Kleine, H.
2010-04-01
Essentially all processes in gasdynamics are invisible to the naked eye as they occur in a transparent medium. The task to observe them is further complicated by the fact that most of these processes are also transient, often with characteristic times that are considerably below the threshold of human perception. Both difficulties can be overcome by combining visualization methods that reveal changes in the transparent medium, and high-speed photography techniques that “stop” the motion of the flow. The traditional approach is to reconstruct a transient process from a series of single images, each taken in a different experiment at a different instant. This approach, which is still widely used today, can only be expected to give reliable results when the process is reproducible. Truly time-resolved visualization, which yields a sequence of flow images in a single experiment, has been attempted for more than a century, but many of the developed camera systems were characterized by a high level of complexity and limited quality of the results. Recent advances in digital high-speed photography have changed this situation and have provided the tools to investigate, with relative ease and in sufficient detail, the true development of a transient flow with characteristic time scales down to one microsecond. This paper discusses the potential and the limitations one encounters when using density-sensitive visualization techniques in time-resolved mode. Several examples illustrate how this approach can reveal and explain a number of previously undetected phenomena in a variety of highly transient compressible flows. It is demonstrated that time-resolved visualization offers numerous advantages which normally outweigh its shortcomings, mainly the often-encountered loss in resolution. Apart from the capability to track the location and/or shape of flow features in space and time, adequate time-resolved visualization allows one to observe the development of deliberately introduced near-isentropic perturbation wavelets. This new diagnostic tool can be used to qualitatively and quantitatively determine otherwise inaccessible thermodynamic properties of a compressible flow.
Recording Visual Evoked Potentials and Auditory Evoked P300 at 9.4T Static Magnetic Field
Hahn, David; Boers, Frank; Shah, N. Jon
2013-01-01
Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has shown a number of advantages that make this multimodal technique superior to fMRI alone. The feasibility of recording EEG at ultra-high static magnetic field up to 9.4T was recently demonstrated and promises to be implemented soon in fMRI studies at ultra-high magnetic fields. Recording visual evoked potentials is expected to be amongst the simplest tasks for simultaneous EEG/fMRI at ultra-high magnetic field due to the easy assessment of the visual cortex. Auditory evoked P300 measurements are of interest since it is believed that they represent the earliest stage of cognitive processing. In this study, we investigate the feasibility of recording visual evoked potentials and auditory evoked P300 in a 9.4T static magnetic field. For this purpose, EEG data were recorded from 26 healthy volunteers inside a 9.4T MR scanner using a 32-channel MR-compatible EEG system. Visual stimulation and an auditory oddball paradigm were presented in order to elicit event-related potentials (ERPs). Recordings made outside the scanner were performed using the same stimuli and EEG system for comparison purposes. We were able to retrieve visual P100 and auditory P300 evoked potentials at 9.4T static magnetic field after correction of the ballistocardiogram artefact using independent component analysis. The latencies of the ERPs recorded at 9.4T were not different from those recorded at 0T. The amplitudes of ERPs were higher at 9.4T when compared to recordings at 0T. Nevertheless, it seems that the increased amplitudes of the ERPs are due to the effect of the ultra-high field on the EEG recording system rather than to alteration of the intrinsic processes that generate the electrophysiological responses. PMID:23650538
Recording visual evoked potentials and auditory evoked P300 at 9.4T static magnetic field.
Arrubla, Jorge; Neuner, Irene; Hahn, David; Boers, Frank; Shah, N Jon
2013-01-01
Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has shown a number of advantages that make this multimodal technique superior to fMRI alone. The feasibility of recording EEG at ultra-high static magnetic field up to 9.4 T was recently demonstrated and promises to be implemented soon in fMRI studies at ultra-high magnetic fields. Recording visual evoked potentials is expected to be amongst the simplest tasks for simultaneous EEG/fMRI at ultra-high magnetic field due to the easy assessment of the visual cortex. Auditory evoked P300 measurements are of interest since it is believed that they represent the earliest stage of cognitive processing. In this study, we investigate the feasibility of recording visual evoked potentials and auditory evoked P300 in a 9.4 T static magnetic field. For this purpose, EEG data were recorded from 26 healthy volunteers inside a 9.4 T MR scanner using a 32-channel MR-compatible EEG system. Visual stimulation and an auditory oddball paradigm were presented in order to elicit event-related potentials (ERPs). Recordings made outside the scanner were performed using the same stimuli and EEG system for comparison purposes. We were able to retrieve visual P100 and auditory P300 evoked potentials at 9.4 T static magnetic field after correction of the ballistocardiogram artefact using independent component analysis. The latencies of the ERPs recorded at 9.4 T were not different from those recorded at 0 T. The amplitudes of ERPs were higher at 9.4 T when compared to recordings at 0 T. Nevertheless, it seems that the increased amplitudes of the ERPs are due to the effect of the ultra-high field on the EEG recording system rather than to alteration of the intrinsic processes that generate the electrophysiological responses.
Visual Fast Mapping in School-Aged Children with Specific Language Impairment
ERIC Educational Resources Information Center
Alt, Mary
2013-01-01
Purpose: To determine whether children with specific language impairment (SLI) demonstrate impaired visual fast mapping skills compared with unimpaired peers and to test components of visual working memory that may contribute to a visual working memory deficit. Methods: Fifty children (25 SLI) played 2 computer-based visual fast mapping games…
The role of working memory in auditory selective attention.
Dalton, Polly; Santangelo, Valerio; Spence, Charles
2009-11-01
A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.
Nguyen, My Hanh T; Witkin, Andre J; Reichel, Elias; Ko, Tony H; Fujimoto, James G; Schuman, Joel S; Duker, Jay S
2007-01-01
Histopathological studies of acute multiple evanescent white dot syndrome (MEWDS) have not been reported because of the transient and benign nature of the disease. Ultrahigh resolution optical coherence tomography (UHR-OCT), capable of high resolution in vivo imaging, offers a unique opportunity to visualize retinal microstructure in the disease. UHR-OCT images of the maculae of five patients with MEWDS were obtained and analyzed. Diagnosis was based on clinical presentation, examination, visual field testing, and angiography. UHR-OCT revealed disturbances in the photoreceptor inner/outer segment junction (IS/OS) in each of the five patients (six eyes) with MEWDS. In addition, thinning of the outer nuclear layer was seen in the case of recurrent MEWDS, suggesting that repeated episodes of MEWDS may result in photoreceptor atrophy. Subtle disruptions of the photoreceptor IS/OS are demonstrated in all eyes affected by MEWDS. UHR-OCT may be a useful adjunct to diagnosis and monitoring of MEWDS.
An interactive visualization tool for mobile objects
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuo
Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
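As a hedged illustration of two of the similarity measures named above (the dissertation's exact formulations are not reproduced here), locational similarity can be read as a mean point-wise distance and directional similarity as the mean agreement of successive headings:

    import numpy as np

    def locational_similarity(a, b):
        """Mean point-wise distance between two trajectories sampled at the same
        timestamps (lower = more similar in location)."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.linalg.norm(a - b, axis=1).mean())

    def directional_similarity(a, b):
        """Mean cosine similarity between successive displacement vectors
        (1 = identical headings, -1 = opposite headings)."""
        da = np.diff(np.asarray(a, float), axis=0)
        db = np.diff(np.asarray(b, float), axis=0)
        num = (da * db).sum(axis=1)
        den = np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-12
        return float((num / den).mean())

    t1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
    t2 = [(0, 1), (1, 2), (2, 3), (3, 4)]
    print(locational_similarity(t1, t2))    # 1.0: constant offset of one unit
    print(directional_similarity(t1, t2))   # 1.0: identical headings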
Intact anger recognition in depression despite aberrant visual facial information usage.
Clark, Cameron M; Chiu, Carina G; Diaz, Ruth L; Goghari, Vina M
2014-08-01
Previous literature has indicated abnormalities in facial emotion recognition abilities, as well as deficits in basic visual processes in major depression. However, the literature is unclear on a number of important factors including whether or not these abnormalities represent deficient or enhanced emotion recognition abilities compared to control populations, and the degree to which basic visual deficits might impact this process. The present study investigated emotion recognition abilities for angry versus neutral facial expressions in a sample of undergraduate students with Beck Depression Inventory-II (BDI-II) scores indicative of moderate depression (i.e., ≥20), compared to matched low-BDI-II score (i.e., ≤2) controls via the Bubbles Facial Emotion Perception Task. Results indicated unimpaired behavioural performance in discriminating angry from neutral expressions in the high depressive symptoms group relative to the minimal depressive symptoms group, despite evidence of an abnormal pattern of visual facial information usage. The generalizability of the current findings is limited by the highly structured nature of the facial emotion recognition task used, as well as the use of an analog sample of undergraduates scoring high in self-rated symptoms of depression rather than a clinical sample. Our findings suggest that basic visual processes are involved in emotion recognition abnormalities in depression, demonstrating consistency with the emotion recognition literature in other psychopathologies (e.g., schizophrenia, autism, social anxiety). Future research should seek to replicate these findings in clinical populations with major depression, and assess the association between aberrant face gaze behaviours and symptom severity and social functioning. Copyright © 2014 Elsevier B.V. All rights reserved.
VisBricks: multiform visualization of large, inhomogeneous data.
Lex, Alexander; Schulz, Hans-Jörg; Streit, Marc; Partl, Christian; Schmalstieg, Dieter
2011-12-01
Large volumes of real-world data often exhibit inhomogeneities: vertically in the form of correlated or independent dimensions and horizontally in the form of clustered or scattered data items. In essence, these inhomogeneities form the patterns in the data that researchers are trying to find and understand. Sophisticated statistical methods are available to reveal these patterns; however, the visualization of their outcomes is mostly still performed in a one-view-fits-all manner. In contrast, our novel visualization approach, VisBricks, acknowledges the inhomogeneity of the data and the need for different visualizations that suit the individual characteristics of the different data subsets. The overall visualization of the entire data set is patched together from smaller visualizations: there is one VisBrick for each cluster in each group of interdependent dimensions. Whereas the total impression of all VisBricks together gives a comprehensive high-level overview of the different groups of data, each VisBrick independently shows the details of the group of data it represents. State-of-the-art brushing and visual linking between all VisBricks furthermore allows the comparison of the groupings and the distribution of data items among them. In this paper, we introduce the VisBricks visualization concept, discuss its design rationale and implementation, and demonstrate its usefulness by applying it to a use case from the field of biomedicine. © 2011 IEEE
Kojic, L; Gu, Q; Douglas, R M; Cynader, M S
2001-02-28
Both cholinergic and serotonergic modulatory projections to mammalian striate cortex have been demonstrated to be involved in the regulation of postnatal plasticity, and a striking alteration in the number and intracortical distribution of cholinergic and serotonergic receptors takes place during the critical period for cortical plasticity. As well, agonists of cholinergic and serotonergic receptors have been demonstrated to facilitate induction of long-term synaptic plasticity in visual cortical slices, supporting their involvement in the control of activity-dependent plasticity. We recorded field potentials from layers 4 and 2/3 in visual cortex slices of 60-80-day-old kittens after white matter stimulation, before and after a period of high-frequency stimulation (HFS), in the absence or presence of either cholinergic or serotonergic agonists. At these ages, the HFS protocol alone almost never induced long-term changes of synaptic plasticity in either layers 2/3 or 4. In layer 2/3, agonist stimulation of m1 receptors facilitated induction of long-term potentiation (LTP) with HFS stimulation, while the activation of serotonergic receptors had only a modest effect. By contrast, a strong serotonin-dependent LTP facilitation and insignificant muscarinic effects were observed after HFS within layer 4. The results show that receptor-dependent laminar stratification of synaptic modifiability occurs in the cortex at these ages. This plasticity may underlie a control system gating the experience-dependent changes of synaptic organization within developing visual cortex.
Horn, R R; Williams, A M; Scott, M A; Hodges, N J
2005-07-01
The authors examined the observational learning of 24 participants whom they constrained to use the model by removing intrinsic visual knowledge of results (KR). Matched participants assigned to video (VID), point-light (PL), and no-model (CON) groups performed a soccer-chipping task in which vision was occluded at ball contact. Pre- and posttests were interspersed with alternating periods of demonstration and acquisition. The authors assessed delayed retention 2-3 days later. In support of the visual perception perspective, the participants who observed the models showed immediate and enduring changes to more closely imitate the model's relative motion. While observing the demonstration, the PL group participants were more selective in their visual search than were the VID group participants but did not perform more accurately or learn more.
Visually Lossless JPEG 2000 for Remote Image Browsing
Oh, Han; Bilgin, Ali; Marcellin, Michael
2017-01-01
Image sizes have increased exponentially in recent years. The resulting high-resolution images are often viewed via remote image browsing. Zooming and panning are desirable features in this context, which result in disparate spatial regions of an image being displayed at a variety of (spatial) resolutions. When an image is displayed at a reduced resolution, the quantization step sizes needed for visually lossless quality generally increase. This paper investigates the quantization step sizes needed for visually lossless display as a function of resolution, and proposes a method that effectively incorporates the resulting (multiple) quantization step sizes into a single JPEG2000 codestream. This codestream is JPEG2000 Part 1 compliant and allows for visually lossless decoding at all resolutions natively supported by the wavelet transform as well as arbitrary intermediate resolutions, using only a fraction of the full-resolution codestream. When images are browsed remotely using the JPEG2000 Interactive Protocol (JPIP), the required bandwidth is significantly reduced, as demonstrated by extensive experimental results. PMID:28748112
Image gathering and digital restoration for fidelity and visual quality
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1991-01-01
The fidelity and resolution of the traditional Wiener restorations given in the prevalent digital processing literature can be significantly improved when the transformations between the continuous and discrete representations in image gathering and display are accounted for. However, the visual quality of these improved restorations is also more sensitive to the defects caused by aliasing artifacts, colored noise, and ringing near sharp edges. In this paper, these visual defects are characterized, and methods for suppressing them are presented. It is demonstrated how the visual quality of fidelity-maximized images can be improved when (1) the image-gathering system is specifically designed to enhance the performance of the image-restoration algorithm, and (2) the Wiener filter is combined with interactive Gaussian smoothing, synthetic high edge enhancement, and nonlinear tone-scale transformation. The nonlinear transformation is used primarily to enhance the spatial details that are often obscured when the normally wide dynamic range of natural radiance fields is compressed into the relatively narrow dynamic range of film and other displays.
Giudice, Nicholas A.; Betty, Maryann R.; Loomis, Jack M.
2012-01-01
This research examines whether visual and haptic map learning yield functionally equivalent spatial images in working memory, as evidenced by similar encoding bias and updating performance. In three experiments, participants learned four-point routes either by seeing or feeling the maps. At test, blindfolded participants made spatial judgments about the maps from imagined perspectives that were either aligned or misaligned with the maps as represented in working memory. Results from Experiments 1 and 2 revealed a highly similar pattern of latencies and errors between visual and haptic conditions. These findings extend the well known alignment biases for visual map learning to haptic map learning, provide further evidence of haptic updating, and most importantly, show that learning from the two modalities yields very similar performance across all conditions. Experiment 3 found the same encoding biases and updating performance with blind individuals, demonstrating that functional equivalence cannot be due to visual recoding and is consistent with an amodal hypothesis of spatial images. PMID:21299331
Comparative case study between D3 and highcharts on lustre data visualization
NASA Astrophysics Data System (ADS)
ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott
2013-12-01
One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage the human ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. However, a comparative case study has been done using JavaScript libraries to demonstrate the differences in their capabilities. A web-based application to monitor the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).
Scientific Visualization Using the Flow Analysis Software Toolkit (FAST)
NASA Technical Reports Server (NTRS)
Bancroft, Gordon V.; Kelaita, Paul G.; Mccabe, R. Kevin; Merritt, Fergus J.; Plessel, Todd C.; Sandstrom, Timothy A.; West, John T.
1993-01-01
Over the past few years, the Flow Analysis Software Toolkit (FAST) has matured into a useful tool for visualizing and analyzing scientific data on high-performance graphics workstations. Originally designed for visualizing the results of fluid dynamics research, FAST has demonstrated its flexibility by being used in several other areas of scientific research. These research areas include Earth and space sciences, acid rain and ozone modelling, and automotive design, just to name a few. This paper describes the current status of FAST, including the basic concepts, architecture, existing functionality and features, and some of the known applications for which FAST is being used. A few of the applications, by both NASA and non-NASA agencies, are outlined in more detail. Described in these outlines are the goals of each visualization project, the techniques or 'tricks' used to produce the desired results, and custom modifications to FAST, if any, done to further enhance the analysis. Some of the future directions for FAST are also described.
Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P
2006-08-01
Exact surgical planning is necessary for complex operations on pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from this. A newly developed and modified, powerful raycasting-based 3D volume rendering software (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to the easy handling and high-quality visualization with an enormous gain in information, the presented system is now an established part of routine surgical planning.
Doppler Lidar Vector Retrievals and Atmospheric Data Visualization in Mixed/Augmented Reality
NASA Astrophysics Data System (ADS)
Cherukuru, Nihanth Wagmi
Environmental remote sensing has seen rapid growth in recent years, and Doppler wind lidars have gained popularity primarily due to their non-intrusive, high spatial and temporal measurement capabilities. While early lidar applications relied on radial velocity measurements alone, most of the practical applications in wind farm control and short-term wind prediction require knowledge of the vector wind field. Over the past couple of years, multiple works on lidars have explored three primary methods of retrieving wind vectors, viz. the homogeneous wind-field assumption, computationally intensive variational methods and the use of multiple Doppler lidars. Building on prior research, the current three-part study first demonstrates the capabilities of single and dual Doppler lidar retrievals in capturing downslope windstorm-type flows occurring at Arizona's Barringer Meteor Crater as a part of the METCRAX II field experiment. Next, to address the need for a reliable and computationally efficient vector retrieval for adaptive wind farm control applications, a novel 2D vector retrieval based on a variational formulation was developed, applied on lidar scans from an offshore wind farm and validated with data from a cup and vane anemometer installed on a nearby research platform. Finally, a novel data visualization technique using Mixed Reality (MR)/Augmented Reality (AR) technology is presented to visualize data from atmospheric sensors. MR is an environment in which the user's visual perception of the real world is enhanced with live, interactive, computer-generated sensory input (in this case, data from atmospheric sensors like Doppler lidars). A methodology using modern game development platforms is presented and demonstrated with lidar-retrieved wind fields. In the current study, the possibility of using this technology to visualize data from atmospheric sensors in mixed reality is explored and demonstrated with lidar-retrieved wind fields as well as a few Earth science datasets for education and outreach activities.
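As a textbook-style sketch of the dual-Doppler idea mentioned above (this is not the study's variational 2D retrieval): two lidars viewing the same point from different azimuths each measure a radial velocity vr = u*sin(az) + v*cos(az) for a horizontal wind (u, v) at near-zero elevation, so the pair can be solved as a 2x2 linear system:

    import numpy as np

    def dual_doppler_uv(vr1, az1_deg, vr2, az2_deg):
        """Solve for the horizontal wind (u, v) from two radial velocities
        measured along azimuths az1 and az2 (degrees from north).
        Near-parallel azimuths make the system ill-conditioned."""
        a1, a2 = np.radians([az1_deg, az2_deg])
        A = np.array([[np.sin(a1), np.cos(a1)],
                      [np.sin(a2), np.cos(a2)]])
        return np.linalg.solve(A, np.array([vr1, vr2]))

    # A westerly wind (u = 10 m/s, v = 0) seen from azimuths 90 and 45 degrees:
    print(dual_doppler_uv(10.0, 90.0, 10.0 * np.sin(np.radians(45.0)), 45.0))
    # -> approximately [10., 0.]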
NASA/NOAA: Earth Science Electronic Theater 1999
NASA Technical Reports Server (NTRS)
Hasler, A. Fritz
1999-01-01
The Electronic Theater (E-theater) presents visualizations which span the period from the original Suomi/Hasler animations of the first ATS-1 GEO weather satellite images in 1966 to the latest 1999 NASA Earth Science Vision for the next 25 years. Hot off the SGI-Onyx graphics supercomputer are NASA's visualizations of Hurricanes Mitch, Georges, Fran and Linda. These storms have recently been featured on the covers of National Geographic, Time, Newsweek and Popular Science. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on national and international network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that appeared in the November BAMS. The visualizations are produced by the NASA Goddard Visualization and Analysis Laboratory (VAL/912) and Scientific Visualization Studio (SVS/930), as well as other Goddard and NASA groups, using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the Earth Science E-Theater 1999 recently presented in Tokyo, Paris, Munich, Sydney, Melbourne, Honolulu, Washington, New York, and Dallas. The presentation Jan 11-14 at the AMS meeting in Dallas used a 4-CPU SGI/CRAY Onyx Infinite Reality Super Graphics Workstation with 8 GB RAM and a terabyte disk at 3840 x 1024 resolution, with triple synchronized BarcoReality 9200 projectors on a 60-ft-wide screen. Visualizations will also be featured from the new Earth Today Exhibit, which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense HyperImage remote sensing datasets and three-dimensional numerical model results. We call the data from many new Earth-sensing satellites HyperImage datasets because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing HyperImage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS). The DISS is being used as a high-performance testbed for Next Generation Internet (NGI) VisAnalysis of: 1) El Nino SSTs and NDVI response; 2) the latest GOES 10 5-min rapid scans, a 26-day, 5000-frame movie of March and April '98 weather and tornadic storms; 3) TRMM rainfall and lightning; 4) GOES 9 satellite images/winds and NOAA aircraft radar of Hurricane Luis; 5) lightning detector data merged with GOES image sequences; 6) Japanese GMS, TRMM, and ADEOS data; 7) Chinese FY2 data; 8) Meteosat and ERS/ATSR data; and 9) synchronized manipulation of multiple 3D numerical model views; other examples will also be illustrated. The Image SpreadSheet has been highly successful in producing Earth science visualizations for public outreach.
Many of these visualizations have been widely disseminated through the world wide web pages of the HPCC/LTP/RSD program, which can be found at http://rsd.gsfc.nasa.gov/rsd. The one-minute-interval animations of Hurricane Luis on ABC Nightline and the color perspective rendering of Hurricane Fran published by TIME, LIFE, Newsweek, Popular Science, National Geographic, Scientific American, and the "Weekly Reader" are some of the examples which will be shown.
An "artificial retina" processor for track reconstruction at the full LHC crossing rate
NASA Astrophysics Data System (ADS)
Abba, A.; Bedeschi, F.; Caponio, F.; Cenci, R.; Citterio, M.; Cusimano, A.; Fu, J.; Geraci, A.; Grizzuti, M.; Lusardi, N.; Marino, P.; Morello, M. J.; Neri, N.; Ninci, D.; Petruzzo, M.; Piucci, A.; Punzi, G.; Ristori, L.; Spinella, F.; Stracka, S.; Tonelli, D.; Walsh, J.
2016-07-01
We present the latest results of an R&D study for a specialized processor capable of reconstructing, in a silicon pixel detector, high-quality tracks from high-energy collision events at 40 MHz. The processor applies a highly parallel pattern-recognition algorithm inspired by the quick detection of edges in the mammalian visual cortex. After a detailed study of a real-detector application, demonstrating that online reconstruction of offline-quality tracks is feasible at 40 MHz with sub-microsecond latency, we are implementing a prototype using common high-bandwidth FPGA devices.
An "artificial retina" processor for track reconstruction at the full LHC crossing rate
Abba, A.; F. Bedeschi; Caponio, F.; ...
2015-10-23
Here, we present the latest results of an R&D study for a specialized processor capable of reconstructing, in a silicon pixel detector, high-quality tracks from high-energy collision events at 40 MHz. The processor applies a highly parallel pattern-recognition algorithm inspired by the quick detection of edges in the mammalian visual cortex. After a detailed study of a real-detector application, demonstrating that online reconstruction of offline-quality tracks is feasible at 40 MHz with sub-microsecond latency, we are implementing a prototype using common high-bandwidth FPGA devices.
Novel Integration of Frame Rate Up Conversion and HEVC Coding Based on Rate-Distortion Optimization.
Guo Lu; Xiaoyun Zhang; Li Chen; Zhiyong Gao
2018-02-01
Frame rate up conversion (FRUC) can improve visual quality by interpolating new intermediate frames. However, high frame rate videos produced by FRUC suffer from either higher bitrate consumption or annoying artifacts in the interpolated frames. In this paper, a novel integration framework of FRUC and high efficiency video coding (HEVC) is proposed based on rate-distortion optimization, so that the interpolated frames can be reconstructed at the encoder side with low bitrate cost and high visual quality. First, a joint motion estimation (JME) algorithm is proposed to obtain robust motion vectors, which are shared between FRUC and video coding. Moreover, JME is embedded into the coding loop and employs the original motion search strategy of HEVC coding. Then, frame interpolation is formulated as a rate-distortion optimization problem, where both the coding bitrate consumption and visual quality are taken into account. Due to the absence of the original frames, the distortion model for interpolated frames is established according to the motion vector reliability and the coding quantization error. Experimental results demonstrate that the proposed framework can achieve a 21%-42% reduction in BDBR when compared with traditional methods of FRUC cascaded with coding.
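At its core, the rate-distortion decision referred to above picks, for each frame or block, the option minimising the Lagrangian cost J = D + lambda*R; the candidate options and numbers below are invented purely for illustration:

    def rd_best(candidates, lam):
        """candidates: list of (name, distortion, rate_bits).
        Returns the candidate with the smallest Lagrangian cost J = D + lam * R."""
        return min(candidates, key=lambda c: c[1] + lam * c[2])

    options = [
        ('code_frame_normally', 40.0, 12000),   # lower distortion, many bits
        ('interpolate_with_fruc', 55.0, 300),   # slightly worse, nearly free
    ]
    print(rd_best(options, lam=0.01))  # -> FRUC interpolation wins at this lambda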
Enhanced compressed sensing for visual target tracking in wireless visual sensor networks
NASA Astrophysics Data System (ADS)
Qiang, Guo
2017-11-01
Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the limited resources of the sensor nodes, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single one of these conflicting optimization criteria. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth, and storage requirements. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework in terms of energy and speed under the resource limitations of a visual sensor node.
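For flavor, the generic greedy sparse-recovery loop below (orthogonal matching pursuit with an early-termination test) illustrates the kind of pursuit-with-early-termination scheme described; it is not the paper's customized memory gradient pursuit.

```python
import numpy as np

# Generic orthogonal matching pursuit with an early-termination test, to convey
# the flavor of greedy sparse recovery; NOT the paper's customized algorithm.

def omp(A, y, max_iter=10, tol=1e-6):
    residual, support = y.copy(), []
    for _ in range(max_iter):
        support.append(int(np.argmax(np.abs(A.T @ residual))))   # most correlated atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # re-fit on the support
        residual = y - A[:, support] @ x_s
        if np.linalg.norm(residual) < tol:                        # early termination
            break
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.nonzero(omp(A, y))[0])   # typically recovers the support [5, 42, 77]
```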
Visual traffic jam analysis based on trajectory data.
Wang, Zuchao; Lu, Min; Yuan, Xiaoru; Zhang, Junping; van de Wetering, Huub
2013-12-01
In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated into so-called traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.
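A minimal sketch of how jam events might be chained into a propagation graph is given below; the speed threshold and adjacency structure are assumptions for illustration, not the system's actual parameters.

```python
# Minimal sketch of building "traffic jam propagation graphs" from per-segment
# speeds (illustrative; threshold and adjacency are assumed, not the paper's).

SPEED_JAM = 10.0  # km/h below which a (segment, time bin) counts as jammed

def jam_events(speed):  # speed: dict {(segment, t): km/h}
    return {(seg, t) for (seg, t), v in speed.items() if v < SPEED_JAM}

def propagation_graph(events, adjacency):
    """Link each jam event to jams on adjacent segments in the next time bin."""
    edges = []
    for seg, t in events:
        for nb in adjacency.get(seg, ()):
            if (nb, t + 1) in events:
                edges.append(((seg, t), (nb, t + 1)))
    return edges

speed = {("A", 0): 6.0, ("B", 1): 8.0, ("C", 1): 35.0}
adjacency = {"A": ["B", "C"]}
print(propagation_graph(jam_events(speed), adjacency))  # [(('A', 0), ('B', 1))]
```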
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.
2017-01-01
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790
Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G
2017-11-03
In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.
Srinivasan, Vivek J.; Adler, Desmond C.; Chen, Yueli; Gorczynska, Iwona; Huber, Robert; Duker, Jay S.; Schuman, Joel S.; Fujimoto, James G.
2009-01-01
Purpose: To demonstrate ultrahigh-speed optical coherence tomography (OCT) imaging of the retina and optic nerve head at 249,000 axial scans per second and a wavelength of 1060 nm, and to investigate methods for visualization of the retina, choroid, and optic nerve using the high-density sampling enabled by improved imaging speed. Methods: A swept-source OCT retinal imaging system operating at a speed of 249,000 axial scans per second was developed. Imaging of the retina, choroid, and optic nerve was performed. Display methods such as speckle reduction, slicing along arbitrary planes, en face visualization of reflectance from specific retinal layers, and image compounding were investigated. Results: High-definition and three-dimensional (3D) imaging of the normal retina and optic nerve head was performed. Increased light penetration at 1060 nm enabled improved visualization of the choroid, lamina cribrosa, and sclera. OCT fundus images and 3D visualizations were generated with higher pixel density and fewer motion artifacts than standard spectral/Fourier domain OCT. En face images enabled visualization of the porous structure of the lamina cribrosa, nerve fiber layer, choroid, photoreceptors, RPE, and capillaries of the inner retina. Conclusions: Ultrahigh-speed OCT imaging of the retina and optic nerve head at 249,000 axial scans per second is possible. The improvement of ∼5 to 10× in imaging speed over commercial spectral/Fourier domain OCT technology enables higher-density raster scan protocols and improved performance of en face visualization methods. The combination of the longer wavelength and ultrahigh imaging speed enables excellent visualization of the choroid, sclera, and lamina cribrosa. PMID:18658089
A comparison of visual and quantitative methods to identify interstitial lung abnormalities.
Kliment, Corrine R; Araki, Tetsuro; Doyle, Tracy J; Gao, Wei; Dupuis, Josée; Latourelle, Jeanne C; Zazueta, Oscar E; Fernandez, Isis E; Nishino, Mizuki; Okajima, Yuka; Ross, James C; Estépar, Raúl San José; Diaz, Alejandro A; Lederer, David J; Schwartz, David A; Silverman, Edwin K; Rosas, Ivan O; Washko, George R; O'Connor, George T; Hatabu, Hiroto; Hunninghake, Gary M
2015-10-29
Evidence suggests that individuals with interstitial lung abnormalities (ILA) on a chest computed tomogram (CT) may have an increased risk of developing a clinically significant interstitial lung disease (ILD). Although methods used to identify individuals with ILA on chest CT have included both automated quantitative and qualitative visual inspection methods, there has been no direct comparison between these two methods. To investigate this relationship, we created lung density metrics and compared these to visual assessments of ILA. To provide a comparison with ILA detection based on visual assessment, we generated measures of high attenuation areas (HAAs, defined by attenuation values between -600 and -250 Hounsfield Units) in >4500 participants from both the COPDGene and Framingham Heart studies (FHS). Linear and logistic regressions were used for analyses. Increased measures of HAAs (in ≥10% of the lung) were significantly associated with ILA defined by visual inspection in both cohorts (P < 0.0001); however, the positive predictive values were not very high (19% in COPDGene and 13% in the FHS). In COPDGene, the association between HAAs and ILA defined by visual assessment was modified by the percentage of emphysema and body mass index. Although increased HAAs were associated with reductions in total lung capacity in both cohorts, there was no evidence for an association between measurement of HAAs and MUC5B promoter genotype in the FHS. Our findings demonstrate that increased measures of lung density may be helpful in determining the severity of lung volume reduction but, alone, are not strongly predictive of ILA defined by visual assessment. Moreover, HAAs were not associated with MUC5B promoter genotype.
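The HAA metric itself reduces to counting lung voxels in the stated attenuation window; a minimal sketch (assuming the lung has already been segmented from the CT volume) follows.

```python
import numpy as np

# Sketch of the high attenuation area (HAA) metric described above: the
# fraction of lung voxels with attenuation between -600 and -250 HU.
# Illustrative only; real pipelines first segment the lung from the CT scan.

def haa_percent(lung_hu):
    lung_hu = np.asarray(lung_hu)
    mask = (lung_hu >= -600) & (lung_hu <= -250)
    return 100.0 * mask.sum() / lung_hu.size

# toy volume: mostly normally aerated lung (~ -850 HU) with a denser patch
volume = np.concatenate([np.full(9000, -850.0), np.full(1000, -400.0)])
print(f"HAA = {haa_percent(volume):.1f}%")  # 10.0%, i.e. at the >=10% threshold used above
```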
Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald
2016-11-01
The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Liu, Tao; Jung, HaeWon; Liu, Jianfei; Droettboom, Michael; Tam, Johnny
2017-10-01
The retinal pigment epithelial (RPE) cells contain intrinsic fluorophores that can be visualized using infrared autofluorescence (IRAF). Although IRAF is routinely utilized in the clinic for visualizing retinal health and disease, currently, it is not possible to discern cellular details using IRAF due to limits in resolution. We demonstrate that the combination of adaptive optics (AO) with IRAF (AO-IRAF) enables higher-resolution imaging of the IRAF signal, revealing the RPE mosaic in the living human eye. Quantitative analysis of visualized RPE cells in 10 healthy subjects across various eccentricities demonstrates the possibility for in vivo density measurements of RPE cells, which range from 6505 to 5388 cells/mm² for the areas measured (peaking at the fovea). We also identified cone photoreceptors in relation to underlying RPE cells, and found that RPE cells support on average up to 18.74 cone photoreceptors in the fovea down to an average of 1.03 cone photoreceptors per RPE cell at an eccentricity of 6 mm. Clinical application of AO-IRAF to a patient with retinitis pigmentosa illustrates the potential for AO-IRAF imaging to become a valuable complementary approach to the current landscape of high resolution imaging modalities.
An ambusher's arsenal: chemical crypsis in the puff adder (Bitis arietans)
Miller, Ashadee Kay; Maritz, Bryan; McKay, Shannon; Glaudas, Xavier; Alexander, Graham J.
2015-01-01
Ambush foragers use a hunting strategy that places them at risk of predation by both visual and olfaction-oriented predators. Resulting selective pressures have driven the evolution of impressive visual crypsis in many ambushing species, and may have led to the development of chemical crypsis. However, unlike for visual crypsis, few studies have attempted to demonstrate chemical crypsis. Field observations of puff adders (Bitis arietans) going undetected by several scent-orientated predator and prey species led us to investigate chemical crypsis in this ambushing species. We trained dogs (Canis familiaris) and meerkats (Suricata suricatta) to test whether a canid and a herpestid predator could detect B. arietans using olfaction. We also tested for chemical crypsis in five species of active foraging snakes, predicted to be easily detectable. Dogs and meerkats unambiguously indicated active foraging species, but failed to correctly indicate puff adder, confirming that B. arietans employs chemical crypsis. This is the first demonstration of chemical crypsis anti-predatory behaviour, though the phenomenon may be widespread among ambushers, especially those that experience high mortality rates owing to predation. Our study provides additional evidence for the existence of an ongoing chemically mediated arms race between predator and prey species. PMID:26674950
An ambusher's arsenal: chemical crypsis in the puff adder (Bitis arietans).
Miller, Ashadee Kay; Maritz, Bryan; McKay, Shannon; Glaudas, Xavier; Alexander, Graham J
2015-12-22
Ambush foragers use a hunting strategy that places them at risk of predation by both visual and olfaction-oriented predators. Resulting selective pressures have driven the evolution of impressive visual crypsis in many ambushing species, and may have led to the development of chemical crypsis. However, unlike for visual crypsis, few studies have attempted to demonstrate chemical crypsis. Field observations of puff adders (Bitis arietans) going undetected by several scent-orientated predator and prey species led us to investigate chemical crypsis in this ambushing species. We trained dogs (Canis familiaris) and meerkats (Suricata suricatta) to test whether a canid and a herpestid predator could detect B. arietans using olfaction. We also tested for chemical crypsis in five species of active foraging snakes, predicted to be easily detectable. Dogs and meerkats unambiguously indicated active foraging species, but failed to correctly indicate puff adder, confirming that B. arietans employs chemical crypsis. This is the first demonstration of chemical crypsis anti-predatory behaviour, though the phenomenon may be widespread among ambushers, especially those that experience high mortality rates owing to predation. Our study provides additional evidence for the existence of an ongoing chemically mediated arms race between predator and prey species. © 2015 The Author(s).
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis; ...
2015-02-13
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training—evaluation in a more realistic setting may be necessary.
A novel RPE65 inhibitor CU239 suppresses visual cycle and prevents retinal degeneration.
Shin, Younghwa; Moiseyev, Gennadiy; Petrukhin, Konstantin; Cioffi, Christopher L; Muthuraman, Parthasarathy; Takahashi, Yusuke; Ma, Jian-Xing
2018-07-01
The retinoid visual cycle is an ocular retinoid metabolic pathway specifically dedicated to supporting vertebrate vision. The visual cycle serves not only to generate the light-sensitive visual chromophore 11-cis-retinal, but also to clear toxic byproducts of the normal visual cycle (i.e. all-trans-retinal and its condensation products) from the retina, ensuring both visual function and retinal health. Unfortunately, various conditions including genetic predisposition, environment and aging may contribute to a functional decline of all-trans-retinal clearance. To combat all-trans-retinal-mediated retinal degeneration, we sought to slow down the retinoid influx from the RPE by inhibiting the visual cycle with a small molecule. The present study describes the identification of CU239, a novel non-retinoid inhibitor of RPE65, a key enzyme in the visual cycle. Our data demonstrated that CU239 selectively inhibited the isomerase activity of RPE65, with an IC50 of 6 μM. Further, our results indicated that CU239 inhibited RPE65 via competition with its substrate all-trans-retinyl ester. Mice with systemic injection of CU239 exhibited delayed chromophore regeneration after light bleach, and conferred partial protection of the retina against injury from high-intensity light. Taken together, CU239 is a potent visual cycle modulator and may have therapeutic potential for retinal degeneration. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.
Peigneux, P; Salmon, E; van der Linden, M; Garraux, G; Aerts, J; Delfiore, G; Degueldre, C; Luxen, A; Orban, G; Franck, G
2000-06-01
Humans, like numerous other species, strongly rely on the observation of gestures of other individuals in their everyday life. It is hypothesized that the visual processing of human gestures is sustained by a specific functional architecture, even at an early prelexical cognitive stage, different from that required for the processing of other visual entities. In the present PET study, the neural basis of visual gesture analysis was investigated with functional neuroimaging of brain activity during naming and orientation tasks performed on pictures of either static gestures (upper-limb postures) or tridimensional objects. To prevent automatic object-related cerebral activation during the visual processing of postures, only intransitive postures were selected, i.e., symbolic or meaningless postures which do not imply the handling of objects. Conversely, only intransitive objects which cannot be handled were selected to prevent gesture-related activation during their visual processing. Results clearly demonstrate a significant functional segregation between the processing of static intransitive postures and the processing of intransitive tridimensional objects. Visual processing of objects elicited mainly occipital and fusiform gyrus activity, while visual processing of postures strongly activated the lateral occipitotemporal junction, encroaching upon area MT/V5, involved in motion analysis. These findings suggest that the lateral occipitotemporal junction, working in association with area MT/V5, plays a prominent role in the high-level perceptual analysis of gesture, namely the construction of its visual representation, available for subsequent recognition or imitation. Copyright 2000 Academic Press.
Computational high-resolution optical imaging of the living human retina
NASA Astrophysics Data System (ADS)
Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.
2015-07-01
High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.
Technical note: real-time web-based wireless visual guidance system for radiotherapy.
Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho
2017-06-01
To describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired, audio-visual-aided patient-interactive motion management systems, which are cumbersome to use in routine clinical practice. The Web-based wireless visual display duplicates an existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, the active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two (left- and right-eye view) images for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and the Web clients of a laptop, an iPad mini 2 and an iPhone 6, and (2) the frame rate of visual display on the Web clients in frames per second (fps). The network latency of visual display between the ABC computer and the Web clients was about 100 ms and the frame rate was 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, visual display for virtual reality goggles was successfully shown on the iPhone 6 at 100 ms and 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems, which require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.
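Given paired frame timestamps from the source display and a Web client, latency and frame-rate figures of the kind reported above can be computed as sketched below; the timestamps are invented for illustration.

```python
# Sketch of how latency and frame rate could be quantified from paired timestamps
# logged on the source display and on a Web client (names and values hypothetical).

def latency_and_fps(source_ts, client_ts):
    """source_ts/client_ts: lists of seconds for the same sequence of frames."""
    latencies = [c - s for s, c in zip(source_ts, client_ts)]
    duration = client_ts[-1] - client_ts[0]
    fps = (len(client_ts) - 1) / duration if duration > 0 else float("nan")
    return sum(latencies) / len(latencies), fps

source = [0.000, 0.071, 0.143, 0.214, 0.286]
client = [0.102, 0.170, 0.246, 0.312, 0.388]
mean_latency, fps = latency_and_fps(source, client)
print(f"latency ~ {mean_latency*1000:.0f} ms, {fps:.1f} fps")
```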
Luft, Joseph R.; Wolfley, Jennifer R.; Snell, Edward H.
2011-01-01
Observations of crystallization experiments are classified as specific outcomes and integrated through a phase diagram to visualize solubility and thereby direct subsequent experiments. Specific examples are taken from our high-throughput crystallization laboratory which provided a broad scope of data from 20 million crystallization experiments on 12,500 different biological macromolecules. The methods and rationale are broadly and generally applicable in any crystallization laboratory. Through a combination of incomplete factorial sampling of crystallization cocktails, standard outcome classifications, visualization of outcomes as they relate chemically and application of a simple phase diagram approach we demonstrate how to logically design subsequent crystallization experiments. PMID:21643490
Astronomical Simulations Using Visual Python
NASA Astrophysics Data System (ADS)
Cobb, Michael L.
2007-05-01
The Physics and Engineering Physics Department at Southeast Missouri State University has adopted the “Matter and Interactions I: Modern Mechanics” text by Chabay and Sherwood for our calculus-based introductory physics course. We have fully integrated the use of modeling and simulations by using the Visual Python language, also known as VPython. This powerful, high-level, object-oriented language with full three-dimensional stereo graphics has stimulated both my students and me to find wider applications for our new-found skills. We have successfully modeled gravitational resonances in planetary rings, galaxy collisions, and planetary orbits around binary star systems. This talk will provide a quick overview of VPython and demonstrate the various simulations.
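A minimal VPython sketch of one such simulation, a planet orbiting a star with an Euler-Cromer momentum update, is shown below; the values are illustrative and not taken from the talk.

```python
from vpython import sphere, vector, color, rate, mag

# Minimal VPython sketch of a planet orbiting a star using a momentum update
# (Euler-Cromer), in the spirit of the course projects described above.
# All values are illustrative.

G = 6.7e-11
star = sphere(pos=vector(0, 0, 0), radius=7e9, color=color.yellow)
star.mass = 2e30
planet = sphere(pos=vector(1.5e11, 0, 0), radius=4e9, color=color.cyan, make_trail=True)
planet.mass = 6e24
planet.p = planet.mass * vector(0, 2.98e4, 0)   # initial momentum

dt = 3600.0
while True:
    rate(500)
    r = planet.pos - star.pos
    F = -G * star.mass * planet.mass * r / mag(r) ** 3   # gravitational force
    planet.p = planet.p + F * dt                          # update momentum
    planet.pos = planet.pos + (planet.p / planet.mass) * dt
```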
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
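In the Bayesian formulation this line of work draws on, surprise is the divergence between an observer's posterior and prior beliefs; the toy sketch below computes it for a discrete belief distribution and is not the full saliency model used in the study.

```python
import numpy as np

def bayesian_surprise(prior, posterior):
    """Surprise as KL(posterior || prior) in bits, for a discrete belief distribution."""
    prior, posterior = np.asarray(prior, float), np.asarray(posterior, float)
    return float(np.sum(posterior * np.log2(posterior / prior)))

# an unexpected frame shifts beliefs sharply and therefore scores high surprise
prior = [0.70, 0.20, 0.10]
print(bayesian_surprise(prior, [0.68, 0.22, 0.10]))  # ~0.002 bits (expected frame)
print(bayesian_surprise(prior, [0.10, 0.15, 0.75]))  # ~1.8 bits (surprising frame)
```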
Ohl, Alisha M; Graze, Hollie; Weber, Karen; Kenny, Sabrina; Salvatore, Christie; Wagreich, Sarah
2013-01-01
This study examined the efficacy of a 10-wk Tier 1 Response to Intervention (RtI) program developed in collaboration with classroom teachers to improve the fine motor and visual-motor skills of general education kindergarten students. We recruited 113 students in six elementary schools. Two general education kindergarten classrooms at each school participated in the study. Classrooms were randomly assigned to the intervention and control groups. Fine motor skills, pencil grip, and visual-motor integration were measured at the beginning of the school year and after the 10-wk intervention. The intervention group demonstrated a statistically significant increase in fine motor and visual-motor skills, whereas the control group demonstrated a slight decline in both areas. Neither group demonstrated a change in pencil grip. This study provides preliminary evidence that a Tier 1 RtI program can improve fine motor and visual-motor skills in kindergarten students. Copyright © 2013 by the American Occupational Therapy Association, Inc.
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Knowledge is power: how conceptual knowledge transforms visual cognition.
Collins, Jessica A; Olson, Ingrid R
2014-08-01
In this review, we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks that demonstrate interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are in our understanding of the visual environment, and to demonstrate the need for future research aimed at understanding how such interactions arise in the brain.
Knowledge is Power: How Conceptual Knowledge Transforms Visual Cognition
Collins, Jessica A.; Olson, Ingrid R.
2014-01-01
In this review we synthesize the existing literature demonstrating the dynamic interplay between conceptual knowledge and visual perceptual processing. We consider two theoretical frameworks demonstrating interactions between processes and brain areas traditionally considered perceptual or conceptual. Specifically, we discuss categorical perception, in which visual objects are represented according to category membership, and highlight studies showing that category knowledge can penetrate early stages of visual analysis. We next discuss the embodied account of conceptual knowledge, which holds that concepts are instantiated in the same neural regions required for specific types of perception and action, and discuss the limitations of this framework. We additionally consider studies showing that gaining abstract semantic knowledge about objects and faces leads to behavioral and electrophysiological changes that are indicative of more efficient stimulus processing. Finally, we consider the role that perceiver goals and motivation may play in shaping the interaction between conceptual and perceptual processing. We hope to demonstrate how pervasive such interactions between motivation, conceptual knowledge, and perceptual processing are to our understanding of the visual environment, and demonstrate the need for future research aimed at understanding how such interactions arise in the brain. PMID:24402731
Shin, Dmitriy; Kovalenko, Mikhail; Ersoy, Ilker; Li, Yu; Doll, Donald; Shyu, Chi-Ren; Hammer, Richard
2017-01-01
Background: Visual heuristics of pathology diagnosis is a largely unexplored area where reported studies only provided a qualitative insight into the subject. Uncovering and quantifying pathology visual and nonvisual diagnostic patterns have great potential to improve clinical outcomes and avoid diagnostic pitfalls. Methods: Here, we present PathEdEx, an informatics computational framework that incorporates whole-slide digital pathology imaging with multiscale gaze-tracking technology to create web-based interactive pathology educational atlases and to datamine visual and nonvisual diagnostic heuristics. Results: We demonstrate the capabilities of PathEdEx for mining visual and nonvisual diagnostic heuristics using the first PathEdEx volume of a hematopathology atlas. We conducted a quantitative study on the time dynamics of zooming and panning operations utilized by experts and novices to come to the correct diagnosis. We then performed association rule mining to determine sets of diagnostic factors that consistently result in a correct diagnosis, and studied differences in diagnostic strategies across different levels of pathology expertise using Markov chain (MC) modeling and MC Monte Carlo simulations. To perform these studies, we translated raw gaze points to high-explanatory semantic labels that represent pathology diagnostic clues. Therefore, the outcome of these studies is readily transformed into narrative descriptors for direct use in pathology education and practice. Conclusion: PathEdEx framework can be used to capture best practices of pathology visual and nonvisual diagnostic heuristics that can be passed over to the next generation of pathologists and have potential to streamline implementation of precision diagnostics in precision medicine settings. PMID:28828200
Shin, Dmitriy; Kovalenko, Mikhail; Ersoy, Ilker; Li, Yu; Doll, Donald; Shyu, Chi-Ren; Hammer, Richard
2017-01-01
Visual heuristics of pathology diagnosis is a largely unexplored area where reported studies only provided a qualitative insight into the subject. Uncovering and quantifying pathology visual and nonvisual diagnostic patterns have great potential to improve clinical outcomes and avoid diagnostic pitfalls. Here, we present PathEdEx, an informatics computational framework that incorporates whole-slide digital pathology imaging with multiscale gaze-tracking technology to create web-based interactive pathology educational atlases and to datamine visual and nonvisual diagnostic heuristics. We demonstrate the capabilities of PathEdEx for mining visual and nonvisual diagnostic heuristics using the first PathEdEx volume of a hematopathology atlas. We conducted a quantitative study on the time dynamics of zooming and panning operations utilized by experts and novices to come to the correct diagnosis. We then performed association rule mining to determine sets of diagnostic factors that consistently result in a correct diagnosis, and studied differences in diagnostic strategies across different levels of pathology expertise using Markov chain (MC) modeling and MC Monte Carlo simulations. To perform these studies, we translated raw gaze points to high-explanatory semantic labels that represent pathology diagnostic clues. Therefore, the outcome of these studies is readily transformed into narrative descriptors for direct use in pathology education and practice. PathEdEx framework can be used to capture best practices of pathology visual and nonvisual diagnostic heuristics that can be passed over to the next generation of pathologists and have potential to streamline implementation of precision diagnostics in precision medicine settings.
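A minimal sketch of the Markov-chain idea, estimating a first-order transition matrix over semantic gaze labels, is given below; the label vocabulary and sessions are hypothetical, not the atlas's actual diagnostic clue set.

```python
import numpy as np

# Sketch of the Markov-chain analysis described above: estimate a first-order
# transition matrix over semantic gaze labels (labels here are hypothetical).

LABELS = ["low_power_scan", "zoom_in", "nuclear_detail", "diagnosis"]
IDX = {l: i for i, l in enumerate(LABELS)}

def transition_matrix(sequences):
    counts = np.zeros((len(LABELS), len(LABELS)))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[IDX[a], IDX[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

expert_sessions = [
    ["low_power_scan", "zoom_in", "nuclear_detail", "diagnosis"],
    ["low_power_scan", "zoom_in", "zoom_in", "nuclear_detail", "diagnosis"],
]
print(transition_matrix(expert_sessions).round(2))
```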
The consequence of spatial visual processing dysfunction caused by traumatic brain injury (TBI).
Padula, William V; Capo-Aponte, Jose E; Padula, William V; Singman, Eric L; Jenness, Jonathan
2017-01-01
A bi-modal visual processing model is supported by research on dysfunction following a traumatic brain injury (TBI). TBI causes dysfunction of visual processing affecting binocularity, spatial orientation, posture and balance. Research demonstrates that prescription of prisms influences the plasticity between spatial visual processing and motor-sensory systems, improving visual processing and reducing symptoms following a TBI. The rationale demonstrates that visual processing underlies the functional aspects of binocularity, balance and posture. The bi-modal visual process maintains plasticity for efficiency. Compromise causes Post Trauma Vision Syndrome (PTVS) and Visual Midline Shift Syndrome (VMSS). Rehabilitation through use of lenses, prisms and sectoral occlusion has inter-professional implications, affecting the plasticity of the bi-modal visual process and thereby improving binocularity, spatial orientation, posture and balance. Main outcomes: This review provides an opportunity to create a new perspective of the consequences of TBI on visual processing and the symptoms that are often caused by trauma. It also serves to provide a perspective of visual processing dysfunction that has potential for developing new approaches of rehabilitation. Understanding vision as a bi-modal process facilitates a new perspective of visual processing and the potentials for rehabilitation following a concussion, brain injury or other neurological events.
Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.
2013-01-01
Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157
NASA Astrophysics Data System (ADS)
Rohini, B. S.; Nagabhushana, H.; Darshan, G. P.; Basavaraj, R. B.; Sharma, S. C.; Sudarmani, R.
2017-11-01
In forensic investigation, identification of various types of ridge detail is essential in order to link criminals to the crimes with which they are associated. Although several methods and labeling agents are available to visualize latent fingerprints (LFPs), a simple, accurate, cost-effective, and non-destructive tool is still required. In the present work, CeO2 nanopowders (NPs) were prepared via a simple solution combustion route using Tamarindus indica fruit extract as the fuel. The optimized NPs were utilized for visualization of LFPs on various surfaces by the powder dusting method. Results revealed that the visualized LFPs exhibit Level 3 features, such as pores and ridge contours, under normal light with high sensitivity and without background hindrance. The photometric characteristics of the prepared samples show blue emission, which is highly useful for warm light-emitting diodes. Photocatalytic studies were carried out with different Methylene Blue (MB) dye concentrations and pH values. The obtained results reveal that the CeO2 NPs exhibit excellent catalytic properties and can act as a good catalytic reagent. These findings demonstrate that the prepared NPs are quite useful as a labeling agent for visualization of LFPs, as efficient catalysts for dye degradation, and for solid-state lighting applications.
Chen, Xiaochun; Yu, Shaoming; Yang, Liang; Wang, Jianping; Jiang, Changlong
2016-07-14
The instant and on-site detection of trace aqueous fluoride ions is still a challenge for environmental monitoring and protection. This work demonstrates a new analytical method and its utility in a paper sensor for the visual detection of F(-), on the basis of fluorescence resonance energy transfer (FRET) between photoluminescent graphene oxide (GO) and silver nanoparticles (AgNPs) through the formation of cyclic esters between phenylboronic acid and diol. The fluorescence of GO was quenched by the AgNPs, and trace F(-) can recover the fluorescence of the quenched photoluminescent GO. The increase in fluorescence intensity is proportional to the concentration of F(-) in the range of 0.05-0.55 nM, along with a limit of detection (LOD) as low as 9.07 pM. Following the sensing mechanism, a paper-based sensor for the visual detection of aqueous F(-) has been successfully developed. The paper sensor showed high sensitivity for aqueous F(-), and the LOD could reach as low as 0.1 μM as observed by the naked eye. The very simple and effective strategy reported here could be extended to the visual detection of a wide range of analytes in the environment by the construction of highly efficient FRET nanoprobes.
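The calibration logic behind such detection limits is typically a linear fit over the working range with the LOD taken as 3σ of the blank divided by the slope; the sketch below uses invented numbers, not the paper's data.

```python
import numpy as np

# Sketch of the usual fluorescence-recovery calibration behind such LOD claims:
# fit intensity vs. concentration on the linear range, then LOD = 3*sigma_blank/slope.
# Numbers below are made up for illustration, not the paper's measurements.

conc_nM = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55])       # F- concentration
intensity = np.array([102, 131, 158, 190, 221, 249], float)     # recovered fluorescence
slope, intercept = np.polyfit(conc_nM, intensity, 1)

sigma_blank = 0.9          # standard deviation of blank measurements (assumed)
lod_nM = 3 * sigma_blank / slope
print(f"slope = {slope:.1f} a.u./nM, LOD ~ {lod_nM*1000:.1f} pM")
```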
Lighten the Load: Scaffolding Visual Literacy in Biochemistry and Molecular Biology.
Offerdahl, Erika G; Arneson, Jessie B; Byrne, Nicholas
2017-01-01
The development of scientific visual literacy has been identified as critical to the training of tomorrow's scientists and citizens alike. Within the context of the molecular life sciences in particular, visual representations frequently incorporate various components, such as discipline-specific graphical and diagrammatic features, varied levels of abstraction, and spatial arrangements of visual elements to convey information. Visual literacy is achieved when an individual understands the various ways in which a discipline uses these components to represent a particular way of knowing. Owing to the complex nature of visual representations, the activities through which visual literacy is developed have high cognitive load. Cognitive load can be reduced by first helping students to become fluent with the discrete components of visual representations before asking them to simultaneously integrate these components to extract the intended meaning of a representation. We present a taxonomy for characterizing one component of visual representations-the level of abstraction-as a first step in understanding the opportunities afforded students to develop fluency. Further, we demonstrate how our taxonomy can be used to analyze course assessments and spur discussions regarding the extent to which the development of visual literacy skills is supported by instruction within an undergraduate biochemistry curriculum. © 2017 E. G. Offerdahl et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Carrico, Ruth M; Coty, Mary B; Goss, Linda K; Lajoie, Andrew S
2007-02-01
This pilot study was conducted to determine whether supplementing standard classroom training methods regarding respiratory disease transmission with a visual demonstration could improve the use of personal protective equipment among emergency department nurses. Participants included 20 emergency department registered nurses randomized into 2 groups: control and intervention. The intervention group received supplemental training using the visual demonstration of respiratory particle dispersion. Both groups were then observed throughout their work shifts as they provided care during January-March 2005. Participants who received supplemental visual training correctly utilized personal protective equipment statistically more often than did participants who received only the standard classroom training. Supplementing the standard training methods with a visual demonstration can improve the use of personal protective equipment during care of patients exhibiting respiratory symptoms.
Yang, Weidong; Musser, Siegfried M.
2008-01-01
The utility of single molecule fluorescence (SMF) for understanding biological reactions has been amply demonstrated by a diverse series of studies over the last decade. In large part, the molecules of interest have been limited to those within a small focal volume or near a surface to achieve the high sensitivity required for detecting the inherently weak signals arising from individual molecules. Consequently, the investigation of molecular behavior with high time and spatial resolution deep within cells using SMF has remained challenging. Recently, we demonstrated that narrow-field epifluorescence microscopy allows visualization of nucleocytoplasmic transport at the single cargo level. We describe here the methodological approach that yields 2 ms and ∼15 nm resolution for a stationary particle. The spatial resolution for a mobile particle is inherently worse, and depends on how fast the particle is moving. The signal-to-noise ratio is sufficiently high to directly measure the time a single cargo molecule spends interacting with the nuclear pore complex. Particle tracking analysis revealed that cargo molecules randomly diffuse within the nuclear pore complex, exiting as a result of a single rate-limiting step. We expect that narrow-field epifluorescence microscopy will be useful for elucidating other binding and trafficking events within cells. PMID:16879979
Lee, Sang Bong; Lee, Ho Won; Singh, Thoudam Debraj; Li, Yinghua; Kim, Sang Kyoon; Cho, Sung Jin; Lee, Sang-Woo; Jeong, Shin Young; Ahn, Byeong-Cheol; Choi, Sangil; Lee, In-Kyu; Lim, Dong-Kwon; Lee, Jaetae; Jeon, Yong Hyun
2017-01-01
Reliable and sensitive imaging tools are required to track macrophage migration and provide a better understating of their biological roles in various diseases. Here, we demonstrate the possibility of radioactive iodide-embedded gold nanoparticles (RIe-AuNPs) as a cell tracker for nuclear medicine imaging. To demonstrate this utility, we monitored macrophage migration to carrageenan-induced sites of acute inflammation in living subjects and visualized the effects of anti-inflammatory agents on this process. Macrophage labeling with RIe-AuNPs did not alter their biological functions such as cell proliferation, phenotype marker expression, or phagocytic activity. In vivo imaging with positron-emission tomography revealed the migration of labeled macrophages to carrageenan-induced inflammation lesions 3 h after transfer, with highest recruitment at 6 h and a slight decline of radioactive signal at 24 h; these findings were highly consistent with the data of a bio-distribution study. Treatment with dexamethasone (an anti-inflammation drug) or GSK5182 (an ERRγ inverse agonist) hindered macrophage recruitment to the inflamed sites. Our findings suggest that a cell tracking strategy utilizing RIe-AuNPs will likely be highly useful in research related to macrophage-related disease and cell-based therapies. PMID:28382164
Teaching the Visual Learner: The Use of Visual Summaries in Marketing Education
ERIC Educational Resources Information Center
Clarke, Irvine, III.; Flaherty, Theresa B.; Yankey, Michael
2006-01-01
Approximately 40% of college students are visual learners, preferring to be taught through pictures, diagrams, flow charts, timelines, films, and demonstrations. Yet marketing instruction remains heavily reliant on presenting content primarily through verbal cues such as written or spoken words. Without visual instruction, some students may be…
A Bilateral Advantage for Storage in Visual Working Memory
ERIC Educational Resources Information Center
Umemoto, Akina; Drew, Trafton; Ester, Edward F.; Awh, Edward
2010-01-01
Various studies have demonstrated enhanced visual processing when information is presented across both visual hemifields rather than in a single hemifield (the "bilateral advantage"). For example, Alvarez and Cavanagh (2005) reported that observers were able to track twice as many moving visual stimuli when the tracked items were presented…
ERP Evidence of Visualization at Early Stages of Visual Processing
ERIC Educational Resources Information Center
Page, Jonathan W.; Duhamel, Paul; Crognale, Michael A.
2011-01-01
Recent neuroimaging research suggests that early visual processing circuits are activated similarly during visualization and perception but have not demonstrated that the cortical activity is similar in character. We found functional equivalency in cortical activity by recording evoked potentials while color and luminance patterns were viewed and…
Physical Models that Provide Guidance in Visualization Deconstruction in an Inorganic Context
ERIC Educational Resources Information Center
Schiltz, Holly K.; Oliver-Hoyo, Maria T.
2012-01-01
Three physical model systems have been developed to help students deconstruct the visualization needed when learning symmetry and group theory. The systems provide students with physical and visual frames of reference to facilitate the complex visualization involved in symmetry concepts. The permanent reflection plane demonstration presents an…
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
2011-01-01
Background: Fatigue is a common complaint among elementary and junior high school students, and is known to be associated with reduced academic performance. Recently, we demonstrated that fatigue was correlated with decreased cognitive function in these students. However, no studies have identified cognitive predictors of fatigue. Therefore, we attempted to determine independent cognitive predictors of fatigue in these students. Methods: We performed a prospective cohort study. One hundred and forty-two elementary and junior high school students without fatigue participated. They completed a variety of paper-and-pencil tests, including list learning and list recall tests, kana pick-out test, semantic fluency test, figure copying test, digit span forward test, and symbol digit modalities test. The participants also completed computerized cognitive tests (tasks A to E on the modified advanced trail making test). These cognitive tests were used to evaluate motor- and information-processing speed, immediate and delayed memory function, auditory and visual attention, divided and switching attention, retrieval of learned material, and spatial construction. One year after the tests, a questionnaire about fatigue (Japanese version of the Chalder Fatigue Scale) was administered to all the participants. Results: After the follow-up period, we confirmed 40 cases of fatigue among 118 students. In multivariate logistic regression analyses adjusted for grades and gender, poorer performance on visual information-processing speed and attention tasks was associated with increased risk of fatigue. Conclusions: Reduced visual information-processing speed and poor attention are independent predictors of fatigue in elementary and junior high school students. PMID:21672212
Mizuno, Kei; Tanaka, Masaaki; Fukuda, Sanae; Yamano, Emi; Shigihara, Yoshihito; Imai-Matsumura, Kyoko; Watanabe, Yasuyoshi
2011-06-14
Fatigue is a common complaint among elementary and junior high school students, and is known to be associated with reduced academic performance. Recently, we demonstrated that fatigue was correlated with decreased cognitive function in these students. However, no studies have identified cognitive predictors of fatigue. Therefore, we attempted to determine independent cognitive predictors of fatigue in these students. We performed a prospective cohort study. One hundred and forty-two elementary and junior high school students without fatigue participated. They completed a variety of paper-and-pencil tests, including list learning and list recall tests, kana pick-out test, semantic fluency test, figure copying test, digit span forward test, and symbol digit modalities test. The participants also completed computerized cognitive tests (tasks A to E on the modified advanced trail making test). These cognitive tests were used to evaluate motor- and information-processing speed, immediate and delayed memory function, auditory and visual attention, divided and switching attention, retrieval of learned material, and spatial construction. One year after the tests, a questionnaire about fatigue (Japanese version of the Chalder Fatigue Scale) was administered to all the participants. After the follow-up period, we confirmed 40 cases of fatigue among 118 students. In multivariate logistic regression analyses adjusted for grades and gender, poorer performance on visual information-processing speed and attention tasks was associated with increased risk of fatigue. Reduced visual information-processing speed and poor attention are independent predictors of fatigue in elementary and junior high school students. © 2011 Mizuno et al; licensee BioMed Central Ltd.
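A hedged sketch of this kind of adjusted logistic model, with synthetic data standing in for the study's scores, is shown below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the type of adjusted logistic model described above: fatigue (0/1)
# regressed on processing-speed and attention scores with grade and gender as
# covariates. Data are synthetic; this is not the study's dataset.

rng = np.random.default_rng(0)
n = 118
speed = rng.normal(0, 1, n)        # visual information-processing speed (z-score)
attention = rng.normal(0, 1, n)    # visual attention score (z-score)
grade = rng.integers(1, 10, n)     # school grade
gender = rng.integers(0, 2, n)
logit = -0.8 - 0.9 * speed - 0.7 * attention          # poorer scores -> more fatigue
fatigue = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([speed, attention, grade, gender])
model = LogisticRegression().fit(X, fatigue)
odds_ratios = np.exp(model.coef_[0])
print(dict(zip(["speed", "attention", "grade", "gender"], odds_ratios.round(2))))
```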
Murty, Vishnu P.; Tompary, Alexa; Adcock, R. Alison
2017-01-01
Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower reward information during an experience, memory for high value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. SIGNIFICANCE STATEMENT Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predicts memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. PMID:28100737
Design of an Eye Limiting Resolution Visual System Using Commercial-Off-the-Shelf Equipment
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Giovannetti, Dean P.
2008-01-01
A feasibility study was conducted to determine if a flight simulator with an eye-limiting resolution out-the-window (OTW) visual system could be built using commercial off-the-shelf (COTS) technology and used to evaluate the visual performance of Air Force pilots in an operational context. Results of this study demonstrate that an eye-limiting OTW visual system can be built using COTS technology. Further, a series of operationally-based tasks linked to clinical vision tests can be used within the synthetic environment to demonstrate and quantify the correlation between vision and operational aviation performance.
Efficiency of bowel preparation for capsule endoscopy examination: a meta-analysis.
Niv, Yaron
2008-03-07
Good preparation before endoscopic procedures is essential for successful visualization. The small bowel is difficult to evaluate because of its length and complex configuration. A meta-analysis was conducted of studies comparing small bowel visualization by capsule endoscopy with and without preparation. Medical databases were searched for all studies investigating preparation for capsule endoscopy of the small bowel up to July 31, 2007. Studies that scored bowel cleanness and measured gastric and small bowel transit time and rate of cecum visualization were included. The primary endpoint was the quality of bowel visualization. The secondary endpoints were transit times and the proportion of examinations that demonstrated the cecum, with and without preparation. Meta-analysis was performed with StatsDirect statistical software, version 2.6.1 (http://statsdirect.com). Eight studies met the inclusion criteria. Bowel visualization was scored as "good" in 78% of the examinations performed with preparation and 49% of those performed without (P<0.0001). There were no significant differences in transit times or in the proportion of examinations that demonstrated the cecum with and without preparation. Capsule endoscopy preparation improves the quality of small bowel visualization, but has no effect on transit times or demonstration of the cecum.
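For readers unfamiliar with meta-analysis of proportions, the pooling step can be illustrated with a minimal fixed-effect (inverse-variance) sketch in Python; the per-study counts below are hypothetical stand-ins for the eight included studies, and this is not the StatsDirect procedure used by the author.

```python
import numpy as np

# Hypothetical per-study counts (events = "good" visualization); these are NOT
# the data analyzed in the paper and only illustrate the pooling arithmetic.
events_prep   = np.array([40, 25, 30])   # with preparation
totals_prep   = np.array([50, 35, 38])
events_noprep = np.array([22, 18, 20])   # without preparation
totals_noprep = np.array([48, 36, 40])

def pooled_proportion(events, totals):
    """Fixed-effect (inverse-variance) pooled proportion with a 95% CI."""
    p = events / totals
    var = p * (1 - p) / totals           # binomial variance of each study proportion
    w = 1.0 / var                        # inverse-variance weights
    pooled = np.sum(w * p) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

for label, e, n in [("with prep", events_prep, totals_prep),
                    ("without prep", events_noprep, totals_noprep)]:
    est, ci = pooled_proportion(e, n)
    print(f"{label}: pooled proportion = {est:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```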
The human visual cortex responds to gene therapy–mediated recovery of retinal function
Ashtari, Manzar; Cyckowski, Laura L.; Monroe, Justin F.; Marshall, Kathleen A.; Chung, Daniel C.; Auricchio, Alberto; Simonelli, Francesca; Leroy, Bart P.; Maguire, Albert M.; Shindler, Kenneth S.; Bennett, Jean
2011-01-01
Leber congenital amaurosis (LCA) is a rare degenerative eye disease, linked to mutations in at least 14 genes. A recent gene therapy trial in patients with LCA2, who have mutations in RPE65, demonstrated that subretinal injection of an adeno-associated virus (AAV) carrying the normal cDNA of that gene (AAV2-hRPE65v2) could markedly improve vision. However, it remains unclear how the visual cortex responds to recovery of retinal function after prolonged sensory deprivation. Here, 3 of the gene therapy trial subjects, treated at ages 8, 9, and 35 years, underwent functional MRI within 2 years of unilateral injection of AAV2-hRPE65v2. All subjects showed increased cortical activation in response to high- and medium-contrast stimuli after exposure to the treated compared with the untreated eye. Furthermore, we observed a correlation between the visual field maps and the distribution of cortical activations for the treated eyes. These data suggest that despite severe and long-term visual impairment, treated LCA2 patients have intact and responsive visual pathways. In addition, these data suggest that gene therapy resulted in not only sustained and improved visual ability, but also enhanced contrast sensitivity. PMID:21606598
A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias
2015-10-01
A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that could potentially be used for such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs are reliably modulated by attention to a target stimulus in an eyes-closed, gaze-independent condition, and can be classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.
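Single-trial ERP classification of the kind behind the reported online accuracies can be sketched as follows; this is a generic illustration (synthetic epochs, window-averaged amplitude features, linear discriminant analysis), not the authors' pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 100, 200, 100            # 1-s epochs at 100 Hz (synthetic)
t = np.arange(n_samples) / fs

# Synthetic single-channel epochs: targets carry a P300-like bump, nontargets do not.
p300 = 2.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
targets    = rng.normal(0, 1, (n_epochs // 2, n_samples)) + p300
nontargets = rng.normal(0, 1, (n_epochs // 2, n_samples))
X_raw = np.vstack([targets, nontargets])
y = np.array([1] * (n_epochs // 2) + [0] * (n_epochs // 2))

# Features: mean amplitude in consecutive 100-ms windows (10 features per epoch).
X = X_raw.reshape(n_epochs, 10, -1).mean(axis=2)

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated target/nontarget accuracy: {acc:.2f}")
```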
Effects of Horizontal Acceleration on Human Visual Acuity and Stereopsis
Horng, Chi-Ting; Hsieh, Yih-Shou; Tsai, Ming-Ling; Chang, Wei-Kang; Yang, Tzu-Hung; Yauan, Chien-Han; Wang, Chih-Hung; Kuo, Wu-Hsien; Wu, Yi-Chang
2015-01-01
The effect of horizontal acceleration on human visual acuity and stereopsis is demonstrated in this study. Twenty participants (mean age 22.6 years) were enrolled in the experiment. Acceleration from two different directions was performed at the Taiwan High-Speed Rail Laboratory. Gx and Gy (< and >0.1 g) were produced on an accelerating platform where the subjects stood. The visual acuity and stereopsis of the right eye were measured before and during the acceleration. Acceleration <0.1 g in the X- or Y-axis did not affect dynamic vision and stereopsis. Vision decreased (mean from 0.02 logMAR to 0.25 logMAR) and stereopsis declined significantly (mean from 40 s to 60.2 s of arc) when Gx > 0.1 g. Visual acuity worsened (mean from 0.02 logMAR to 0.19 logMAR) and poor stereopsis was noted (mean from 40 s to 50.2 s of arc) when Gy > 0.1 g. The effect of acceleration from the X-axis on the visual system was higher than that from the Y-axis. During acceleration, most subjects complained of ocular strain when reading. To our knowledge, this study is the first to report the exact levels of visual function loss during Gx and Gy. PMID:25607601
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation's display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator's highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training; evaluation in a more realistic setting may be necessary.
Balasubramaniam, Ramesh
2014-01-01
Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several tens of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low- and high-pass filtering routines. Visual feedback delays affected low-frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundred milliseconds. PMID:24614576
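The low/high frequency decomposition of the COP signal can be sketched with zero-phase Butterworth filters; the sampling rate and 0.4 Hz cutoff below are illustrative assumptions rather than the values used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(1)
cop = np.cumsum(rng.normal(0, 0.05, t.size))  # synthetic COP trace (random walk)

cutoff = 0.4                                  # illustrative cutoff (Hz)
b_lo, a_lo = butter(4, cutoff, btype="low", fs=fs)
b_hi, a_hi = butter(4, cutoff, btype="high", fs=fs)

cop_slow = filtfilt(b_lo, a_lo, cop)          # slow postural corrections
cop_fast = filtfilt(b_hi, a_hi, cop)          # rapid COP displacements

print("SD of slow component:", np.std(cop_slow))
print("SD of fast component:", np.std(cop_fast))
```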
Case study of visualizing global user download patterns using Google Earth and NASA World Wind
NASA Astrophysics Data System (ADS)
Zong, Ziliang; Job, Joshua; Zhang, Xuesong; Nijim, Mais; Qin, Xiao
2012-01-01
Geo-visualization is significantly changing the way we view spatial data and discover information. On the one hand, large volumes of spatial data are generated every day. On the other hand, these data are not well utilized due to the lack of free and easy-to-use data-visualization tools. This becomes even worse when most of the spatial data remain in the form of plain text, such as log files. This paper describes a way of visualizing massive plain-text spatial data at no cost by utilizing Google Earth and NASA World Wind. We illustrate our methods by visualizing over 170,000 global download requests for satellite images maintained by the Earth Resources Observation and Science (EROS) Center of the U.S. Geological Survey (USGS). Our visualization results identify the most popular satellite images around the world and reveal global user download patterns. The benefits of this research are: 1. assisting in improving the satellite image downloading services provided by USGS, and 2. providing a proxy for analyzing the "hot spot" areas of research. Most importantly, our methods demonstrate an easy way to geo-visualize massive textual spatial data, which is highly applicable to mining spatially referenced data and information across a wide variety of research domains (e.g., hydrology, agriculture, atmospheric science, natural hazards, and global climate change).
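Turning plain-text request logs into a Google Earth overlay largely amounts to emitting KML placemarks. A minimal sketch, assuming a hypothetical "latitude, longitude, request count" record format rather than the actual EROS log layout:

```python
# Minimal sketch: turn (lat, lon, count) text records into a KML file that
# Google Earth (or NASA World Wind with a KML loader) can display.
# The record format here is hypothetical, not the actual EROS/USGS log layout.
records = [
    (46.8, -100.5, 120),   # latitude, longitude, number of download requests
    (35.7, 139.7, 310),
    (-33.9, 151.2, 45),
]

placemarks = []
for lat, lon, count in records:
    placemarks.append(f"""  <Placemark>
    <name>{count} requests</name>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>""")

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
       + "\n".join(placemarks) + "\n</Document>\n</kml>\n")

with open("downloads.kml", "w") as f:
    f.write(kml)   # open this file in Google Earth to view the placemarks
```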
The Role of Research Institutions in Building Visual Content for the Geowall
NASA Astrophysics Data System (ADS)
Newman, R. L.; Kilb, D.; Nayak, A.; Kent, G.
2003-12-01
The advent of the low-cost Geowall (http://www.geowall.org) allows researchers and students to study 3-D geophysical datasets in a collaborative setting. Although 3-D visual objects can aid the understanding of geological principles in the classroom, it is often difficult for staff to develop their own custom visual objects. This is a fundamentally important need that research institutions storing large (terabyte) geophysical datasets can address. At Scripps Institution of Oceanography (SIO) we regularly explore gigabyte 3-D visual objects in the SIO Visualization Center (http://siovizcenter.ucsd.edu). Exporting these datasets for use with the Geowall has become routine with current software applications such as IVS's Fledermaus and iView3D. We have developed visualizations that incorporate topographic, bathymetric, and 3-D volumetric crustal datasets to demonstrate fundamental principles of earth science including plate tectonics, seismology, sea-level change, and neotectonics. These visualizations are available for download either via FTP or a website, and have been incorporated into graduate and undergraduate classes at both SIO and the University of California, San Diego. Additionally, staff at the Visualization Center develop content for external schools and colleges such as the Preuss School, a local middle/high school, where a Geowall was installed in February 2003 and a curriculum was developed for 8th grade students. We have also developed custom visual objects for researchers and educators at diverse educational institutions across the globe. At SIO we encourage graduate students and researchers alike to develop visual objects of their datasets through innovative classes and competitions. This not only assists the researchers themselves in understanding their data but also increases the number of visual objects freely available to geoscience educators worldwide.
Change blindness and visual memory: visual representations get rich and act poor.
Varakin, D Alexander; Levin, Daniel T
2006-02-01
Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
NASA Astrophysics Data System (ADS)
Samant, Pratik; Hernandez, Armando; Conklin, Shelby; Xiang, Liangzhong
2017-08-01
We present our results in developing nanoscale photoacoustic tomography (nPAT) for label-free super-resolution imaging in 3D. We have made progress in the development of nPAT and have acquired our first signal. We have also performed simulations demonstrating that nPAT is a viable imaging modality for the visualization of malaria-infected red blood cells (RBCs). Our results demonstrate that nPAT is both feasible and powerful for high-resolution, label-free imaging of RBCs.
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
Using a Function Generator to Produce Auditory and Visual Demonstrations.
ERIC Educational Resources Information Center
Woods, Charles B.
1998-01-01
Identifies a function generator as an instrument that produces time-varying electrical signals of frequency, wavelength, and amplitude. Sending these signals to a speaker or a light-emitting diode can demonstrate how specific characteristics of auditory or visual stimuli relate to perceptual experiences. Provides specific instructions for using…
Visual Electricity Demonstrator
NASA Astrophysics Data System (ADS)
Lincoln, James
2017-09-01
The Visual Electricity Demonstrator (VED) is a linear diode array that serves as a dynamic alternative to an ammeter. A string of 48 red light-emitting diodes (LEDs) blink one after another to create the illusion of a moving current. Having the current represented visually builds an intuitive and qualitative understanding about what is happening in a circuit. In this article, I describe several activities for this device and explain how using this technology in the classroom can enhance the understanding and appreciation of physics.
Unsupervised Structure Detection in Biomedical Data.
Vogt, Julia E
2015-01-01
A major challenge in computational biology is to find simple representations of high-dimensional data that best reveal the underlying structure. In this work, we present an intuitive and easy-to-implement method based on ranked neighborhood comparisons that detects structure in unsupervised data. The method is based on ordering objects in terms of similarity and on the mutual overlap of nearest neighbors. This basic framework was originally introduced in the field of social network analysis to detect actor communities. We demonstrate that the same ideas can successfully be applied to biomedical data sets in order to reveal complex underlying structure. The algorithm is very efficient and works on distance data directly without requiring a vectorial embedding of data. Comprehensive experiments demonstrate the validity of this approach. Comparisons with state-of-the-art clustering methods show that the presented method outperforms hierarchical methods as well as density-based and model-based clustering. A further advantage of the method is that it simultaneously provides a visualization of the data. Especially in biomedical applications, the visualization of data can be used as a first pre-processing step when analyzing real-world data sets to get an intuition of the underlying data structure. We apply this model to synthetic data as well as to various biomedical data sets, which demonstrate the high quality and usefulness of the inferred structure.
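The core ingredient, ranking objects by similarity and comparing nearest-neighbor sets, can be sketched directly from a distance matrix; the fragment below illustrates that idea and is not the authors' full community-detection algorithm.

```python
import numpy as np

def neighborhood_overlap(D, k=10):
    """Pairwise similarity as the mutual overlap of k-nearest-neighbor sets,
    computed directly from a distance matrix D (no vectorial embedding needed)."""
    n = D.shape[0]
    order = np.argsort(D, axis=1)                       # objects ranked by similarity
    knn = [set(order[i][order[i] != i][:k]) for i in range(n)]   # drop self, keep k nearest
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            S[i, j] = len(knn[i] & knn[j]) / k          # fraction of shared neighbors
    return S

# Toy example: two well-separated groups of points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(6, 1, (20, 5))])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
S = neighborhood_overlap(D, k=5)
print("mean within-group overlap:", S[:20, :20].mean().round(2))
print("mean between-group overlap:", S[:20, 20:].mean().round(2))
```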
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
The locus of origin of augmenting and reducing of visual evoked potentials in rat brain.
Siegel, J; Gayle, D; Sharma, A; Driscoll, P
1996-07-01
Humans who are high sensation seekers and cats who demonstrate comparable behavioral traits show increasing amplitudes of the early components of the cortical visual evoked potential (VEP) to increasing intensities of light flash; low sensation seekers show VEP reducing. Roman high-avoidance (RHA) and Roman low-avoidance (RLA) rats have behavioral traits comparable to human and cat high and low sensation seekers, respectively. Previously, we showed that RHA and RLA rats are cortical VEP augmenters and reducers, respectively. The goal of this study was to determine if augmenting-reducing is in fact a property of the visual cortex or if it originates at the lateral geniculate nucleus and is merely reflected in recordings from the cortex. EPs to five flash intensities were recorded from the visual cortex and dorsal lateral geniculate of RHA and RLA rats. As in the previous study, the slope of the first cortical component as a function of flash intensity was greater in the RHA than in the RLA rats. The amplitude of the geniculate component that has a latency shorter than the first cortical component was no different in the two lines of rats. The finding from the cortex confirms the earlier finding of augmenting and reducing in RHA and RLA rats, respectively. The major new finding is that the augmenting-reducing difference recorded at the cortex does not occur at the thalamus, indicating that it is truly a cortical phenomenon.
Timing the impact of literacy on visual processing
Pegado, Felipe; Comerlato, Enio; Ventura, Fabricio; Jobert, Antoinette; Nakamura, Kimihiro; Buiatti, Marco; Ventura, Paulo; Dehaene-Lambertz, Ghislaine; Kolinsky, Régine; Morais, José; Braga, Lucia W.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
Learning to read requires the acquisition of an efficient visual procedure for quickly recognizing fine print. Thus, reading practice could induce a perceptual learning effect in early vision. Using functional magnetic resonance imaging (fMRI) in literate and illiterate adults, we previously demonstrated an impact of reading acquisition on both high- and low-level occipitotemporal visual areas, but could not resolve the time course of these effects. To clarify whether literacy affects early vs. late stages of visual processing, we measured event-related potentials to various categories of visual stimuli in healthy adults with variable levels of literacy, including completely illiterate subjects, early-schooled literate subjects, and subjects who learned to read in adulthood (ex-illiterates). The stimuli included written letter strings forming pseudowords, on which literacy is expected to have a major impact, as well as faces, houses, tools, checkerboards, and false fonts. To evaluate the precision with which these stimuli were encoded, we studied repetition effects by presenting the stimuli in pairs composed of repeated, mirrored, or unrelated pictures from the same category. The results indicate that reading ability is correlated with a broad enhancement of early visual processing, including increased repetition suppression, suggesting better exemplar discrimination, and increased mirror discrimination, as early as ∼100–150 ms in the left occipitotemporal region. These effects were found with letter strings and false fonts, but also were partially generalized to other visual categories. Thus, learning to read affects the magnitude, precision, and invariance of early visual processing. PMID:25422460
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
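The analyze-then-compress idea can be illustrated with a toy sketch that extracts local features at the "node" and compares their size with the raw frame; ORB is used here as a freely available stand-in for SIFT/SURF, and no intraframe/interframe descriptor coding is attempted.

```python
import numpy as np
import cv2  # OpenCV; ORB stands in for the SIFT/SURF features discussed in the paper

# Synthetic 8-bit "video frame" (in practice this would come from a camera node).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (480, 640), dtype=np.uint8)

orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(frame, None)
if descriptors is None:                       # guard: no keypoints detected
    descriptors = np.empty((0, 32), dtype=np.uint8)

# Analyze-then-compress: only the (optionally further encoded) descriptors leave
# the node; compress-then-analyze would instead transmit the whole frame.
frame_bytes = frame.nbytes
feature_bytes = descriptors.nbytes + 4 * 2 * len(keypoints)   # descriptors + x,y coords
print(f"raw frame: {frame_bytes} bytes, features: {feature_bytes} bytes "
      f"({len(keypoints)} keypoints)")
```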
NASA Astrophysics Data System (ADS)
Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka
2007-03-01
Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance because misjudgment of the situation causes severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve accuracy in detecting pathological lymph nodes by newcomers and less experienced professionals, it is essential to understand how surgical experts solve the relevant visual and recognition tasks. Using eye tracking, and especially the newly developed attention landscape visualizations, we examined whether visualization options, for example 3D models instead of CT data, help to increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other techniques, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons exhibited a higher concentration of attention on a limited number of areas of interest and demonstrated fewer saccadic eye movements, indicating better orientation.
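An attention landscape is essentially a duration-weighted, smoothed map of fixation density. A minimal sketch with synthetic fixations and an arbitrary smoothing bandwidth (not the authors' software):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

width, height = 512, 512                       # display size in pixels (assumed)
rng = np.random.default_rng(0)

# Synthetic fixations: (x, y, duration in ms); real data would come from an eye tracker.
fix = np.column_stack([rng.normal(260, 40, 50),
                       rng.normal(250, 30, 50),
                       rng.uniform(100, 400, 50)])

landscape = np.zeros((height, width))
for x, y, dur in fix:
    xi, yi = int(round(x)), int(round(y))
    if 0 <= xi < width and 0 <= yi < height:
        landscape[yi, xi] += dur               # weight each fixation by its duration

landscape = gaussian_filter(landscape, sigma=25)   # smooth into a continuous "landscape"
peak_y, peak_x = np.unravel_index(np.argmax(landscape), landscape.shape)
print(f"attention peak near x={peak_x}, y={peak_y}")
```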
MemAxes: Visualization and Analytics for Characterizing Complex Memory Performance Behaviors.
Gimenez, Alfredo; Gamblin, Todd; Jusufi, Ilir; Bhatele, Abhinav; Schulz, Martin; Bremer, Peer-Timo; Hamann, Bernd
2018-07-01
Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.
Tremblay, Emmanuel; Vannasing, Phetsamone; Roy, Marie-Sylvie; Lefebvre, Francine; Kombate, Damelan; Lassonde, Maryse; Lepore, Franco; McKerral, Michelle; Gallagher, Anne
2014-01-01
In the past decades, multiple studies have examined developmental patterns of the visual system in healthy infants. During the first year of life, differential maturational changes have been observed between the Magnocellular (M) and the Parvocellular (P) visual pathways. However, few studies have investigated P and M system development in infants born prematurely. The aim of the present study was to characterize P and M system maturational differences between healthy preterm and fullterm infants through a critical period of visual maturation: the first year of life. Using a cross-sectional design, high-density electroencephalography (EEG) was recorded in 31 healthy preterm and 41 fullterm infants of 3, 6, or 12 months (corrected age for premature babies). Three visual stimulations varying in contrast and spatial frequency were presented to stimulate preferentially the M pathway, the P pathway, or both systems simultaneously during EEG recordings. Results from early visual evoked potentials in response to the stimulation that activates both systems simultaneously revealed longer N1 latencies and smaller P1 amplitudes in preterm infants compared to fullterms. Moreover, preterms showed longer N1 and P1 latencies in response to stimuli assessing the M pathway at 3 months. No differences between preterms and fullterms were found when using the preferential P system stimulation. In order to identify the cerebral generator of each visual response, distributed source analyses were computed in 12-month-old infants using LORETA. Source analysis demonstrated an activation of the parietal dorsal region in fullterm infants in response to the preferential M pathway stimulation, which was not seen in the preterms. Overall, these findings suggest that Magnocellular pathway development is affected in premature infants. Although our VEP results suggest that premature children overcome, at least partially, the visual developmental delay with time, source analyses reveal abnormal brain activation of the Magnocellular pathway at 12 months of age. PMID:25268226
Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.
Pigarev, Ivan N; Levichkina, Ekaterina V
2016-01-01
Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free-gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with the spatial frequency of one grating being twice that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with twice the spatial frequency should occur at half the distance from the screen, where the retinal projection is the same size. For hypothetical neurons selective to absolute depth, the location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
ERIC Educational Resources Information Center
GROPPER, GEORGE L.
This is a report of two studies in which principles of programed instruction were adapted for visual presentations. Scientific demonstrations were prepared with a visual program and a verbal program on (1) Archimedes' law and (2) force and pressure. Results suggested that responses are more readily brought under the control of visual presentation…
Beyond sensory images: Object-based representation in the human ventral pathway
Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.
2004-01-01
We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396
Haptic guidance of overt visual attention.
List, Alexandra; Iordanescu, Lucica; Grabowecky, Marcia; Suzuki, Satoru
2014-11-01
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held (never viewed and out of sight) an item of a specific shape in their hands. In two experiments, we demonstrated that the time for the eyes to reach a target (a measure of overt visual attention) was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets' shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
Visual perspective taking impairment in children with autistic spectrum disorder.
Hamilton, Antonia F de C; Brindley, Rachel; Frith, Uta
2009-10-01
Evidence from typical development and neuroimaging studies suggests that level 2 visual perspective taking (the knowledge that different people may see the same thing differently at the same time) is a mentalising task. Thus, we would expect children with autism, who fail typical mentalising tasks like false belief, to perform poorly on level 2 visual perspective taking as well. However, prior data on this issue are inconclusive. We re-examined this question, testing a group of 23 young autistic children, aged around 8 years with a verbal mental age of around 4 years, and three groups of typical children (n=60) ranging in age from 4 to 8 years, on a level 2 visual perspective task and a closely matched mental rotation task. The results demonstrate that, relative to typical children, autistic children have difficulty with visual perspective taking compared to the mental rotation task. Furthermore, performance on the level 2 visual perspective taking task correlated with theory of mind performance. These findings resolve discrepancies in previous studies of visual perspective taking in autism, and demonstrate that level 2 visual perspective taking is a mentalising task.
Exploring patterns enriched in a dataset with contrastive principal component analysis.
Abid, Abubakar; Zhang, Martin J; Bagaria, Vivek K; Zou, James
2018-05-30
Visualization and exploration of high-dimensional data is a ubiquitous challenge across disciplines. Widely used techniques such as principal component analysis (PCA) aim to identify dominant trends in one dataset. However, in many settings we have datasets collected under different conditions, e.g., a treatment and a control experiment, and we are interested in visualizing and exploring patterns that are specific to one dataset. This paper proposes a method, contrastive principal component analysis (cPCA), which identifies low-dimensional structures that are enriched in a dataset relative to comparison data. In a wide variety of experiments, we demonstrate that cPCA with a background dataset enables us to visualize dataset-specific patterns missed by PCA and other standard methods. We further provide a geometric interpretation of cPCA and strong mathematical guarantees. An implementation of cPCA is publicly available, and can be used for exploratory data analysis in many applications where PCA is currently used.
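The core computation of cPCA is compact: subtract a scaled background covariance from the target covariance and keep the top eigenvectors. A minimal sketch for a single, fixed contrast parameter alpha (the released implementation additionally explores a range of alpha values):

```python
import numpy as np

def cpca(target, background, alpha=1.0, n_components=2):
    """Contrastive PCA: directions with high target variance but low background variance."""
    target = target - target.mean(axis=0)
    background = background - background.mean(axis=0)
    cov_t = target.T @ target / (len(target) - 1)
    cov_b = background.T @ background / (len(background) - 1)
    # Eigen-decomposition of the contrastive covariance matrix.
    evals, evecs = np.linalg.eigh(cov_t - alpha * cov_b)
    top = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return target @ top            # low-dimensional embedding of the target data

# Toy example: subgroup structure present only in the target, noise shared with background.
rng = np.random.default_rng(0)
shared = rng.normal(0, 3, (200, 10))
specific = np.zeros((200, 10)); specific[:100, 0] += 4.0
embedding = cpca(shared + specific, rng.normal(0, 3, (200, 10)), alpha=2.0)
print(embedding.shape)   # (200, 2)
```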
A fully organic retinal prosthesis restores vision in a rat model of degenerative blindness
NASA Astrophysics Data System (ADS)
Maya-Vetencourt, José Fernando; Ghezzi, Diego; Antognazza, Maria Rosa; Colombo, Elisabetta; Mete, Maurizio; Feyen, Paul; Desii, Andrea; Buschiazzo, Ambra; di Paolo, Mattia; di Marco, Stefano; Ticconi, Flavia; Emionite, Laura; Shmal, Dmytro; Marini, Cecilia; Donelli, Ilaria; Freddi, Giuliano; Maccarone, Rita; Bisti, Silvia; Sambuceti, Gianmario; Pertile, Grazia; Lanzani, Guglielmo; Benfenati, Fabio
2017-06-01
The degeneration of photoreceptors in the retina is one of the major causes of adult blindness in humans. Unfortunately, no effective clinical treatments exist for the majority of retinal degenerative disorders. Here we report on the fabrication and functional validation of a fully organic prosthesis for long-term in vivo subretinal implantation in the eye of Royal College of Surgeons rats, a widely recognized model of retinitis pigmentosa. Electrophysiological and behavioural analyses reveal a prosthesis-dependent recovery of light sensitivity and visual acuity that persists up to 6-10 months after surgery. The rescue of the visual function is accompanied by an increase in the basal metabolic activity of the primary visual cortex, as demonstrated by positron emission tomography imaging. Our results highlight the possibility of developing a new generation of fully organic, highly biocompatible and functionally autonomous photovoltaic prostheses for subretinal implants to treat degenerative blindness.
Fiacconi, Chris M; Milliken, Bruce
2012-08-01
The purpose of the present study was to highlight the role of location-identity binding mismatches in obscuring explicit awareness of a strong contingency. In a spatial-priming procedure, we introduced a high likelihood of location-repeat trials. Experiments 1, 2a, and 2b demonstrated that participants' explicit awareness of this contingency was heavily influenced by the local match in location-identity bindings. In Experiment 3, we sought to determine why location-identity binding mismatches produce such low levels of contingency awareness. Our results suggest that binding mismatches can interfere substantially with visual-memory performance. We attribute the low levels of contingency awareness to participants' inability to remember the critical location-identity binding in the prime on a trial-to-trial basis. These results imply a close interplay between object files and visual working memory.
Novel approach to multispectral image compression on the Internet
NASA Astrophysics Data System (ADS)
Zhu, Yanqiu; Jin, Jesse S.
2000-10-01
Still-image coding techniques such as JPEG have traditionally been applied to intra-plane images, and coding fidelity is commonly used to measure the performance of intra-plane coding methods. In many imaging applications, however, it is increasingly necessary to deal with multi-spectral images, such as color images. In this paper, a novel approach to multi-spectral image compression is proposed that uses transformations among planes to further compress the spectral planes. Moreover, a mechanism for incorporating the human visual system into the transformation is provided to exploit psychovisual redundancy. The new technique for multi-spectral image compression, which is designed to be compatible with the JPEG standard, is demonstrated by extracting correlations among planes based on the human visual system. The scheme achieves a high degree of compactness in data representation and compression.
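One familiar instance of an inter-plane transformation guided by the human visual system is luminance/chrominance decorrelation with chroma subsampling; the sketch below illustrates that general idea and is not the specific transform proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, (64, 64, 3)).astype(np.float64)   # stand-in color image

# Inter-plane transform: RGB -> YCbCr (BT.601). The eye is less sensitive to
# chroma detail, so the Cb/Cr planes can be coded more coarsely than Y.
y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]

# Psychovisual redundancy: subsample the chroma planes 2x in each direction (4:2:0).
cb_sub, cr_sub = cb[::2, ::2], cr[::2, ::2]

raw = rgb.size
kept = y.size + cb_sub.size + cr_sub.size
print(f"samples before: {raw}, after chroma subsampling: {kept} ({kept / raw:.0%})")
```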
Image analysis for microelectronic retinal prosthesis.
Hallum, L E; Cloherty, S L; Lovell, N H
2008-01-01
By way of extracellular stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete luminous spots, so-called phosphenes, in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part of this process, a digital camera, or some other photosensitive array, captures frames, the frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers and their perceptual errors of omission and commission.
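The overlapping-Gaussian-kernel reduction from a high-resolution frame to a coarse grid of phosphene intensities can be sketched as follows; the grid size and kernel width are illustrative assumptions rather than the paper's parameters.

```python
import numpy as np

def phosphene_image(frame, grid=(10, 10), sigma_scale=0.6):
    """Reduce a high-resolution frame to a grid of phosphene intensities using
    overlapping Gaussian kernels centered on each electrode/phosphene site."""
    H, W = frame.shape
    rows, cols = grid
    ys = np.linspace(0, H - 1, rows)
    xs = np.linspace(0, W - 1, cols)
    sigma = sigma_scale * min(H / rows, W / cols)    # kernels overlap their neighbors
    yy, xx = np.mgrid[0:H, 0:W]
    out = np.zeros(grid)
    for i, cy in enumerate(ys):
        for j, cx in enumerate(xs):
            k = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            out[i, j] = (k * frame).sum() / k.sum()  # kernel-weighted mean intensity
    return out

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, (120, 160)).astype(float)  # stand-in camera frame
print(phosphene_image(frame).shape)   # (10, 10) phosphene intensities
```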
Visualizing the semantic structure in classical music works.
Chan, Wing-Yi; Qu, Huamin; Mak, Wai-Ho
2010-01-01
A major obstacle in the appreciation of classical music is that extensive training is required to understand musical structure and compositional techniques toward comprehending the thoughts behind the musical work. In this paper, we propose an innovative visualization solution to reveal the semantic structure in classical orchestral works such that users can gain insights into musical structure and appreciate the beauty of music. We formulate the semantic structure into macrolevel layer interactions, microlevel theme variations, and macro-micro relationships between themes and layers to abstract the complicated construction of a musical composition. The visualization has been applied with success in understanding some classical music works as supported by highly promising user study results with the general audience and very positive feedback from music students and experts, demonstrating its effectiveness in conveying the sophistication and beauty of classical music to novice users with informative and intuitive displays.
Melodic sound enhances visual awareness of congruent musical notes, but only if you can read music.
Lee, Minyoung; Blake, Randolph; Kim, Sujin; Kim, Chai-Youn
2015-07-07
Predictive influences of auditory information on resolution of visual competition were investigated using music, whose visual symbolic notation is familiar only to those with musical training. Results from two experiments using different experimental paradigms revealed that melodic congruence between what is seen and what is heard impacts perceptual dynamics during binocular rivalry. This bisensory interaction was observed only when the musical score was perceptually dominant, not when it was suppressed from awareness, and it was observed only in people who could read music. Results from two ancillary experiments showed that this effect of congruence cannot be explained by differential patterns of eye movements or by differential response sluggishness associated with congruent score/melody combinations. Taken together, these results demonstrate robust audiovisual interaction based on high-level, symbolic representations and its predictive influence on perceptual dynamics during binocular rivalry.
NASA Astrophysics Data System (ADS)
Gan, Zhixing; Zhou, Weiping; Chen, Zhihui; Wang, Huan; Di, Yunsong; Huang, Shisong
2016-11-01
A diphenylalanine (L-Phe-L-Phe, FF)-carbon nitride composite film is designed and fabricated to visualize deep ultraviolet (DUV, 245-290 nm) photons. The FF film, composed of diphenylalanine molecules and doped with carbon nitrides, shows blue emission under excitation by DUV light, which makes the DUV beam observable. Both Förster resonance energy transfer and cascade photon reabsorption contribute to the conversion of photon energy. First, the FF is excited by the DUV photons. On one hand, the energy transfers to the embedded carbon nitrides through nonradiative dipole-dipole coupling. On the other hand, the 284 nm photons emitted by the FF further excite the carbon nitrides, which ultimately emit blue fluorescence. Herein, the experimental demonstration of a simple device for visualizing high DUV fluxes is reported.
Visual Circuit Development Requires Patterned Activity Mediated by Retinal Acetylcholine Receptors
Burbridge, Timothy J.; Xu, Hong-Ping; Ackman, James B.; Ge, Xinxin; Zhang, Yueyi; Ye, Mei-Jun; Zhou, Z. Jimmy; Xu, Jian; Contractor, Anis; Crair, Michael C.
2014-01-01
The elaboration of nascent synaptic connections into highly ordered neural circuits is an integral feature of the developing vertebrate nervous system. In sensory systems, patterned spontaneous activity before the onset of sensation is thought to influence this process, but this conclusion remains controversial largely due to the inherent difficulty recording neural activity in early development. Here, we describe novel genetic and pharmacological manipulations of spontaneous retinal activity, assayed in vivo, that demonstrate a causal link between retinal waves and visual circuit refinement. We also report a decoupling of downstream activity in retinorecipient regions of the developing brain after retinal wave disruption. Significantly, we show that the spatiotemporal characteristics of retinal waves affect the development of specific visual circuits. These results conclusively establish retinal waves as necessary and instructive for circuit refinement in the developing nervous system and reveal how neural circuits adjust to altered patterns of activity prior to experience. PMID:25466916
Méndez-Balbuena, Ignacio; Huidobro, Nayeli; Silva, Mayte; Flores, Amira; Trenado, Carlos; Quintanar, Luis; Arias-Carrión, Oscar; Kristeva, Rumyana; Manjarrez, Elias
2015-10-01
The present investigation documents the electrophysiological occurrence of multisensory stochastic resonance in the human visual pathway elicited by tactile noise. We define multisensory stochastic resonance of brain evoked potentials as the phenomenon in which an intermediate level of input noise in one sensory modality enhances the brain evoked response of another sensory modality. Here we examined this phenomenon in visual evoked potentials (VEPs) modulated by the addition of tactile noise. Specifically, we examined whether a particular level of mechanical Gaussian noise applied to the index finger can improve the amplitude of the VEP. We compared the amplitude of the positive P100 VEP component between zero noise (ZN), optimal noise (ON), and high mechanical noise (HN) conditions. The data showed an inverted U-shaped relationship for all subjects, demonstrating the occurrence of multisensory stochastic resonance in the P100 VEP. Copyright © 2015 the American Physiological Society.
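The inverted-U signature of stochastic resonance can be reproduced in a toy threshold detector: a subthreshold periodic input is detected best at an intermediate noise level. A minimal sketch (not a model of the VEP data):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f_sig = 1000, 5.0, 5.0                    # Hz, seconds, signal frequency (Hz)
t = np.arange(0, dur, 1 / fs)
signal = 0.8 * np.sin(2 * np.pi * f_sig * t)       # subthreshold input (threshold = 1)
threshold = 1.0

noise_levels = np.linspace(0.05, 2.0, 20)
quality = []
for sigma in noise_levels:
    out = ((signal + rng.normal(0, sigma, t.size)) > threshold).astype(float)
    if out.std() == 0:                             # no threshold crossings at all
        quality.append(0.0)
    else:
        # Correlation of the detector output with the clean signal as a quality index.
        quality.append(np.corrcoef(out, signal)[0, 1])

best = noise_levels[int(np.argmax(quality))]
print(f"detection quality peaks at noise SD ~ {best:.2f} (inverted-U over noise level)")
```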
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
A fully organic retinal prosthesis restores vision in a rat model of degenerative blindness
Antognazza, Maria Rosa; Colombo, Elisabetta; Mete, Maurizio; Feyen, Paul; Desii, Andrea; Buschiazzo, Ambra; Di Paolo, Mattia; Di Marco, Stefano; Ticconi, Flavia; Emionite, Laura; Shmal, Dmytro; Marini, Cecilia; Donelli, Ilaria; Freddi, Giuliano; Maccarone, Rita; Bisti, Silvia; Sambuceti, Gianmario; Pertile, Grazia; Lanzani, Guglielmo; Benfenati, Fabio
2017-01-01
The degeneration of photoreceptors in the retina is one of the major causes of adult blindness in humans. Unfortunately, no effective clinical treatments exist for the majority of retinal degenerative disorders. Here we report on the fabrication and functional validation of a fully organic prosthesis for long-term in vivo subretinal implantation in the eye of Royal College of Surgeons rats, a widely recognized model of Retinitis pigmentosa. Electrophysiological and behavioral analyses reveal a prosthesis-dependent recovery of light-sensitivity and visual acuity that persists up to 6-10 months after surgery. The rescue of the visual function is accompanied by an increase in the basal metabolic activity of the primary visual cortex, as demonstrated by positron emission tomography imaging. Our results highlight the possibility of developing a new generation of fully organic, highly biocompatible and functionally autonomous photovoltaic prostheses for subretinal implants to treat degenerative blindness. PMID:28250420
Fluorescence imaging of chromosomal DNA using click chemistry
NASA Astrophysics Data System (ADS)
Ishizuka, Takumi; Liu, Hong Shan; Ito, Kenichiro; Xu, Yan
2016-09-01
Chromosome visualization is essential for chromosome analysis and genetic diagnostics. Here, we developed a click chemistry approach for multicolor imaging of chromosomal DNA instead of the traditional dye method. We first demonstrated that the commercially available reagents allow for the multicolor staining of chromosomes. We then prepared two pro-fluorophore moieties that served as light-up reporters to stain chromosomal DNA based on click reaction and visualized the clear chromosomes in multicolor. We applied this strategy in fluorescence in situ hybridization (FISH) and identified, with high sensitivity and specificity, telomere DNA at the end of the chromosome. We further extended this approach to observe several basic stages of cell division. We found that the click reaction enables direct visualization of the chromosome behavior in cell division. These results suggest that the technique can be broadly used for imaging chromosomes and may serve as a new approach for chromosome analysis and genetic diagnostics.
Versatile design and synthesis platform for visualizing genomes with Oligopaint FISH probes
Beliveau, Brian J.; Joyce, Eric F.; Apostolopoulos, Nicholas; Yilmaz, Feyza; Fonseka, Chamith Y.; McCole, Ruth B.; Chang, Yiming; Li, Jin Billy; Senaratne, Tharanga Niroshini; Williams, Benjamin R.; Rouillard, Jean-Marie; Wu, Chao-ting
2012-01-01
A host of observations demonstrating the relationship between nuclear architecture and processes such as gene expression have led to a number of new technologies for interrogating chromosome positioning. Whereas some of these technologies reconstruct intermolecular interactions, others have enhanced our ability to visualize chromosomes in situ. Here, we describe an oligonucleotide- and PCR-based strategy for fluorescence in situ hybridization (FISH) and a bioinformatic platform that enables this technology to be extended to any organism whose genome has been sequenced. The oligonucleotide probes are renewable, highly efficient, and able to robustly label chromosomes in cell culture, fixed tissues, and metaphase spreads. Our method gives researchers precise control over the sequences they target and allows for single and multicolor imaging of regions ranging from tens of kilobases to megabases with the same basic protocol. We anticipate this technology will lead to an enhanced ability to visualize interphase and metaphase chromosomes. PMID:23236188
Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.
Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe
2018-06-02
This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is proposed using a multi-task learning framework, which involves restraining the variability of different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results on practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.
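The abstract above does not give implementation details; the following is a minimal sketch of a generic extreme learning machine classifier applied to hypothetical region features, not the paper's multi-task formulation. All data, feature dimensions, and parameter values are made up.

```python
import numpy as np

def train_elm(X, y, n_hidden=200, reg=1e-2, rng=np.random.default_rng(0)):
    """Basic extreme learning machine: random hidden layer, ridge-regressed output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Hypothetical region features (e.g., texture/edge statistics) and crack labels.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 12))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] > 0).astype(float)  # toy ground truth
W, b, beta = train_elm(X_train, y_train)
scores = elm_predict(X_train, W, b, beta)
print("training accuracy:", ((scores > 0.5) == y_train).mean())
```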
Fleming Beattie, Julia; Martin, Roy C; Kana, Rajesh K; Deshpande, Hrishikesh; Lee, Seongtaek; Curé, Joel; Ver Hoef, Lawrence
2017-07-01
While the hippocampus has long been identified as a structure integral to memory, the relationship between morphology and function has yet to be fully explained. We present an analysis of hippocampal dentation, a morphological feature previously unexplored in regard to its relationship with episodic memory. "Hippocampal dentation" in this case refers to surface convolutions, primarily present in the CA1/subiculum on the inferior aspect of the hippocampus. Hippocampal dentation was visualized using ultra-high resolution structural MRI and evaluated using a novel visual rating scale. The degree of hippocampal dentation was found to vary considerably across individuals, and was positively associated with verbal memory recall and visual memory recognition in a sample of 22 healthy adults. This study is the first to characterize the variation in hippocampal dentation in a healthy cohort and to demonstrate its association with aspects of episodic memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Masters, Michael; Bruner, Emiliano; Queer, Sarah; Traynor, Sarah; Senjem, Jess
2015-01-01
Recent research on the visual system has focused on investigating the relationship among eye (ocular), orbital, and visual cortical anatomy in humans. This issue is relevant in evolutionary and medical fields. In terms of evolution, only in modern humans and Neandertals are the orbits positioned beneath the frontal lobes, with consequent structural constraints. In terms of medicine, such constraints can be associated with minor deformation of the eye, vision defects, and patterns of integration among these features, and in association with the frontal lobes, are important to consider in reconstructive surgery. Further study is therefore necessary to establish how these variables are related, and to what extent ocular size is associated with orbital and cerebral cortical volumes. Relationships among these anatomical components were investigated using magnetic resonance images from a large sample of 83 individuals, which also included each subject’s body height, age, sex, and uncorrected visual acuity score. Occipital and frontal gyri volumes were calculated using two different cortical parcellation tools in order to provide a better understanding of how the eye and orbit vary in relation to visual cortical gyri, and frontal cortical gyri which are not directly related to visual processing. Results indicated that ocular and orbital volumes were weakly correlated, and that eye volume explains only a small proportion of the variance in orbital volume. Ocular and orbital volumes were also found to be equally and, in most cases, more highly correlated with five frontal lobe gyri than with occipital lobe gyri associated with V1, V2, and V3 of the visual cortex. Additionally, after accounting for age and sex variation, the relationship between ocular and total visual cortical volume was no longer statistically significant, but remained significantly related to total frontal lobe volume. The relationship between orbital and visual cortical volumes remained significant for a number of occipital lobe gyri even after accounting for these cofactors, but was again found to be more highly correlated with the frontal cortex than with the occipital cortex. These results indicate that eye volume explains only a small amount of variation in orbital and visual cortical volume, and that the eye and orbit are generally more structurally associated with the frontal lobes than they are functionally associated with the visual cortex of the occipital lobes. Results also demonstrate that these components of the visual system are highly complex and influenced by a multitude of factors in humans. PMID:26250048
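A worked sketch of the kind of covariate-adjusted correlation described above (regressing age and sex out of both variables before computing Pearson's r); the data are synthetic and the variable names are assumptions, not the study's actual pipeline.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation between x and y after regressing out the covariates."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    resid = lambda v: v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
    rx, ry = resid(x), resid(y)
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical per-subject measurements (n = 83, as in the sample described above).
rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 83)
sex = rng.integers(0, 2, 83).astype(float)
ocular_vol = 6.5 + 0.002 * age + rng.normal(0, 0.3, 83)
visual_cortex_vol = 40 - 0.05 * age + rng.normal(0, 2.0, 83)

print("zero-order r:", np.corrcoef(ocular_vol, visual_cortex_vol)[0, 1])
print("partial r (age, sex removed):",
      partial_corr(ocular_vol, visual_cortex_vol, np.column_stack([age, sex])))
```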
Optical Imaging with a High Resolution Microendoscope to Identify Cholesteatoma of the Middle Ear
Levy, Lauren L.; Jiang, Nancy; Smouha, Eric; Richards-Kortum, Rebecca; Sikora, Andrew G.
2013-01-01
Objective High resolution optical imaging is an imaging modality which allows visualization of structural changes in epithelial tissue in real time. Our prior studies using contrast-enhanced microendoscopy to image squamous cell carcinoma in the head and neck demonstrated that the contrast agent, proflavine, has high affinity for keratinized tissue. Thus, high-resolution microendoscopy with proflavine provides a potential mechanism to identify ectopic keratin production, such as that associated with cholesteatoma formation, and to distinguish between uninvolved mucosa and residual keratin at the time of surgery. Study Design Ex vivo imaging of histopathologically-confirmed samples of cholesteatoma and uninvolved middle-ear epithelium. Methods Seven separate specimens collected from patients who underwent surgical treatment for cholesteatoma were imaged ex vivo with the fiberoptic endoscope after surface staining with proflavine. Following imaging, the specimens were submitted for hematoxylin & eosin staining to allow histopathological correlation. Results Cholesteatoma and surrounding middle ear epithelium have distinct imaging characteristics. Keratin-bearing areas of cholesteatoma lack nuclei and appear as confluent hyperfluorescence, while nuclei are easily visualized in specimens containing normal middle ear epithelium. Hyperfluorescence and loss of cellular detail are the imaging hallmarks of keratin, allowing for discrimination of cholesteatoma from normal middle ear epithelium. Conclusions This study demonstrates the feasibility of high-resolution optical imaging to discriminate cholesteatoma from uninvolved middle ear mucosa, based on the unique staining properties of keratin. Use of real-time imaging may facilitate more complete extirpation of cholesteatoma by identifying areas of residual disease. PMID:23299781
High Peak Power Microwaves: A Health Hazard
1993-12-01
…activity and/or neural transmission. For example, we have reported electromagnetically induced effects such as corneal endothelial lesions, increased permeability of the iris vasculature, altered retinal electrophysiologic activity (visual function), and histopathological changes. The electromagnetic environment can also alter a drug's action, as has been demonstrated with the anticholinesterase drug physostigmine.
Huang, Xiaojing; Lauer, Kenneth; Clark, Jesse N.; ...
2015-03-13
We report an experimental ptychography measurement performed in fly-scan mode. With a visible-light laser source, we demonstrate a 5-fold reduction of data acquisition time. By including multiple mutually incoherent modes into the incident illumination, high quality images were successfully reconstructed from blurry diffraction patterns. Thus, this approach significantly increases the throughput of ptychography, especially for three-dimensional applications and the visualization of dynamic systems.
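As a hedged illustration of why fly-scan data look blurred and why mutually incoherent probe modes can model them, the toy forward model below integrates far-field intensities over several probe positions covered during one exposure; the object, probe, and scan geometry are invented, and this is not the authors' reconstruction code.

```python
import numpy as np

# Toy forward model: during a fly scan the detector integrates while the probe
# moves, so the recorded pattern is an incoherent sum over positions along the
# sub-scan path -- equivalently, a mixture of mutually incoherent probe modes.
rng = np.random.default_rng(0)
N = 128
obj = np.exp(1j * rng.normal(0, 0.3, (N, N)))          # weak phase object
yy, xx = np.mgrid[:N, :N]
probe = np.exp(-(((xx - N // 2) ** 2 + (yy - N // 2) ** 2) / (2 * 8.0 ** 2)))

def pattern(shift_x):
    """Far-field intensity for the probe displaced by shift_x pixels."""
    p = np.roll(probe, shift_x, axis=1)
    return np.abs(np.fft.fftshift(np.fft.fft2(p * obj))) ** 2

# Blurry fly-scan measurement: integrate over 5 positions covered in one exposure.
fly_scan_pattern = sum(pattern(s) for s in range(5)) / 5.0
print(fly_scan_pattern.shape)
```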
NASA Astrophysics Data System (ADS)
Eichenlaub, Jesse B.
2005-03-01
The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high speed microdisplay to create cones of light that converge at different distances to form the voxels of a high resolution space filling image. A bench model display was built and a series of visual tests were performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.
NASA Astrophysics Data System (ADS)
Harris, E.
Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, U. of Colorado at Boulder. History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center, using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.
Cant, Jonathan S; Xu, Yaoda
2015-11-01
Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Intracranial meningioma causing partial amaurosis in a cat.
Goulle, Frédéric; Meige, Frédéric; Durieux, Franck; Malet, Christophe; Toulza, Olivier; Isard, Pierre-François; Peiffer, Robert L; Dulaurent, Thomas
2011-09-01
To describe a case of intracranial meningioma causing visual impairment in a cat, successfully treated by surgery. An adult neutered male domestic cat was referred with a 10-month history of progressive visual impairment and altered behavior. Investigations included physical, ophthalmologic and neurological examinations as well as hematology, serum biochemistry and CT scan of the head. The menace response was absent in the left eye and decreased in the right eye. Electroretinograms were normal on both eyes, as was ophthalmic examination, ruling out an ocular cause and allowing a presumptive diagnosis of partial amaurosis due to a post-retinal lesion. CT scan demonstrated a large sessile extra axial mass along the right parietal bone and thickening of the adjacent bone. Cerebrospinal fluid was not collected because high intracranial pressure represented a risk for brain herniation. A right rostrotentorial craniectomy was performed to remove the tumor. Ten days after surgery, vision was improved, neurological examination was normal and normal behavior was restored. Ten months after surgery, ophthalmological examination showed no visual deficit and CT scan did not reveal any sign of recurrence. Advanced imaging techniques allow veterinarians to detect early cerebral diseases and to provide specific treatment when it is possible. In cases of feline amaurosis due to intracranial meningioma, the vital prognosis is good while the visual prognosis is more uncertain, but recovery of normal vision and normal behavior is possible as demonstrated in the present case. © 2011 American College of Veterinary Ophthalmologists.
Visualizing and Clustering Protein Similarity Networks: Sequences, Structures, and Functions.
Mai, Te-Lun; Hu, Geng-Ming; Chen, Chi-Ming
2016-07-01
Research in the recent decade has demonstrated the usefulness of protein network knowledge in furthering the study of molecular evolution of proteins, understanding the robustness of cells to perturbation, and annotating new protein functions. In this study, we aimed to provide a general clustering approach to visualize the sequence-structure-function relationship of protein networks, and investigate possible causes for inconsistency in the protein classifications based on sequences, structures, and functions. Such visualization of protein networks could facilitate our understanding of the overall relationship among proteins and help researchers comprehend various protein databases. As a demonstration, we clustered 1437 enzymes by their sequences and structures using the minimum span clustering (MSC) method. The general structure of this protein network was delineated at two clustering resolutions, and the second level MSC clustering was found to be highly similar to existing enzyme classifications. The clustering of these enzymes based on sequence, structure, and function information is consistent with each other. For proteases, the Jaccard's similarity coefficient is 0.86 between sequence and function classifications, 0.82 between sequence and structure classifications, and 0.78 between structure and function classifications. From our clustering results, we discussed possible examples of divergent evolution and convergent evolution of enzymes. Our clustering approach provides a panoramic view of the sequence-structure-function network of proteins, helps visualize the relation between related proteins intuitively, and is useful in predicting the structure and function of newly determined protein sequences.
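For readers unfamiliar with the pair-counting form of Jaccard's similarity coefficient used above to compare classifications, here is a small sketch under the assumption that each clustering is given as a flat label list; the enzyme data are invented.

```python
from itertools import combinations

def pairwise_jaccard(labels_a, labels_b):
    """Jaccard coefficient between two clusterings of the same items,
    computed over pairs of items that are co-clustered."""
    together_a, together_b = set(), set()
    for i, j in combinations(range(len(labels_a)), 2):
        if labels_a[i] == labels_a[j]:
            together_a.add((i, j))
        if labels_b[i] == labels_b[j]:
            together_b.add((i, j))
    union = together_a | together_b
    return len(together_a & together_b) / len(union) if union else 1.0

# Hypothetical example: sequence-based vs. function-based cluster labels for 8 enzymes.
seq_labels  = [0, 0, 0, 1, 1, 2, 2, 2]
func_labels = [0, 0, 1, 1, 1, 2, 2, 2]
print(pairwise_jaccard(seq_labels, func_labels))
```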
NASA Technical Reports Server (NTRS)
Hof, P. R.; Vogt, B. A.; Bouras, C.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)
1997-01-01
In recent years, the existence of visual variants of Alzheimer's disease characterized by atypical clinical presentation at onset has been increasingly recognized. In many of these cases post-mortem neuropathological assessment revealed that correlations could be established between clinical symptoms and the distribution of neurodegenerative lesions. We have analyzed a series of Alzheimer's disease patients presenting with prominent visual symptomatology as a cardinal sign of the disease. In these cases, a shift in the distribution of pathological lesions was observed such that the primary visual areas and certain visual association areas within the occipito-parieto-temporal junction and posterior cingulate cortex had very high densities of lesions, whereas the prefrontal cortex had fewer lesions than usually observed in Alzheimer's disease. Previous quantitative analyses have demonstrated that in Alzheimer's disease, primary sensory and motor cortical areas are less damaged than the multimodal association areas of the frontal and temporal lobes, as indicated by the laminar and regional distribution patterns of neurofibrillary tangles and senile plaques. The distribution of pathological lesions in the cerebral cortex of Alzheimer's disease cases with visual symptomatology revealed that specific visual association pathways were disrupted, whereas these particular connections are likely to be affected to a less severe degree in the more common form of Alzheimer's disease. These data suggest that in some cases with visual variants of Alzheimer's disease, the neurological symptomatology may be related to the loss of certain components of the cortical visual pathways, as reflected by the particular distribution of the neuropathological markers of the disease.
Rim, Tyler H T; Nam, Jae S; Choi, Moonjung; Lee, Sung C; Lee, Christopher S
2014-06-01
To describe the age- and gender-specific prevalence and risk factors of visual impairment and blindness in Korea. From 2008 to 2010, a total of 14 924 randomly selected, nationally representative participants of the Korea National Health and Nutrition Examination Survey underwent additional ophthalmologic examinations by the Korean Ophthalmologic Society. Best Corrected Distance Visual Acuity was measured using an international standard vision chart based on the Snellen scale (Jin's vision chart). Independent risk factors for visual impairment were investigated using multivariate logistic regression analysis. The overall prevalence of visual impairment (≤20/40) in adults 40 years and older was 4.1% (95% CI, 3.6-4.6) based on the better-seeing eye. The overall prevalence of blindness (≤20/200) for adults 40 years and older was 0.2% (95% CI, 0.1-0.3). Risk indicators of visual impairment were increasing age, low education status, living in a rural area, being unemployed, being without a spouse and the absence of private health insurance. The visually impaired were more likely to have eye diseases compared with the normal subjects, and they were less likely to utilize eye care. The prevalence of visual impairment was higher than, while that of blindness was similar to, that reported in previous population studies in Asia or the U.S. Sociodemographic disparities are present in the prevalence of visual impairment, and more targeted efforts are needed to promote vision screening in high-risk groups. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
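A minimal sketch of the kind of multivariate logistic regression described above, fit to synthetic survey-style data; the covariates, effect sizes, and the use of scikit-learn are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey-style data: predict visual impairment (VA <= 20/40 in the
# better eye) from sociodemographic covariates.  Data and effect sizes are made up.
rng = np.random.default_rng(0)
n = 5000
age = rng.uniform(40, 90, n)
rural = rng.integers(0, 2, n)
low_education = rng.integers(0, 2, n)
logit = -8 + 0.08 * age + 0.4 * rural + 0.5 * low_education
impaired = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, rural, low_education])
model = LogisticRegression().fit(X, impaired)
odds_ratios = np.exp(model.coef_[0])   # per-unit odds ratios for each covariate
print(dict(zip(["age (per year)", "rural", "low education"], odds_ratios.round(2))))
```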
Park, Joonkoo; Chiang, Crystal; Brannon, Elizabeth M.; Woldorff, Marty G.
2014-01-01
Recent functional magnetic resonance imaging research has demonstrated that letters and numbers are preferentially processed in distinct regions and hemispheres in the visual cortex. In particular, the left visual cortex preferentially processes letters compared to numbers, while the right visual cortex preferentially processes numbers compared to letters. Because letters and numbers are cultural inventions and are otherwise physically arbitrary, such a double dissociation is strong evidence for experiential effects on neural architecture. Here, we use the high temporal resolution of event-related potentials (ERPs) to investigate the temporal dynamics of the neural dissociation between letters and numbers. We show that the divergence between ERP traces to letters and numbers emerges very early in processing. Letters evoked greater N1 waves (latencies 140–170 ms) than did numbers over left occipital channels, while numbers evoked greater N1s than letters over the right, suggesting letters and numbers are preferentially processed in opposite hemispheres early in visual encoding. Moreover, strings of letters, but not single letters, elicited greater P2 ERP waves, (starting around 250 ms) than numbers did over the left hemisphere, suggesting that the visual cortex is tuned to selectively process combinations of letters, but not numbers, further along in the visual processing stream. Additionally, the processing of both of these culturally defined stimulus types differentiated from similar but unfamiliar visual stimulus forms (false fonts) even earlier in the processing stream (the P1 at 100 ms). These findings imply major cortical specialization processes within the visual system driven by experience with reading and mathematics. PMID:24669789
Park, Joonkoo; Chiang, Crystal; Brannon, Elizabeth M; Woldorff, Marty G
2014-10-01
Recent fMRI research has demonstrated that letters and numbers are preferentially processed in distinct regions and hemispheres in the visual cortex. In particular, the left visual cortex preferentially processes letters compared with numbers, whereas the right visual cortex preferentially processes numbers compared with letters. Because letters and numbers are cultural inventions and are otherwise physically arbitrary, such a double dissociation is strong evidence for experiential effects on neural architecture. Here, we use the high temporal resolution of ERPs to investigate the temporal dynamics of the neural dissociation between letters and numbers. We show that the divergence between ERP traces to letters and numbers emerges very early in processing. Letters evoked greater N1 waves (latencies 140-170 msec) than did numbers over left occipital channels, whereas numbers evoked greater N1s than letters over the right, suggesting letters and numbers are preferentially processed in opposite hemispheres early in visual encoding. Moreover, strings of letters, but not single letters, elicited greater P2 ERP waves (starting around 250 msec) than numbers did over the left hemisphere, suggesting that the visual cortex is tuned to selectively process combinations of letters, but not numbers, further along in the visual processing stream. Additionally, the processing of both of these culturally defined stimulus types differentiated from similar but unfamiliar visual stimulus forms (false fonts) even earlier in the processing stream (the P1 at 100 msec). These findings imply major cortical specialization processes within the visual system driven by experience with reading and mathematics.
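A short sketch of how an N1 window amplitude of the sort reported above (140-170 msec over occipital channels) might be computed from epoched data; the epochs here are random stand-ins, so the printed values only illustrate the computation, not the study's results.

```python
import numpy as np

def mean_window_amplitude(epochs, times, t0=0.140, t1=0.170):
    """Mean amplitude of the trial-averaged ERP in the N1 latency window (140-170 msec)."""
    erp = epochs.mean(axis=0)                 # average across trials
    mask = (times >= t0) & (times <= t1)
    return erp[mask].mean()

# Hypothetical epochs: (n_trials, n_samples) at a single left-occipital channel.
rng = np.random.default_rng(0)
times = np.linspace(-0.1, 0.5, 300)
letters_left = rng.normal(-2.0, 1.0, (80, times.size))   # stand-in data
numbers_left = rng.normal(-1.0, 1.0, (80, times.size))
n1_letters = mean_window_amplitude(letters_left, times)
n1_numbers = mean_window_amplitude(numbers_left, times)
print("left-occipital N1 window mean, letters vs. numbers:", n1_letters, n1_numbers)
```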
Visual imagery and functional connectivity in blindness: a single-case study
Boucard, Christine C.; Rauschecker, Josef P.; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark
2016-01-01
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported to strongly rely on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input. PMID:25690326
Visual imagery and functional connectivity in blindness: a single-case study.
Boucard, Christine C; Rauschecker, Josef P; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark
2016-05-01
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported to strongly rely on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input.
High-resolution imaging of the supercritical antisolvent process
NASA Astrophysics Data System (ADS)
Bell, Philip W.; Stephens, Amendi P.; Roberts, Christopher B.; Duke, Steve R.
2005-06-01
A high-magnification and high-resolution imaging technique was developed for the supercritical fluid antisolvent (SAS) precipitation process. Visualizations of the jet injection, flow patterns, droplets, and particles were obtained in a high-pressure vessel for polylactic acid and budesonide precipitation in supercritical CO2. The results show two regimes for particle production: one where turbulent mixing occurs in gas-like plumes, and another where distinct droplets were observed in the injection. Images are presented to demonstrate the capabilities of the method for examining particle formation theories and for understanding the underlying fluid mechanics, thermodynamics, and mass transport in the SAS process.
The pedagogical toolbox: computer-generated visual displays, classroom demonstration, and lecture.
Bockoven, Jerry
2004-06-01
This analogue study compared the effectiveness of computer-generated visual displays, classroom demonstration, and traditional lecture as methods of instruction used to teach neuronal structure and processes. A total of 116 randomly assigned undergraduate students participated in 1 of 3 classrooms in which they experienced the same content but different teaching approaches presented by 3 different student-instructors. Participants then completed a survey of their subjective reactions and a measure of factual information designed to evaluate objective learning outcomes. Participants repeated this factual measure 5 weeks later. Results call into question the use of classroom demonstration methods as well as the trend towards devaluing traditional lecture in favor of computer-generated visual display.
2016-01-01
Protein metabolism, consisting of both synthesis and degradation, is highly complex, playing an indispensable regulatory role throughout physiological and pathological processes. Over recent decades, extensive efforts, using approaches such as autoradiography, mass spectrometry, and fluorescence microscopy, have been devoted to the study of protein metabolism. However, noninvasive and global visualization of protein metabolism has proven to be highly challenging, especially in live systems. Recently, stimulated Raman scattering (SRS) microscopy coupled with metabolic labeling of deuterated amino acids (D-AAs) was demonstrated for use in imaging newly synthesized proteins in cultured cell lines. Herein, we significantly generalize this notion to develop a comprehensive labeling and imaging platform for live visualization of complex protein metabolism, including synthesis, degradation, and pulse–chase analysis of two temporally defined populations. First, the deuterium labeling efficiency was optimized, allowing time-lapse imaging of protein synthesis dynamics within individual live cells with high spatial–temporal resolution. Second, by tracking the methyl group (CH3) distribution attributed to pre-existing proteins, this platform also enables us to map protein degradation inside live cells. Third, using two subsets of structurally and spectroscopically distinct D-AAs, we achieved two-color pulse–chase imaging, as demonstrated by observing aggregate formation of mutant huntingtin proteins. Finally, going beyond simple cell lines, we demonstrated the imaging ability of protein synthesis in brain tissues, zebrafish, and mice in vivo. Hence, the presented labeling and imaging platform would be a valuable tool to study complex protein metabolism with high sensitivity, resolution, and biocompatibility for a broad spectrum of systems ranging from cells to model animals and possibly to humans. PMID:25560305
Baweja, Harsimran S.; Patel, Bhavini K.; Neto, Osmar P.; Christou, Evangelos A.
2011-01-01
The purpose of this study was to compare force variability and the neural activation of the agonist muscle during constant isometric contractions at different force levels when the amplitude of respiration and visual feedback were varied. Twenty young adults (20–32 years, 10 men and 10 women) were instructed to accurately match a target force at 15 and 50% of their maximal voluntary contraction (MVC) with abduction of the index finger while controlling their respiration at different amplitudes (85, 100 and 125% normal) in the presence and absence of visual feedback. Each trial lasted 22 s and visual feedback was removed from 8–12 to 16–20 s. Each subject performed 3 trials with each respiratory condition at each force level. Force variability was quantified as the standard deviation of the detrended force data. The neural activation of the first dorsal interosseus (FDI) was measured with bipolar surface electrodes placed distal to the innervation zone. Relative to normal respiration, force variability increased significantly only during high-amplitude respiration (~63%). The increase in force variability from normal- to high-amplitude respiration was strongly associated with amplified force oscillations from 0–3 Hz (R2 ranged from .68 – .84; p < .001). Furthermore, the increase in force variability was exacerbated in the presence of visual feedback at 50% MVC (vision vs. no-vision: .97 vs. .87 N) and was strongly associated with amplified force oscillations from 0–1 Hz (R2 = .82) and weakly associated with greater power from 12–30 Hz (R2 = .24) in the EMG of the agonist muscle. Our findings demonstrate that high-amplitude respiration and visual feedback of force interact and amplify force variability in young adults during moderate levels of effort. PMID:21546109
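To make the two quantities in the abstract concrete, the sketch below computes the standard deviation of the detrended force and the 0-3 Hz spectral power from a simulated force trace; the sampling rate, trace shape, and the use of Welch's method are assumptions, not the study's analysis code.

```python
import numpy as np
from scipy.signal import detrend, welch

def force_variability(force, fs):
    """SD of the detrended force and low-frequency (0-3 Hz) spectral power."""
    f_detrended = detrend(force)              # remove the linear trend, as described above
    sd = f_detrended.std()
    freqs, psd = welch(f_detrended, fs=fs, nperseg=min(len(force), 1024))
    band = (freqs >= 0) & (freqs <= 3)
    low_band_power = psd[band].sum() * (freqs[1] - freqs[0])
    return sd, low_band_power

# Hypothetical 22-s isometric force trace sampled at 1 kHz.
fs = 1000
t = np.arange(0, 22, 1 / fs)
force = (15 + 0.01 * t + 0.3 * np.sin(2 * np.pi * 1.5 * t)
         + np.random.default_rng(0).normal(0, 0.1, t.size))
print(force_variability(force, fs))
```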
Perceptual organization and visual attention.
Kimchi, Ruth
2009-01-01
Perceptual organization--the processes structuring visual information into coherent units--and visual attention--the processes by which some visual information in a scene is selected--are crucial for the perception of our visual environment and to visuomotor behavior. Recent research points to important relations between attentional and organizational processes. Several studies demonstrated that perceptual organization constrains attentional selectivity, and other studies suggest that attention can also constrain perceptual organization. In this chapter I focus on two aspects of the relationship between perceptual organization and attention. The first addresses the question of whether or not perceptual organization can take place without attention. I present findings demonstrating that some forms of grouping and figure-ground segmentation can occur without attention, whereas others require controlled attentional processing, depending on the processes involved and the conditions prevailing for each process. These findings challenge the traditional view, which assumes that perceptual organization is a unitary entity that operates preattentively. The second issue addresses the question of whether perceptual organization can affect the automatic deployment of attention. I present findings showing that the mere organization of some elements in the visual field by Gestalt factors into a coherent perceptual unit (an "object"), with no abrupt onset or any other unique transient, can capture attention automatically in a stimulus-driven manner. Taken together, the findings discussed in this chapter demonstrate the multifaceted, interactive relations between perceptual organization and visual attention.
Graphical approach for multiple values logic minimization
NASA Astrophysics Data System (ADS)
Awwal, Abdul Ahad S.; Iftekharuddin, Khan M.
1999-03-01
Multiple valued logic (MVL) is sought for designing high complexity, highly compact, parallel digital circuits. However, the practical realization of an MVL-based system is dependent on optimization of cost, which directly affects the optical setup. We propose a minimization technique for MVL logic optimization based on graphical visualization, such as a Karnaugh map. The proposed method is utilized to solve signed-digit binary and trinary logic minimization problems. The usefulness of the minimization technique is demonstrated for the optical implementation of MVL circuits.
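As a small illustration of the Karnaugh-map layout that underlies the graphical minimization mentioned above (the paper's actual signed-digit and trinary procedure is not reproduced here), the sketch arranges a truth table in Gray-code order so that adjacent cells differ in exactly one variable; the example function is invented.

```python
def gray(n):
    """Gray-code sequence of length 2**n, used to order Karnaugh-map rows/columns."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

def karnaugh_map(truth, n_row_vars=2, n_col_vars=2):
    """Arrange a truth table (dict: input tuple -> output value) as a Karnaugh map."""
    rows, cols = gray(n_row_vars), gray(n_col_vars)
    def bits(v, n):
        return tuple((v >> (n - 1 - k)) & 1 for k in range(n))
    return [[truth[bits(r, n_row_vars) + bits(c, n_col_vars)] for c in cols] for r in rows]

# Hypothetical 4-input function (parity); 0/1 here, but the same layout generalizes to
# multi-valued (e.g., trinary) outputs by letting the dict hold more symbols.
truth = {tuple((i >> k) & 1 for k in (3, 2, 1, 0)): int(bin(i).count("1") % 2)
         for i in range(16)}
for row in karnaugh_map(truth):
    print(row)
```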
Familiar route loyalty implies visual pilotage in the homing pigeon
Biro, Dora; Meade, Jessica; Guilford, Tim
2004-01-01
Wide-ranging animals, such as birds, regularly traverse large areas of the landscape efficiently in the course of their local movement patterns, which raises fundamental questions about the cognitive mechanisms involved. By using precision global-positioning-system loggers, we show that homing pigeons (Columba livia) not only come to rely on highly stereotyped yet surprisingly inefficient routes within the local area but are attracted directly back to their individually preferred routes even when released from novel sites off-route. This precise route loyalty demonstrates a reliance on familiar landmarks throughout the flight, which was unexpected under current models of avian navigation. We discuss how visual landmarks may be encoded as waypoints within familiar route maps. PMID:15572457
Sounds of silence: How to animate virtual worlds with sound
NASA Technical Reports Server (NTRS)
Astheimer, Peter
1993-01-01
Sounds are an integral and sometimes annoying part of our daily life. Virtual worlds which imitate natural environments gain a lot of authenticity from fast, high quality visualization combined with sound effects. Sounds help to increase the degree of immersion for human dwellers in imaginary worlds significantly. The virtual reality toolkit of IGD (Institute for Computer Graphics) features a broad range of standard visual and advanced real-time audio components which interpret an object-oriented definition of the scene. The virtual reality system 'Virtual Design' realized with the toolkit enables the designer of virtual worlds to create a true audiovisual environment. Several examples on video demonstrate the usage of the audio features in Virtual Design.
Association of Axial Length With Risk of Uncorrectable Visual Impairment for Europeans With Myopia.
Tideman, J Willem L; Snabel, Margaretha C C; Tedja, Milly S; van Rijn, Gwyneth A; Wong, King T; Kuijpers, Robert W A M; Vingerling, Johannes R; Hofman, Albert; Buitendijk, Gabriëlle H S; Keunen, Jan E E; Boon, Camiel J F; Geerards, Annette J M; Luyten, Gregorius P M; Verhoeven, Virginie J M; Klaver, Caroline C W
2016-12-01
Myopia (ie, nearsightedness) is becoming the most common eye disorder to cause blindness in younger persons in many parts of the world. Visual impairment due to myopia is associated with structural changes of the retina and the globe because of elongation of the eye axis. How axial length-a sum of the anterior chamber depth, lens thickness, and vitreous chamber depth-and myopia relate to the development of visual impairment over time is unknown. To evaluate the association between axial length, spherical equivalent, and the risk of visual impairment and to make projections of visual impairment for regions with high prevalence rates. This cross-sectional study uses population-based data from the Rotterdam Study I (1990 to 1993), II (2000 to 2002), and III (2006 to 2008) and the Erasmus Rucphen Family Study (2002 to 2005) as well as case-control data from the Myopia Study (2010 to 2012) from the Netherlands. In total, 15 404 individuals with data on spherical equivalent and 9074 individuals with data on axial length were included in the study; right eyes were used for analyses. Data were analyzed from September 2014 to May 2016. Visual impairment and blindness (defined according to the World Health Organization criteria as a visual acuity less than 0.3) and predicted rates of visual impairment specifically for persons with myopia. Of the 15 693 individuals included in this study, the mean (SD) age was 61.3 (11.4) years, and 8961 (57.1%) were female. Axial length ranged from 15.3 to 37.8 mm; 819 individuals had an axial length of 26 mm or greater. Spherical equivalent ranged from -25 to +14 diopters; 796 persons had high myopia (ie, a spherical equivalent of -6 diopters or less). The prevalence of visual impairment varied from 1.0% to 4.1% in the population-based studies, was 5.4% in the Myopia Study, and was 0.3% in controls. The prevalence of visual impairment rose with increasing axial length and spherical equivalent, with a cumulative incidence (SE) of visual impairment of 3.8% (1.3) for participants aged 75 years with an axial length of 24 to less than 26 mm and greater than 90% (8.1) with an axial length of 30 mm or greater. The cumulative risk (SE) of visual impairment was 5.7% (1.3) for participants aged 60 years and 39% (4.9) for those aged 75 years with a spherical equivalent of -6 diopters or less. Projections of these data suggest that visual impairment will increase 7- to 13-fold by 2055 in high-risk areas. This study demonstrated that visual impairment is associated with axial length and spherical equivalent and may be unavoidable at the most extreme values in this population. Developing strategies to prevent the development of myopia and its complications could help to avoid an increase of visual impairment in the working-age population.
Hydroglyphics: Demonstration of Selective Wetting on Hydrophilic and Hydrophobic Surfaces
ERIC Educational Resources Information Center
Kim, Philseok; Alvarenga, Jack; Aizenberg, Joanna; Sleeper, Raymond S.
2013-01-01
A visual demonstration of the difference between hydrophilic and hydrophobic surfaces has been developed. It involves placing a shadow mask on an optically clear hydrophobic plastic dish, corona treating the surface with a modified Tesla coil, removing the shadow mask, and visualizing the otherwise invisible message or pattern by applying water,…
Visualizing Nanocatalysts in Action from Color Change Reaction to Magnetic Recycling and Reuse
ERIC Educational Resources Information Center
Hudson, Reuben; Bishop, Alexandra; Glaisher, Samuel; Katz, Jeffrey L.
2015-01-01
A demonstration to highlight the utility and ease of handling environmentally benign magnetically recoverable nanoparticle catalysts is described. The demonstration offers two powerful visuals. The first is a color change oxidation of tetramethylbenzidine by hydrogen peroxide catalyzed by Fe[subscript 3]O[subscript 4] nanoparticles. The second,…
Proline and COMT Status Affect Visual Connectivity in Children with 22q11.2 Deletion Syndrome
Magnée, Maurice J. C. M.; Lamme, Victor A. F.; de Sain-van der Velden, Monique G. M.; Vorstman, Jacob A. S.; Kemner, Chantal
2011-01-01
Background Individuals with the 22q11.2 deletion syndrome (22q11DS) are at increased risk for schizophrenia and Autism Spectrum Disorders (ASDs). Given the prevalence of visual processing deficits in these three disorders, a causal relationship between genes in the deleted region of chromosome 22 and visual processing is likely. Therefore, 22q11DS may represent a unique model to understand the neurobiology of visual processing deficits related with ASD and psychosis. Methodology We measured Event-Related Potentials (ERPs) during a texture segregation task in 58 children with 22q11DS and 100 age-matched controls. The C1 component was used to index afferent activity of visual cortex area V1; the texture negativity wave provided a measure for the integrity of recurrent connections in the visual cortical system. COMT genotype and plasma proline levels were assessed in 22q11DS individuals. Principal Findings Children with 22q11DS showed enhanced feedforward activity starting from 70 ms after visual presentation. ERP activity related to visual feedback activity was reduced in the 22q11DS group, which was seen as less texture negativity around 150 ms post presentation. Within the 22q11DS group we further demonstrated an association between high plasma proline levels and aberrant feedback/feedforward ratios, which was moderated by the COMT 158 genotype. Conclusions These findings confirm the presence of early visual processing deficits in 22q11DS. We discuss these in terms of dysfunctional synaptic plasticity in early visual processing areas, possibly associated with deviant dopaminergic and glutamatergic transmission. As such, our findings may serve as a promising biomarker related to the development of schizophrenia among 22q11DS individuals. PMID:21998713
Feature-Specific Organization of Feedback Pathways in Mouse Visual Cortex.
Huh, Carey Y L; Peach, John P; Bennett, Corbett; Vega, Roxana M; Hestrin, Shaul
2018-01-08
Higher and lower cortical areas in the visual hierarchy are reciprocally connected [1]. Although much is known about how feedforward pathways shape receptive field properties of visual neurons, relatively little is known about the role of feedback pathways in visual processing. Feedback pathways are thought to carry top-down signals, including information about context (e.g., figure-ground segmentation and surround suppression) [2-5], and feedback has been demonstrated to sharpen orientation tuning of neurons in the primary visual cortex (V1) [6, 7]. However, the response characteristics of feedback neurons themselves and how feedback shapes V1 neurons' tuning for other features, such as spatial frequency (SF), remain largely unknown. Here, using a retrograde virus, targeted electrophysiological recordings, and optogenetic manipulations, we show that putatively feedback neurons in layer 5 (hereafter "L5 feedback") in higher visual areas, AL (anterolateral area) and PM (posteromedial area), display distinct visual properties in awake head-fixed mice. AL L5 feedback neurons prefer significantly lower SF (mean: 0.04 cycles per degree [cpd]) compared to PM L5 feedback neurons (0.15 cpd). Importantly, silencing AL L5 feedback reduced visual responses of V1 neurons preferring low SF (mean change in firing rate: -8.0%), whereas silencing PM L5 feedback suppressed responses of high-SF-preferring V1 neurons (-20.4%). These findings suggest that feedback connections from higher visual areas convey distinctly tuned visual inputs to V1 that serve to boost V1 neurons' responses to SF. Such like-to-like functional organization may represent an important feature of feedback pathways in sensory systems and in the nervous system in general. Copyright © 2017 Elsevier Ltd. All rights reserved.
Deng, Yanjia; Shi, Lin; Lei, Yi; Liang, Peipeng; Li, Kuncheng; Chu, Winnie C. W.; Wang, Defeng
2016-01-01
The human cortical regions for processing high-level visual (HLV) functions of different categories remain ambiguous, especially in terms of their conjunctions and specifications. Moreover, the neurobiology of declined HLV functions in patients with Alzheimer's disease (AD) has not been fully investigated. This study provides a functionally sorted overview of HLV cortices for processing “what” and “where” visual perceptions and it investigates their atrophy in AD and MCI patients. Based upon activation likelihood estimation (ALE), brain regions responsible for processing five categories of visual perceptions included in “what” and “where” visions (i.e., object, face, word, motion, and spatial visions) were analyzed, and subsequent contrast analyses were performed to show regions with conjunctive and specific activations for processing these visual functions. Next, based on the resulting ALE maps, the atrophy of HLV cortices in AD and MCI patients was evaluated using voxel-based morphometry. Our ALE results showed brain regions for processing visual perception across the five categories, as well as areas of conjunction and specification. Our comparisons of gray matter (GM) volume demonstrated atrophy of three “where” visual cortices in late MCI group and extensive atrophy of HLV cortices (25 regions in both “what” and “where” visual cortices) in AD group. In addition, the GM volume of atrophied visual cortices in AD and MCI subjects was found to be correlated to the deterioration of overall cognitive status and to the cognitive performances related to memory, execution, and object recognition functions. In summary, these findings may add to our understanding of HLV network organization and of the evolution of visual perceptual dysfunction in AD as the disease progresses. PMID:27445770
Zhao, Jing; Liu, Menglian; Liu, Hanlong; Huang, Chen
2018-02-16
It has been suggested that orthographic transparency and age changes may affect the relationship between visual attention span (VAS) deficit and reading difficulty. The present study explored the developmental trend of VAS in children with developmental dyslexia (DD) in Chinese, a logographic language with a deep orthography. Fifty-seven Chinese children with DD and fifty-four age-matched normal readers participated. The visual 1-back task was adopted to examine VAS. Phonological and morphological awareness tests, and reading tests in single-character and sentence levels were used for reading skill measurements. Results showed that only high graders with dyslexia exhibited lower accuracy than the controls in the VAS task, revealing an increased VAS deficit with development in the dyslexics. Moreover, the developmental trajectory analyses demonstrated that the dyslexics seemed to exhibit an atypical but not delayed pattern in their VAS development as compared to the controls. A correlation analysis indicated that VAS was only associated with morphological awareness for dyslexic readers in high grades. Further regression analysis showed that VAS skills and morphological awareness made separate and significant contributions to single-character reading for high grader with dyslexia. These findings suggested a developmental increasing trend in the relationship between VAS skills and reading (dis)ability in Chinese.
Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A
2015-01-01
Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.
Kim, Seung-Jae; Ogilvie, Mitchell; Shimabukuro, Nathan; Stewart, Trevor; Shin, Joon-Ho
2015-09-01
Visual feedback can be used during gait rehabilitation to improve the efficacy of training. We presented a paradigm called visual feedback distortion; the visual representation of step length was manipulated during treadmill walking. Our prior work demonstrated that an implicit distortion of visual feedback of step length entails an unintentional adaptive process in the subjects' spatial gait pattern. Here, we investigated whether the implicit visual feedback distortion, versus conscious correction, promotes efficient locomotor adaptation that relates to greater retention of a task. Thirteen healthy subjects were studied under two conditions: (1) we implicitly distorted the visual representation of their gait symmetry over 14 min, and (2) with help of visual feedback, subjects were told to walk on the treadmill with the intent of attaining the gait asymmetry observed during the first implicit trial. After adaptation, the visual feedback was removed while subjects continued walking normally. Over this 6-min period, retention of preserved asymmetric pattern was assessed. We found that there was a greater retention rate during the implicit distortion trial than that of the visually guided conscious modulation trial. This study highlights the important role of implicit learning in the context of gait rehabilitation by demonstrating that training with implicit visual feedback distortion may produce longer lasting effects. This suggests that using visual feedback distortion could improve the effectiveness of treadmill rehabilitation processes by influencing the retention of motor skills.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.
2012-10-02
An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with the literature data obtained using magnetic resonance spectroscopy.
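The height control described above amounts to keeping a constant probe-sample gap over a tilted sample plane. The following minimal Python sketch is purely illustrative (the function names and numbers are hypothetical, not taken from the described system): it fits a plane to a few reference heights and returns the z setpoint for each raster position.

```python
import numpy as np

def fit_sample_plane(ref_points):
    """Least-squares fit of z = a*x + b*y + c to reference points (x, y, z)."""
    pts = np.asarray(ref_points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def z_setpoint(x, y, plane, gap=0.0):
    """Stage height that keeps a constant probe-sample gap over the tilted plane."""
    a, b, c = plane
    return a * x + b * y + c + gap

# Example: three reference points (in mm) define the sample tilt.
plane = fit_sample_plane([(0, 0, 1.00), (10, 0, 1.05), (0, 10, 0.98)])
for x in np.linspace(0, 10, 5):          # one raster line at y = 2 mm
    print(f"x={x:4.1f} mm -> z={z_setpoint(x, 2.0, plane):.3f} mm")
```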
NASA Astrophysics Data System (ADS)
Hu, He; Arena, Francesca; Gianolio, Eliana; Boffa, Cinzia; di Gregorio, Enza; Stefania, Rachele; Orio, Laura; Baroni, Simona; Aime, Silvio
2016-03-01
A novel fluorescein/Gd-DOTAGA containing nanoprobe for the visualization of tumors by optical and Magnetic Resonance Imaging (MRI) is reported herein. It is based on the functionalization of the surface of small mesoporous silica nanoparticles (MSNs) (~30 nm) with the arginine-glycine-aspartic (RGD) moieties, which are known to target αvβ3 integrin receptors overexpressed in several tumor cells. The obtained nanoprobe (Gd-MSNs-RGD) displays good stability, tolerability and high relaxivity (37.6 mM-1 s-1 at 21.5 MHz). After a preliminary evaluation of their cytotoxicity and targeting capability toward U87MG cells by in vitro fluorescence and MR imaging, the nanoprobes were tested in vivo by T1-weighted MR imaging of xenografted murine tumor models. The obtained results demonstrated that the Gd-MSNs-RGD nanoprobes are good reporters both in vitro and in vivo for the MR-visualization of tumor cells overexpressing αvβ3 integrin receptors. Electronic supplementary information (ESI) available: Absorption and emission spectra, energy dispersive X-ray analysis (EDXA) and XPS spectra, TGA, zeta-potential and the molecular structures of the Gd-complexes. See DOI: 10.1039/c5nr08878j
Gender-specific effects of emotional modulation on visual temporal order thresholds.
Liang, Wei; Zhang, Jiyuan; Bao, Yan
2015-09-01
Emotions affect temporal information processing in the low-frequency time window of a few seconds, but little is known about their effect in the high-frequency domain of some tens of milliseconds. The present study aims to investigate whether negative and positive emotional states influence the ability to discriminate the temporal order of visual stimuli, and whether gender plays a role in temporal processing. Due to the hemispheric lateralization of emotion, a hemispheric asymmetry between the left and the right visual field might be expected. Using a block design, subjects were primed with neutral, negative and positive emotional pictures before performing temporal order judgment tasks. Results showed that male subjects exhibited similarly reduced order thresholds under negative and positive emotional states, while female subjects demonstrated increased threshold under positive emotional state and reduced threshold under negative emotional state. Besides, emotions influenced female subjects more intensely than male subjects, and no hemispheric lateralization was observed. These observations indicate an influence of emotional states on temporal order processing of visual stimuli, and they suggest a gender difference, which is possibly associated with a different emotional stability.
van Laar, Peter Jan; Oterdoom, D L Marinus; Ter Horst, Gert J; van Hulzen, Arjen L J; de Graaf, Eva K L; Hoogduin, Hans; Meiners, Linda C; van Dijk, J Marc C
2016-09-01
In deep brain stimulation (DBS), accurate placement of the lead is critical. Target definition is highly dependent on visual recognition on magnetic resonance imaging (MRI). We prospectively investigated whether the 7-T MRI enabled better visualization of targets and led to better placement of leads compared with the 1.5-T and the 3-T MRI. Three patients with PD (mean, 55 years) were scanned on 1.5-, 3-, and 7-T MRI before surgery. Tissue contrast and signal-to-noise ratio were measured. Target coordinates were noted on MRI and during surgery. Differences were analyzed with post-hoc analysis of variance. The 7-T MRI demonstrated a significant improvement in tissue visualization (P < 0.005) and signal-to-noise ratio (P < 0.005). However, no difference in the target coordinates was found between the 7-T and the 3-T MRI. Although the 7-T MRI enables significantly better visualization of the DBS target in patients with PD, we found no clinical benefit for the placement of the DBS leads. Copyright © 2016 Elsevier Inc. All rights reserved.
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. A given parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., that a plant image must first be assigned correctly to a parent node (high-level non-leaf node) before it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
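As a simplified illustration of the coarse-to-fine idea (not the paper's joint multi-task training, which couples sibling classifiers), a minimal two-level tree classifier might look like the sketch below; the species and group names, and the independent per-node classifiers, are invented for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class TwoLevelTreeClassifier:
    """Coarse-to-fine classification: predict the parent node (group) first,
    then a species with the classifier owned by that node.
    Assumes every group contains at least two species."""
    def __init__(self, species_to_group):
        self.species_to_group = species_to_group
        self.coarse = LogisticRegression(max_iter=1000)
        self.fine = {}                               # one classifier per group

    def fit(self, X, species_labels):
        groups = np.array([self.species_to_group[s] for s in species_labels])
        self.coarse.fit(X, groups)
        for g in np.unique(groups):
            idx = groups == g
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[idx], np.asarray(species_labels)[idx])
            self.fine[g] = clf
        return self

    def predict(self, X):
        groups = self.coarse.predict(X)
        return [self.fine[g].predict(x.reshape(1, -1))[0]
                for g, x in zip(groups, X)]

# Toy usage with random features for four hypothetical species in two groups.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))
species = np.repeat(["rosa", "salix", "quercus", "acer"], 20)
tree = TwoLevelTreeClassifier({"rosa": "shrub", "salix": "shrub",
                               "quercus": "tree", "acer": "tree"}).fit(X, species)
print(tree.predict(X[:2]))
```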
Using Tablet for visual exploration of second-generation sequencing data.
Milne, Iain; Stephen, Gordon; Bayer, Micha; Cock, Peter J A; Pritchard, Leighton; Cardle, Linda; Shaw, Paul D; Marshall, David
2013-03-01
The advent of second-generation sequencing (2GS) has provided a range of significant new challenges for the visualization of sequence assemblies. These include the large volume of data being generated, short-read lengths and different data types and data formats associated with the diversity of new sequencing technologies. This article illustrates how Tablet-a high-performance graphical viewer for visualization of 2GS assemblies and read mappings-plays an important role in the analysis of these data. We present Tablet, and through a selection of use cases, demonstrate its value in quality assurance and scientific discovery, through features such as whole-reference coverage overviews, variant highlighting, paired-end read mark-up, GFF3-based feature tracks and protein translations. We discuss the computing and visualization techniques utilized to provide a rich and responsive graphical environment that enables users to view a range of file formats with ease. Tablet installers can be freely downloaded from http://bioinf.hutton.ac.uk/tablet in 32 or 64-bit versions for Windows, OS X, Linux or Solaris. For further details on the Tablet, contact tablet@hutton.ac.uk.
Ranked centroid projection: a data visualization approach with self-organizing maps.
Yen, G G; Wu, Z
2008-02-01
The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
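A minimal sketch of the pipeline, assuming roughly standardized document vectors and a simple rank-weighting interpretation of the centroid projection (the published RCP formula and the growing-hierarchical training may differ), could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=(8, 8), iters=2000, lr0=0.5, sigma0=2.0):
    """Minimal flat rectangular SOM trained with sequential updates."""
    h, w = grid
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    W = rng.normal(size=(h * w, X.shape[1]))          # assumes standardized X
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))        # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        nbh = np.exp(-d2 / (2 * sigma ** 2))          # neighborhood function
        W += lr * nbh[:, None] * (x - W)
    return W, coords

def ranked_centroid_projection(X, W, coords, k=3):
    """Place each sample at a rank-weighted average of its k nearest units."""
    out, weights = [], 1.0 / np.arange(1, k + 1)      # rank weights 1, 1/2, 1/3, ...
    for x in X:
        order = np.argsort(((W - x) ** 2).sum(1))[:k]
        out.append((weights[:, None] * coords[order]).sum(0) / weights.sum())
    return np.array(out)

# Toy usage: 200 random "document vectors" projected onto an 8 x 8 output map.
X = rng.normal(size=(200, 30))
W, coords = train_som(X)
xy = ranked_centroid_projection(X, W, coords)         # one 2-D position per document
```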
Longitudinal changes in the visual field and optic disc in glaucoma.
Artes, Paul H; Chauhan, Balwantray C
2005-05-01
The nature and mode of functional and structural progression in open-angle glaucoma is a subject of considerable debate in the literature. While there is a traditionally held viewpoint that optic disc and/or nerve fibre layer changes precede visual field changes, there is surprisingly little published evidence from well-controlled prospective studies in this area, specifically with modern perimetric and imaging techniques. In this paper, we report on clinical data from both glaucoma patients and normal controls collected prospectively over several years, to address the relationship between visual field and optic disc changes in glaucoma using standard automated perimetry (SAP), high-pass resolution perimetry (HRP) and confocal scanning laser tomography (CSLT). We use several methods of analysis of longitudinal data and describe a new technique called "evidence of change" analysis which facilitates comparison between different tests. We demonstrate that current clinical indicators of visual function (SAP and HRP) and measures of optic disc structure (CSLT) provide largely independent measures of progression. We discuss the reasons for these findings as well as several methodological issues that pose challenges to elucidating the true structure-function relationship in glaucoma.
High-Performance Computing and Visualization | Energy Systems Integration Facility | NREL
High-performance computing (HPC) and visualization at NREL propel technology innovation. NREL is home to Peregrine, the largest high-performance computing system…
Earth Science Multimedia Theater
NASA Technical Reports Server (NTRS)
Hasler, A. F.
1998-01-01
The presentation will begin with the latest 1998 NASA Earth Science Vision for the next 25 years. A compilation of the 10 days of animations of Hurricane Georges that were supplied daily by NASA to network television will be shown, along with NASA's visualizations of Hurricane Bonnie, which appeared in the Sept 7, 1998 issue of TIME magazine. Highlights will be shown from the NASA hurricane visualization resource video tape that has been used repeatedly this season on network TV. Results will be presented from a new paper on automatic wind measurements in Hurricane Luis from 1-min GOES images that will appear in the October BAMS. The visualizations are produced by the Goddard Visualization & Analysis Laboratory and Scientific Visualization Studio, as well as other Goddard and NASA groups, using NASA, NOAA, ESA, and NASDA Earth science datasets. Visualizations will be shown from the "Digital-HyperRes-Panorama" Earth Science ETheater '98 recently presented in Tokyo, Paris and Phoenix. The presentation in Paris used an SGI/CRAY Onyx Infinite Reality Super Graphics Workstation at 2560 x 1024 resolution with dual synchronized Epson 7100 video projectors on a 20-ft-wide screen. Earth Science Electronic Theater 1999 is being prepared for a December 1st showing at NASA HQ in Washington and a January presentation at the AMS meetings in Dallas. The 1999 version of the Etheater will be triple-wide, with a resolution of 3840 x 1024 on a 60-ft-wide screen. Visualizations will also be featured from the new Earth Today Exhibit, which was opened by Vice President Gore on July 2, 1998 at the Smithsonian Air & Space Museum in Washington, as well as those presented for possible use at the American Museum of Natural History (NYC), Disney EPCOT, and other venues. New methods are demonstrated for visualizing, interpreting, comparing, organizing and analyzing immense Hyperimage remote sensing datasets and three-dimensional numerical model results. We call the data from many new Earth sensing satellites Hyperimage datasets because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing Hyperimage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed Image SpreadSheet (DISS).
Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow.
Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong
2015-11-13
In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction.
Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow
NASA Astrophysics Data System (ADS)
Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong
2015-11-01
In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction.
NASA Astrophysics Data System (ADS)
Liu, Yan; Shen, Yuecheng; Ruan, Haowen; Brodie, Frank L.; Wong, Terence T. W.; Yang, Changhuei; Wang, Lihong V.
2018-01-01
Normal development of the visual system in infants relies on clear images being projected onto the retina, which can be disrupted by lens opacity caused by congenital cataract. This disruption, if uncorrected in early life, results in amblyopia (permanently decreased vision even after removal of the cataract). Doctors are able to prevent amblyopia by removing the cataract during the first several weeks of life, but this surgery risks a host of complications, which can be equally visually disabling. Here, we investigated the feasibility of focusing light noninvasively through highly scattering cataractous lenses to stimulate the retina, thereby preventing amblyopia. This approach would allow the cataractous lens removal surgery to be delayed and hence greatly reduce the risk of complications from early surgery. Employing a wavefront shaping technique named time-reversed ultrasonically encoded optical focusing in reflection mode, we focused 532-nm light through a highly scattering ex vivo adult human cataractous lens. This work demonstrates a potential clinical application of wavefront shaping techniques.
Visualization of polymer relaxation in viscoelastic turbulent micro-channel flow
Tai, Jiayan; Lim, Chun Ping; Lam, Yee Cheong
2015-01-01
In micro-channels, the flow of viscous liquids e.g. water, is laminar due to the low Reynolds number in miniaturized dimensions. An aqueous solution becomes viscoelastic with a minute amount of polymer additives; its flow behavior can become drastically different and turbulent. However, the molecules are typically invisible. Here we have developed a novel visualization technique to examine the extension and relaxation of polymer molecules at high flow velocities in a viscoelastic turbulent flow. Using high speed videography to observe the fluorescein labeled molecules, we show that viscoelastic turbulence is caused by the sporadic, non-uniform release of energy by the polymer molecules. This developed technique allows the examination of a viscoelastic liquid at the molecular level, and demonstrates the inhomogeneity of viscoelastic liquids as a result of molecular aggregation. It paves the way for a deeper understanding of viscoelastic turbulence, and could provide some insights on the high Weissenberg number problem. In addition, the technique may serve as a useful tool for the investigations of polymer drag reduction. PMID:26563615
Sabesan, Ramkumar; Barbot, Antoine; Yoon, Geunyoung
2017-03-01
Highly aberrated keratoconic (KC) eyes do not elicit the expected visual advantage from customized optical corrections. This is attributed to the neural insensitivity arising from chronic visual experience with poor retinal image quality, dominated by low spatial frequencies. The goal of this study was to investigate if targeted perceptual learning with adaptive optics (AO) can stimulate neural plasticity in these highly aberrated eyes. The worse eye of 2 KC subjects was trained in a contrast threshold test under AO correction. Prior to training, tumbling 'E' visual acuity and contrast sensitivity at 4, 8, 12, 16, 20, 24 and 28 c/deg were measured in both the trained and untrained eyes of each subject with their routine prescription and with AO correction for a 6-mm pupil. The high spatial frequency requiring 50% contrast for detection with AO correction was picked as the training frequency. Subjects were required to train on a contrast detection test with AO correction for 1 h on 5 consecutive days. During each training session, threshold contrast measurement at the training frequency with AO was conducted. Pre-training measures were repeated after the 5 training sessions in both eyes (i.e., post-training). After training, contrast sensitivity under AO correction improved on average across spatial frequency by a factor of 1.91 (range: 1.77-2.04) and 1.75 (1.22-2.34) for the two subjects. This improvement in contrast sensitivity transferred to visual acuity, with the two subjects improving by 1.5 and 1.3 lines respectively with AO following training. One of the two subjects showed an interocular transfer of training and an improvement in performance with their routine prescription post-training. This training-induced visual benefit demonstrates the potential of AO as a tool for neural rehabilitation in patients with abnormal corneas. Moreover, it reveals a sufficient degree of neural plasticity in normally developed adults who have a long history of abnormal visual experience due to optical imperfections. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Enfield, Joey; McGrath, James; Daly, Susan M.; Leahy, Martin
2016-08-01
Changes within the microcirculation can provide an early indication of the onset of a plethora of ailments. Various techniques have thus been developed that enable the study of microcirculatory irregularities. Correlation mapping optical coherence tomography (cmOCT) is a recently proposed technique, which enables mapping of vasculature networks at the capillary level in a noninvasive and noncontact manner. This technique is an extension of conventional optical coherence tomography (OCT) and is therefore likewise limited in the penetration depth of ballistic photons in biological media. Optical clearing has previously been demonstrated to enhance the penetration depth and the imaging capabilities of OCT. In order to enhance the achievable maximum imaging depth, we propose the use of optical clearing in conjunction with the cmOCT technique. We demonstrate in vivo a 13% increase in OCT penetration depth by topical application of a high-concentration fructose solution, thereby enabling the visualization of vessel features at deeper depths within the tissue.
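The correlation-mapping step that underlies cmOCT is commonly described as a windowed Pearson correlation between repeated structural B-scans, with decorrelation indicating flow. The loop-based numpy sketch below follows that assumption; the kernel size and toy frames are placeholders, not parameters from the study.

```python
import numpy as np

def correlation_map(frame_a, frame_b, kernel=3):
    """Per-pixel Pearson correlation between two repeated B-scans over a
    kernel x kernel window; low values (decorrelation) suggest flow."""
    r = kernel // 2
    h, w = frame_a.shape
    cmap = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            a = frame_a[i - r:i + r + 1, j - r:j + r + 1].ravel()
            b = frame_b[i - r:i + r + 1, j - r:j + r + 1].ravel()
            a, b = a - a.mean(), b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            cmap[i, j] = (a * b).sum() / denom if denom > 0 else 0.0
    return cmap

# Toy example: two noisy copies of the same structural frame correlate highly.
rng = np.random.default_rng(1)
structure = rng.random((64, 64))
m = correlation_map(structure + 0.05 * rng.random((64, 64)),
                    structure + 0.05 * rng.random((64, 64)))
print("mean correlation:", m[1:-1, 1:-1].mean().round(2))
```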
Cortical phase changes in Alzheimer's disease at 7T MRI: a novel imaging marker.
van Rooden, Sanneke; Versluis, Maarten J; Liem, Michael K; Milles, Julien; Maier, Andrea B; Oleksik, Ania M; Webb, Andrew G; van Buchem, Mark A; van der Grond, Jeroen
2014-01-01
Postmortem studies have indicated the potential of high-field magnetic resonance imaging (MRI) to visualize amyloid depositions in the cerebral cortex. The aim of this study is to test this hypothesis in patients with Alzheimer's disease (AD). T2*-weighted MRI was performed in 16 AD patients and 15 control subjects. All magnetic resonance images were scored qualitatively by visual assessment, and quantitatively by measuring phase shifts in the cortical gray matter and hippocampus. Statistical analysis was performed to assess differences between groups. Patients with AD demonstrated an increased phase shift in the cortex in the temporoparietal, frontal, and parietal regions (P < .005), and this was associated with individual Mini-Mental State Examination scores (r = -0.54, P < .05). Increased cortical phase shift in AD patients demonstrated on 7-tesla T2*-weighted MRI is a potential new biomarker for AD, which may reflect amyloid pathology in the early stages. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Na; Xiong, Yijia; Squier, Thomas C.
2013-01-21
To optimize cellular delivery and specific labeling of tagged cytosolic proteins by biarsenical fluorescent probes built around a cyanine dye scaffold, we have systematically varied the polarity of the hydrophobic tails (i.e., 4-5 methylene groups appended by a sulfonate or methoxy ester moiety) and arsenic capping reagent (ethanedithiol versus benzenedithiol). Targeted labeling of the cytosolic proteins SlyD and the alpha subunit of RNA polymerase engineered with tetracysteine tagging sequences demonstrates the utility of the newly synthesized probes for live-cell visualization, albeit with varying efficiencies and background intensities. Optimal routine labeling and visualization are apparent using the ethanedithiol capping reagent with the uncharged methoxy ester functionalized acyl chains. These measurements demonstrate the general utility of this class of photostable and highly fluorescent biarsenical reagents based on the cyanine scaffold for in vivo targeting of tagged cellular proteins for live cell measurements of protein dynamics.
Can colours be used to segment words when reading?
Perea, Manuel; Tejero, Pilar; Winskel, Heather
2015-07-01
Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of the words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost of unspaced alternating colour sentences relative to the spaced sentences. Thus, the present data demonstrate that colour can be used to segment words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.
NGUYEN, MY HANH T.; WITKIN, ANDRE J.; REICHEL, ELIAS; KO, TONY H.; FUJIMOTO, JAMES G.; SCHUMAN, JOEL S.; DUKER, JAY S.
2007-01-01
Background Histopathological studies of acute multiple evanescent white dot syndrome (MEWDS) have not been reported because of the transient and benign nature of the disease. Ultrahigh resolution optical coherence tomography (UHR-OCT), capable of high resolution in vivo imaging, offers a unique opportunity to visualize retinal microstructure in the disease. Methods UHR-OCT images of the maculae of five patients with MEWDS were obtained and analyzed. Diagnosis was based on clinical presentation, examination, visual field testing, and angiography. Results UHR-OCT revealed disturbances in the photoreceptor inner/outer segment junction (IS/OS) in each of the five patients (six eyes) with MEWDS. In addition, thinning of the outer nuclear layer was seen in the case of recurrent MEWDS, suggesting that repeated episodes of MEWDS may result in photoreceptor atrophy. Conclusions Subtle disruptions of the photoreceptor IS/OS are demonstrated in all eyes affected by MEWDS. UHR-OCT may be a useful adjunct to diagnosis and monitoring of MEWDS. PMID:17420691
Lightening the load: perceptual load impairs visual detection in typical adults but not in autism.
Remington, Anna M; Swettenham, John G; Lavie, Nilli
2012-05-01
Autism spectrum disorder (ASD) research portrays a mixed picture of attentional abilities with demonstrations of enhancements (e.g., superior visual search) and deficits (e.g., higher distractibility). Here we test a potential resolution derived from the Load Theory of Attention (e.g., Lavie, 2005). In Load Theory, distractor processing depends on the perceptual load of the task and as such can only be eliminated under high load that engages full capacity. We hypothesize that ASD involves enhanced perceptual capacity, leading to the superior performance and increased distractor processing previously reported. Using a signal-detection paradigm, we test this directly and demonstrate that, under higher levels of load, perceptual sensitivity was reduced in typical adults but not in adults with ASD. These findings confirm our hypothesis and offer a promising solution to the previous discrepancies by suggesting that increased distractor processing in ASD results not from a filtering deficit but from enhanced perceptual capacity.
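In a signal-detection paradigm, perceptual sensitivity is conventionally summarized as d'. The sketch below, with invented trial counts rather than the study's data, illustrates the calculation used to compare sensitivity across load levels.

```python
import numpy as np
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction to avoid rates of exactly 0 or 1."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Hypothetical detection counts under low vs. high perceptual load.
print("low load  d':", round(d_prime(45, 5, 8, 42), 2))
print("high load d':", round(d_prime(32, 18, 15, 35), 2))
```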
Deep data: discovery and visualization Application to hyperspectral ALMA imagery
NASA Astrophysics Data System (ADS)
Merényi, Erzsébet; Taylor, Joshua; Isella, Andrea
2017-06-01
Leading-edge telescopes such as the Atacama Large Millimeter and sub-millimeter Array (ALMA), and near-future ones, are capable of imaging the same sky area at hundreds-to-thousands of frequencies with both high spectral and spatial resolution. This provides unprecedented opportunities for discovery about the spatial, kinematical and compositional structure of sources such as molecular clouds or protoplanetary disks, and more. However, in addition to enormous volume, the data also exhibit unprecedented complexity, mandating new approaches for extracting and summarizing relevant information. Traditional techniques such as examining images at selected frequencies become intractable while tools that integrate data across frequencies or pixels (like moment maps) can no longer fully exploit and visualize the rich information. We present a neural map-based machine learning approach that can handle all spectral channels simultaneously, utilizing the full depth of these data for discovery and visualization of spectrally homogeneous spatial regions (spectral clusters) that characterize distinct kinematic behaviors. We demonstrate the effectiveness on an ALMA image cube of the protoplanetary disk HD142527. The tools we collectively name "NeuroScope" are efficient for "Big Data" due to intelligent data summarization that results in significant sparsity and noise reduction. We also demonstrate a new approach to automate our clustering for fast distillation of large data cubes.
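The core data flow, treating every pixel's full spectrum as one sample and folding the cluster labels back onto the sky plane, can be sketched as follows. KMeans stands in here for the neural-map (SOM-type) learning that NeuroScope actually uses, and the cube is synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for an ALMA image cube: (ny, nx, n_channels).
rng = np.random.default_rng(2)
ny, nx, nchan = 32, 32, 200
cube = rng.normal(size=(ny, nx, nchan))

# Treat each pixel's full spectrum as one sample, cluster the spectra,
# then reshape the labels into a 2-D map of spectrally similar regions.
spectra = cube.reshape(-1, nchan)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(spectra)
cluster_map = labels.reshape(ny, nx)
print(cluster_map.shape, np.bincount(labels))
```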
A Polymer Visualization System with Accurate Heating and Cooling Control and High-Speed Imaging
Wong, Anson; Guo, Yanting; Park, Chul B.; Zhou, Nan Q.
2015-01-01
A visualization system to observe crystal and bubble formation in polymers under high temperature and pressure has been developed. Using this system, polymer can be subjected to a programmable thermal treatment to simulate the process in high pressure differential scanning calorimetry (HPDSC). With a high-temperature/high-pressure view-cell unit, this system enables in situ observation of crystal formation in semi-crystalline polymers to complement thermal analyses with HPDSC. The high-speed recording capability of the camera not only allows detailed recording of crystal formation, it also enables in situ capture of plastic foaming processes with a high temporal resolution. To demonstrate the system’s capability, crystal formation and foaming processes of polypropylene/carbon dioxide systems were examined. It was observed that crystals nucleated and grew into spherulites, and they grew at faster rates as temperature decreased. This observation agrees with the crystallinity measurement obtained with the HPDSC. Cell nucleation first occurred at crystals’ boundaries due to CO2 exclusion from crystal growth fronts. Subsequently, cells were nucleated around the existing ones due to tensile stresses generated in the constrained amorphous regions between networks of crystals. PMID:25915031
Visualization Tools for Teaching Computer Security
ERIC Educational Resources Information Center
Yuan, Xiaohong; Vega, Percy; Qadah, Yaseen; Archer, Ricky; Yu, Huiming; Xu, Jinsheng
2010-01-01
Using animated visualization tools has been an important teaching approach in computer science education. We have developed three visualization and animation tools that demonstrate various information security concepts and actively engage learners. The information security concepts illustrated include: packet sniffer and related computer network…
ERIC Educational Resources Information Center
Coleman, Julianne Maner; Goldston, M. Jenice
2011-01-01
When students draw observations or interpret and draw a diagram, they're communicating their understandings of science and demonstrating visual literacy abilities. Visual literacy includes skills needed to accurately interpret and produce visual and graphical information such as drawings, diagrams, tables, charts, maps, and graphs. Communication…
Ambiguous science and the visual representation of the real
NASA Astrophysics Data System (ADS)
Newbold, Curtis Robert
The emergence of visual media as prominent and even expected forms of communication in nearly all disciplines, including those scientific, has raised new questions about how the art and science of communication epistemologically affect the interpretation of scientific phenomena. In this dissertation I explore how the influence of aesthetics in visual representations of science inevitably creates ambiguous meanings. As a means to improve visual literacy in the sciences, I draw attention to the ubiquity of visual ambiguity and to its importance and relevance in scientific discourse. To do this, I conduct a literature review that spans interdisciplinary research in communication, science, art, and rhetoric. Furthermore, I create a paradoxically ambiguous taxonomy, which functions to exploit the nuances of visual ambiguities and their role in scientific communication. I then extrapolate the taxonomy of visual ambiguity and from it develop an ambiguous, rhetorical heuristic, the Tetradic Model of Visual Ambiguity. The Tetradic Model is applied to a case example of a scientific image as a demonstration of how scientific communicators may increase their awareness of the epistemological effects of ambiguity in the visual representations of science. I conclude by demonstrating how scientific communicators may make productive use of visual ambiguity, even in communications of objective science, and I argue how doing so strengthens scientific communicators' visual literacy skills and their ability to communicate more ethically and effectively.
Weiner, Kevin S.; Grill-Spector, Kalanit
2011-01-01
The prevailing view of human lateral occipitotemporal cortex (LOTC) organization suggests a single area selective for images of the human body (extrastriate body area, EBA) that highly overlaps with the human motion-selective complex (hMT+). Using functional magnetic resonance imaging with higher resolution (1.5mm voxels) than past studies (3–4mm voxels), we examined the fine-scale spatial organization of these activations relative to each other, as well as to visual field maps in LOTC. Rather than one contiguous EBA highly overlapping hMT+, results indicate three limb-selective activations organized in a crescent surrounding hMT+: (1) an activation posterior to hMT+ on the lateral occipital sulcus/middle occipital gyrus (LOS/MOG) overlapping the lower vertical meridian shared between visual field maps LO-2 and TO-1, (2) an activation anterior to hMT+ on the middle temporal gyrus (MTG) consistently overlapping the lower vertical meridian of TO-2 and extending outside presently defined visual field maps, and (3) an activation inferior to hMT+ on the inferotemporal gyrus (ITG) overlapping the parafoveal representation of the TO cluster. This crescent organization of limb-selective activations surrounding hMT+ is reproducible over a span of three years and is consistent across different image types used for localization. Further, these regions exhibit differential position properties: preference for contralateral image presentation decreases and preference for foveal presentation increases from the limb-selective LOS to the MTG. Finally, the relationship between limb-selective activations and visual field maps extends to the dorsal stream where a posterior IPS activation overlaps V7. Overall, our measurements demonstrate a series of LOTC limb-selective activations that 1) have separate anatomical and functional boundaries, 2) overlap distinct visual field maps, and 3) illustrate differential position properties. These findings indicate that category selectivity alone is an insufficient organization principle for defining brain areas. Instead, multiple properties are necessary in order to parcellate and understand the functional organization of high-level visual cortex. PMID:21439386
Botly, Leigh C P; De Rosa, Eve
2012-10-01
The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.
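Search efficiency in such tasks is conventionally summarized as the slope of reaction time against display set size, with flat slopes for feature (pop-out) search and steep slopes for conjunctive search requiring feature binding. The sketch below uses invented reaction times, not the study's data.

```python
import numpy as np

# Hypothetical mean reaction times (s) by display set size.
set_sizes = np.array([4, 8, 16])
rt_feature = np.array([1.10, 1.12, 1.15])       # flat: pop-out search
rt_conjunction = np.array([1.30, 1.65, 2.30])   # steep: serial feature binding

slope_feat, _ = np.polyfit(set_sizes, rt_feature, 1)
slope_conj, _ = np.polyfit(set_sizes, rt_conjunction, 1)
print(f"feature search slope     {slope_feat * 1000:.0f} ms/item")
print(f"conjunction search slope {slope_conj * 1000:.0f} ms/item")
```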
Mollion, Hélène; Dominey, Peter Ford; Broussolle, Emmanuel; Ventre-Dominey, Jocelyne
2011-09-01
Although the treatment of Parkinson's disease via subthalamic stimulation yields remarkable improvements in motor symptoms, its effects on memory function are less clear. In this context, we previously demonstrated dissociable effects of levodopa therapy on parkinsonian performance in spatial and nonspatial visual working memory. Here we used the same protocol with an additional, purely motor task to investigate visual memory and motor performance in 2 groups of patients with Parkinson's disease with or without subthalamic stimulation. In each stimulation condition, subjects performed a simple motor task and 3 successive cognitive tasks: 1 conditional color-response association task and 2 visual (spatial and nonspatial) working memory tasks. The Parkinson's groups were compared with a control group of age-matched healthy subjects. Our principal results demonstrated that (1) in the motor task, stimulated patients were significantly improved with respect to nonstimulated patients and did not differ significantly from healthy controls, and (2) in the cognitive tasks, stimulated patients were significantly improved with respect to nonstimulated patients, but both remained significantly impaired when compared with healthy controls. These results demonstrate selective effects of subthalamic stimulation on parkinsonian disorders of motor and visual memory functions, with clear motor improvement for stimulated patients and a partial improvement for their visual memory processing. Copyright © 2011 Movement Disorder Society.
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS) auditory-visual (AV) and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than those elicited to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
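Co-activation beyond probability summation is usually tested against the race-model bound P_A(t) + P_B(t) on the cumulative reaction-time distributions. The sketch below uses simulated reaction times (not the study's data) to show the comparison.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical cumulative distribution of reaction times evaluated at t."""
    samples = np.sort(samples)
    return np.searchsorted(samples, t, side="right") / len(samples)

def race_model_violation(rt_multi, rt_a, rt_b, t_grid):
    """Positive values indicate multisensory facilitation beyond the
    race-model (probability-summation) bound P_A(t) + P_B(t)."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_b, t_grid), 1.0)
    return ecdf(rt_multi, t_grid) - bound

# Hypothetical reaction times (ms) for visual, somatosensory and combined trials.
rng = np.random.default_rng(3)
rt_v = rng.normal(420, 60, 200)
rt_s = rng.normal(400, 60, 200)
rt_vs = rng.normal(340, 50, 200)
violation = race_model_violation(rt_vs, rt_v, rt_s, np.arange(250, 601, 10))
print("max race-model violation:", violation.max().round(3))
```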
Jacob, Joseph; Bartholmai, Brian J; Brun, Anne Laure; Egashira, Ryoko; Rajagopalan, Srinivasan; Karwoski, Ronald; Kouranos, Vasileios; Kokosi, Maria; Hansell, David M; Wells, Athol U
2017-11-01
To determine whether computer-based quantification (CALIPER software) is superior to visual computed tomography (CT) scoring in the identification of CT patterns indicative of restrictive and obstructive functional indices in hypersensitivity pneumonitis (HP). A total of 135 consecutive HP patients had CT parenchymal patterns evaluated quantitatively by both visual scoring and CALIPER. Results were evaluated against: forced vital capacity (FVC), total lung capacity (TLC), diffusing capacity for carbon monoxide (DLCO) and a composite physiological index (CPI) to identify which CT scoring method better correlated with functional indices. CALIPER-derived scores of total interstitial lung disease extent correlated more strongly than visual scores: FVC (CALIPER R = 0.73, visual R = 0.51); DLCO (CALIPER R = 0.61, visual R = 0.48); and CPI (CALIPER R = 0.70, visual R = 0.55). The CT variable that correlated most strongly with restrictive functional indices was CALIPER pulmonary vessel volume (PVV): FVC R = 0.75, DLCO R = 0.68 and CPI R = 0.76. Ground-glass opacity quantified by CALIPER alone demonstrated strong associations with restrictive functional indices: CALIPER FVC R = 0.65; DLCO R = 0.59; CPI R = 0.64; and visual = not significant. Decreased attenuation lung quantified by CALIPER was a better morphological measure of obstructive lung disease than equivalent visual scores as judged by relationships with TLC (CALIPER R = 0.63 and visual R = 0.12). All results were maintained on multivariate analysis. CALIPER improved on visual scoring in HP as judged by restrictive and obstructive functional correlations. Decreased attenuation regions of the lung quantified by CALIPER demonstrated better linkages to obstructive lung physiology than visually quantified CT scores. A novel CALIPER variable, the PVV, demonstrated the strongest linkages with restrictive functional indices and could represent a new automated index of disease severity in HP. © 2017 Asian Pacific Society of Respirology.
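The comparison between scoring methods rests on correlations of the form reported above. The sketch below uses simulated patient data (the R values in the abstract come from the actual cohort) to show how two competing CT scores would be compared against a functional index such as FVC.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-patient disease-extent scores and FVC (% predicted).
rng = np.random.default_rng(4)
n = 135
caliper_ild = rng.uniform(0, 0.6, n)                        # fraction of lung affected
visual_ild = caliper_ild + rng.normal(0, 0.15, n)           # noisier visual read
fvc = 95 - 60 * caliper_ild + rng.normal(0, 8, n)

for name, score in [("CALIPER", caliper_ild), ("visual", visual_ild)]:
    r, p = pearsonr(score, fvc)
    print(f"{name:8s} vs FVC: R = {r:.2f} (p = {p:.1e})")
```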
NASA Astrophysics Data System (ADS)
Qi, K.; Qingfeng, G.
2017-12-01
With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the complex backgrounds and the presence of multiple land-cover classes or objects in HRS images make the task difficult. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, the convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves competitive classification results with state-of-the-art methods.
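A simplified illustration of the representation follows, using KMeans visual words and a fixed-offset correlogram in place of the paper's adaptive vector quantized correlograms; the features here are random placeholders rather than CNN activations.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Stand-in for dense deep features on a 16 x 16 grid of image patches.
grid_h, grid_w, feat_dim = 16, 16, 64
features = rng.normal(size=(grid_h * grid_w, feat_dim))

# 1) Quantize local features into a visual-word label map.
n_words = 8
words = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit_predict(features)
word_map = words.reshape(grid_h, grid_w)

# 2) Correlogram: how often word pairs co-occur at a given spatial offset d.
def correlogram(word_map, n_words, d):
    counts = np.zeros((n_words, n_words))
    h, w = word_map.shape
    for dy, dx in [(0, d), (d, 0)]:                  # horizontal + vertical offsets
        a = word_map[:h - dy, :w - dx].ravel()
        b = word_map[dy:, dx:].ravel()
        np.add.at(counts, (a, b), 1)
    total = counts.sum()
    return counts / total if total else counts

# Concatenate correlograms at several offsets into one scene descriptor.
descriptor = np.concatenate([correlogram(word_map, n_words, d).ravel()
                             for d in (1, 2, 4)])
print("descriptor length:", descriptor.size)
```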
Attentional blink in young people with high-functioning autism and Asperger's disorder.
Rinehart, Nicole; Tonge, Bruce; Brereton, Avril; Bradshaw, John
2010-01-01
The aim of the study was to examine the temporal characteristics of information processing in individuals with high-functioning autism and Asperger's disorder using a rapid serial visual presentation paradigm. The results clearly showed that such people demonstrate an attentional blink of similar magnitude to comparison groups. This supports the proposition that the social processing difficulties experienced by these individuals are not underpinned by a basic temporal-cognitive processing deficit, which is consistent with Minshew's complex information processing theory. This is the second study to show that automatic inhibitory processes are intact in both autism and Asperger's disorder, which appears to distinguish these disorders from some other frontostriatal disorders. The finding that individuals with autism were generally poorer than the comparison group at detecting black Xs, while being as good in responding to white letters, was accounted for in the context of a potential dual-task processing difficulty or visual search superiority.
Wake visualization of a heaving and pitching foil in a soap film
NASA Astrophysics Data System (ADS)
Muijres, Florian T.; Lentink, David
2007-11-01
Many fish depend primarily on their tail beat for propulsion. Such a tail is commonly modeled as a two-dimensional flapping foil. Here we demonstrate a novel experimental setup of such a foil that heaves and pitches in a soap film. The vortical flow field generated by the foil correlates with thickness variations in the soap film, which appear as interference fringes when the film is illuminated with a monochromatic light source (we used a high-frequency SOX lamp). These interference fringes are subsequently captured with high-speed video (500 Hz) and this allows us to study the unsteady vortical field of a flapping foil. The main advantage of our approach is that the flow fields are time and space resolved and can be obtained time-efficiently. The foil is driven by a flapping mechanism that is optimized for studying both fish swimming and insect flight inside and outside the behavioral envelope. The mechanism generates sinusoidal heave and pitch kinematics, pre-described by the non-dimensional heave amplitude (0-6), the pitch amplitude (0°-90°), the phase difference between pitch and heave (0°-360°), and the dimensionless wavelength of the foil (3-18). We obtained this wide range of wavelengths for a foil 4 mm long by minimizing the soap film speed (0.25 m s-1) and maximizing the flapping frequency range (4-25 Hz). The Reynolds number of the foil is of order 1,000 throughout this range. The resulting setup enables an effective assessment of vortex wake topology as a function of flapping kinematics. The efficiency of the method is further improved by carefully eliminating background noise in the visualization (e.g., reflections of the mechanism). This is done by placing mirrors at an angle behind the translucent film such that the camera views the much more distant and out-of-focus reflections of the black laboratory wall. The resulting high-quality flow visualizations require minimal image processing for flow interpretation. Finally, we demonstrate the effectiveness of our setup by visualizing the vortex dynamics of the flapping foil as a function of pitch amplitude by assessing the symmetry of the vortical wake.
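The prescribed kinematics are sinusoidal in heave and pitch. The sketch below generates such signals for example parameter values within the reported ranges, and assumes the dimensionless wavelength is the film travel per flapping cycle expressed in chord lengths (consistent with, though not stated explicitly in, the abstract).

```python
import numpy as np

def foil_kinematics(t, chord=0.004, f=10.0, heave_star=3.0,
                    pitch_amp_deg=45.0, phase_deg=90.0):
    """Sinusoidal heave (m) and pitch (rad) of the flapping foil.
    heave_star is the non-dimensional heave amplitude in chord lengths."""
    heave = heave_star * chord * np.sin(2 * np.pi * f * t)
    pitch = np.deg2rad(pitch_amp_deg) * np.sin(2 * np.pi * f * t
                                               + np.deg2rad(phase_deg))
    return heave, pitch

# Dimensionless wavelength: distance the soap film travels per flap, in chords.
U, chord, f = 0.25, 0.004, 10.0
print("lambda* =", U / (f * chord))      # 6.25, inside the reported 3-18 range

t = np.linspace(0, 0.2, 500)             # two flapping periods at 10 Hz
h, theta = foil_kinematics(t)
```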
Automated retinofugal visual pathway reconstruction with multi-shell HARDI and FOD-based analysis.
Kammen, Alexandra; Law, Meng; Tjan, Bosco S; Toga, Arthur W; Shi, Yonggang
2016-01-15
Diffusion MRI tractography provides a non-invasive modality to examine the human retinofugal projection, which consists of the optic nerves, optic chiasm, optic tracts, the lateral geniculate nuclei (LGN) and the optic radiations. However, the pathway has several anatomic features that make it particularly challenging to study with tractography, including its location near blood vessels and bone-air interface at the base of the cerebrum, crossing fibers at the chiasm, somewhat-tortuous course around the temporal horn via Meyer's Loop, and multiple closely neighboring fiber bundles. To date, these unique complexities of the visual pathway have impeded the development of a robust and automated reconstruction method using tractography. To overcome these challenges, we develop a novel, fully automated system to reconstruct the retinofugal visual pathway from high-resolution diffusion imaging data. Using multi-shell, high angular resolution diffusion imaging (HARDI) data, we reconstruct precise fiber orientation distributions (FODs) with high order spherical harmonics (SPHARM) to resolve fiber crossings, which allows the tractography algorithm to successfully navigate the complicated anatomy surrounding the retinofugal pathway. We also develop automated algorithms for the identification of ROIs used for fiber bundle reconstruction. In particular, we develop a novel approach to extract the LGN region of interest (ROI) based on intrinsic shape analysis of a fiber bundle computed from a seed region at the optic chiasm to a target at the primary visual cortex. By combining automatically identified ROIs and FOD-based tractography, we obtain a fully automated system to compute the main components of the retinofugal pathway, including the optic tract and the optic radiation. We apply our method to the multi-shell HARDI data of 215 subjects from the Human Connectome Project (HCP). Through comparisons with post-mortem dissection measurements, we demonstrate the retinotopic organization of the optic radiation including a successful reconstruction of Meyer's loop. Then, using the reconstructed optic radiation bundle from the HCP cohort, we construct a probabilistic atlas and demonstrate its consistency with a post-mortem atlas. Finally, we generate a shape-based representation of the optic radiation for morphometry analysis. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Hof, P. R.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)
1995-01-01
Visual function in monkeys is subserved at the cortical level by a large number of areas defined by their specific physiological properties and connectivity patterns. For most of these cortical fields, a precise index of their degree of anatomical specialization has not yet been defined, although many regional patterns have been described using Nissl or myelin stains. In the present study, an attempt has been made to elucidate the regional characteristics, and to varying degrees boundaries, of several visual cortical areas in the macaque monkey using an antibody to neurofilament protein (SMI32). This antibody labels a subset of pyramidal neurons with highly specific regional and laminar distribution patterns in the cerebral cortex. Based on the staining patterns and regional quantitative analysis, as many as 28 cortical fields were reliably identified. Each field had a homogeneous distribution of labeled neurons, except area V1, where increases in layer IVB cell and in Meynert cell counts paralleled the increase in the degree of eccentricity in the visual field representation. Within the occipitotemporal pathway, areas V3 and V4 and fields in the inferior temporal cortex were characterized by a distinct population of neurofilament-rich neurons in layers II-IIIa, whereas areas located in the parietal cortex and part of the occipitoparietal pathway had a consistent population of large labeled neurons in layer Va. The mediotemporal areas MT and MST displayed a distinct population of densely labeled neurons in layer VI. Quantitative analysis of the laminar distribution of the labeled neurons demonstrated that the visual cortical areas could be grouped in four hierarchical levels based on the ratio of neuron counts between infragranular and supragranular layers, with the first (areas V1, V2, V3, and V3A) and third (temporal and parietal regions) levels characterized by low ratios and the second (areas MT, MST, and V4) and fourth (frontal regions) levels characterized by high to very high ratios. Such density trends may correspond to differential representation of corticocortically (and corticosubcortically) projecting neurons at several functional steps in the integration of the visual stimuli. In this context, it is possible that neurofilament protein is crucial for the unique capacity of certain subsets of neurons to perform the highly precise mapping functions of the monkey visual system.
Hummingbirds control hovering flight by stabilizing visual motion.
Goller, Benjamin; Altshuler, Douglas L
2014-12-23
Relatively little is known about how sensory information is used for controlling flight in birds. A powerful method is to immerse an animal in a dynamic virtual reality environment to examine behavioral responses. Here, we investigated the role of vision during free-flight hovering in hummingbirds to determine how optic flow--image movement across the retina--is used to control body position. We filmed hummingbirds hovering in front of a projection screen with the prediction that projecting moving patterns would disrupt hovering stability but stationary patterns would allow the hummingbird to stabilize position. When hovering in the presence of moving gratings and spirals, hummingbirds lost positional stability and responded to the specific orientation of the moving visual stimulus. There was no loss of stability with stationary versions of the same stimulus patterns. When exposed to a single stimulus many times or to a weakened stimulus that combined a moving spiral with a stationary checkerboard, the response to looming motion declined. However, even minimal visual motion was sufficient to cause a loss of positional stability despite prominent stationary features. Collectively, these experiments demonstrate that hummingbirds control hovering position by stabilizing motions in their visual field. The high sensitivity and persistence of this disruptive response is surprising, given that the hummingbird brain is highly specialized for sensory processing and spatial mapping, providing other potential mechanisms for controlling position.
Zariwala, Hatim A.; Madisen, Linda; Ahrens, Kurt F.; Bernard, Amy; Lein, Edward S.; Jones, Allan R.; Zeng, Hongkui
2011-01-01
The putative excitatory and inhibitory cell classes within the mouse primary visual cortex (V1) have different functional properties as studied using microelectrode recordings. Excitatory neurons show high selectivity for the orientation angle of moving gratings, while the putative inhibitory neurons show poor selectivity. However, the selectivity of genetically identified interneurons and their subtypes remains controversial. Here we use novel Cre-driver and reporter mice to identify genetic subpopulations in vivo for two-photon calcium dye imaging: Wfs1(+)/Gad1(−) mice, which label the layer 2/3 excitatory cell population, and Pvalb(+)/Gad1(+) mice, which label a genetic subpopulation of inhibitory neurons. The cells in both mouse lines were identically labeled with a tdTomato protein, visible in vivo, using a Cre-reporter line. We found that the Wfs1(+) cells exhibited visual tuning properties comparable to the excitatory population, i.e., high selectivity and tuning to the angle, direction, and spatial frequency of oriented moving gratings. The functional tuning of Pvalb(+) neurons was consistent with previously reported narrow-spiking interneurons in microelectrode studies, exhibiting poorer selectivity than the excitatory neurons. This study demonstrates the utility of Cre-transgenic mouse technology in selectively targeting subpopulations of neurons, making them amenable to structural, functional, and connectivity studies. PMID:21283555
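A common way to quantify the orientation selectivity described here is an orientation selectivity index (OSI) computed from responses to gratings at different directions. The sketch below uses a standard formulation with made-up response values; it is not necessarily the metric or the data used in the study.

```python
# Sketch: a common orientation selectivity index (OSI) from tuning-curve responses.
# Responses are hypothetical dF/F values; this is a generic formulation,
# not necessarily the metric used in the Cre-line imaging study.
import numpy as np

angles = np.arange(0, 360, 45)                                    # grating directions (deg)
responses = np.array([1.8, 0.6, 0.3, 0.5, 1.6, 0.7, 0.2, 0.4])    # mean dF/F per direction

pref_idx = int(np.argmax(responses))
orth_idx = (pref_idx + 2) % len(angles)          # 90 degrees away from the preferred direction

r_pref, r_orth = responses[pref_idx], responses[orth_idx]
osi = (r_pref - r_orth) / (r_pref + r_orth)      # near 1 = sharply tuned, near 0 = untuned
print(f"preferred direction: {angles[pref_idx]} deg, OSI = {osi:.2f}")
```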
Padovano, Ilaria; Pazzola, Giulia; Pipitone, Nicolò; Cimino, Luca; Salvarani, Carlo
2014-01-01
We report a 62-year-old man with mild fever, headache and acute visual loss in his right eye due to anterior ischaemic optic neuropathy (AION), followed a few days later by pain in the legs and left arm associated with numbness and weakness. Giant cell arteritis complicated by AION was initially suspected and high-dose oral glucocorticoids were started. However, on the basis of the past medical history of nasal polyposis, asthma, and hypereosinophilia, as well as of further investigations (biopsy of the nasal mucosa showing granulomatous inflammation with a rich eosinophilic infiltrate, electromyography demonstrating mononeuritis multiplex, and positive p-ANCA), eosinophilic granulomatosis with polyangiitis (EGPA), previously known as Churg-Strauss syndrome, was diagnosed. Because visual acuity in the right eye deteriorated despite glucocorticoid therapy, pulse intravenous cyclophosphamide was started, subsequently replaced by oral azathioprine, while prednisone was slowly tapered. This treatment led to gradual improvement of the neurological symptoms, whereas the right visual impairment remained unchanged. EGPA-related AION is an uncommon lesion that is probably due to vasculitic involvement of the posterior ciliary and/or chorioretinal arteries. The prognosis of established AION is poor for the affected eye, even when glucocorticoid treatment is started immediately. However, early recognition of AION and prompt aggressive treatment with high-dose glucocorticoids plus cyclophosphamide can prevent visual loss in the unaffected eye.
Reilly, C D; Panday, V; Lazos, V; Mittelstaedt, B R
2010-01-01
The field of refractive surgery continues to evolve amid continued concerns as to which surgical technique minimizes the risk of inducing ectasia. To compare clinical outcomes between PRK, LASEK and Epi-LASIK in moderately to highly myopic eyes (-4.00 D to -8.00 D), a retrospective chart review of 100 PRK eyes, 100 LASEK eyes (with alcohol) and 97 Epi-LASIK eyes was performed. Post-operative pain, uncorrected visual acuity, and corneal haze data were recorded and analyzed at post-op days 1, 4 and 7 and at post-op months 1, 3, 6 and 12. In all groups surgical corrections ranged from -4.00 D to -8.00 D. There was less pain associated with the Epi-LASIK procedure, especially early in the post-operative course (days 1 and 4). Visual recovery was superior in the PRK group during the first post-operative week, but by post-op week 4 all three groups were equivalent. Haze scores were similar, although a trend toward less haze was demonstrated with Epi-LASIK at 6 and 12 months. Epi-LASIK has a slight advantage over PRK and LASEK early in the post-operative course with regard to pain. Visual recovery is better with PRK early and similar across groups by 4 weeks. In addition, Epi-LASIK trends toward less significant haze. © Nepal Ophthalmic Society.
Altered Functional Connectivity of the Primary Visual Cortex in Subjects with Amblyopia
Ding, Kun; Liu, Yong; Yan, Xiaohe; Lin, Xiaoming; Jiang, Tianzi
2013-01-01
Amblyopia, which usually occurs during early childhood and results in poor or blurred vision, is a disorder of the visual system that is characterized by a deficiency in an otherwise physically normal eye or by a deficiency that is out of proportion to the structural or functional abnormalities of the eye. Our previous study demonstrated alterations in the spontaneous activity patterns of some brain regions in individuals with anisometropic amblyopia compared to subjects with normal vision. To date, it remains unknown whether patients with amblyopia show characteristic alterations in the functional connectivity patterns in the visual areas of the brain, particularly the primary visual area. In the present study, we investigated the differences in the functional connectivity of the primary visual area between individuals with amblyopia and normal-sighted subjects using resting-state functional magnetic resonance imaging. Our findings demonstrated that the cerebellum and the inferior parietal lobule showed altered functional connectivity with the primary visual area in individuals with amblyopia, and this finding provides further evidence for the disruption of the dorsal visual pathway in amblyopic subjects. PMID:23844297
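Seed-based functional connectivity of the kind reported here is typically estimated as the correlation between the average resting-state time series of a seed region (e.g., V1) and that of other regions. The sketch below illustrates this with random placeholder time series; it is not the authors' preprocessing pipeline or data.

```python
# Sketch: seed-based functional connectivity as the Pearson correlation between
# a V1 seed time series and other regions' time series.
# Time series here are random placeholders, not preprocessed fMRI data.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200
seed_v1 = rng.standard_normal(n_timepoints)            # mean BOLD signal in the V1 seed
regions = {
    "cerebellum": rng.standard_normal(n_timepoints),
    "inferior_parietal_lobule": rng.standard_normal(n_timepoints),
}

for name, ts in regions.items():
    r = np.corrcoef(seed_v1, ts)[0, 1]                  # functional connectivity estimate
    print(f"V1 <-> {name}: r = {r:.3f}")
```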
Visual Search Elicits the Electrophysiological Marker of Visual Working Memory
Emrich, Stephen M.; Al-Aidroos, Naseem; Pratt, Jay; Ferber, Susanne
2009-01-01
Background: Although limited in capacity, visual working memory (VWM) plays an important role in many aspects of visually-guided behavior. Recent experiments have demonstrated an electrophysiological marker of VWM encoding and maintenance, the contralateral delay activity (CDA), which has been shown in multiple tasks that have both explicit and implicit memory demands. Here, we investigate whether the CDA is evident during visual search, a thoroughly-researched task that is a hallmark of visual attention but has no explicit memory requirements. Methodology/Principal Findings: The results demonstrate that the CDA is present during a lateralized search task, and that it is similar in amplitude to the CDA observed in a change-detection task, but peaks slightly later. The changes in CDA amplitude during search were strongly correlated with VWM capacity, as well as with search efficiency. These results were paralleled by behavioral findings showing a strong correlation between VWM capacity and search efficiency. Conclusions/Significance: We conclude that the activity observed during visual search was generated by the same neural resources that subserve VWM, and that this activity reflects the maintenance of previously searched distractors. PMID:19956663
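VWM capacity in change-detection tasks of this kind is often estimated with Cowan's K (set size times the difference between hit rate and false-alarm rate), which can then be correlated with a search-efficiency measure. The sketch below uses invented behavioral numbers and a hypothetical search-slope measure; it is not the authors' analysis or data.

```python
# Sketch: estimating visual working memory capacity with Cowan's K and
# correlating it with a search-efficiency measure (e.g., search slope).
# All behavioral numbers are invented for illustration, not study data.
import numpy as np

set_size = 4
hit_rates = np.array([0.90, 0.75, 0.85, 0.60, 0.80])
false_alarm_rates = np.array([0.10, 0.20, 0.05, 0.25, 0.15])

k = set_size * (hit_rates - false_alarm_rates)          # Cowan's K per participant

search_slopes_ms_per_item = np.array([22.0, 40.0, 25.0, 55.0, 30.0])
r = np.corrcoef(k, search_slopes_ms_per_item)[0, 1]
print(f"K estimates: {np.round(k, 2)}")
print(f"correlation between capacity and search slope: r = {r:.2f}")
```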
Preclinical fluorescent mouse models of pancreatic cancer
NASA Astrophysics Data System (ADS)
Bouvet, Michael; Hoffman, Robert M.
2007-02-01
Here we describe our cumulative experience with the development and preclinical application of several highly fluorescent, clinically-relevant, metastatic orthotopic mouse models of pancreatic cancer. These models utilize human pancreatic cancer cell lines that have been genetically engineered to selectively express high levels of green fluorescent protein (GFP) or red fluorescent protein (RFP). Fluorescent tumors are established subcutaneously in nude mice, and tumor fragments are then surgically transplanted onto the pancreas. Locoregional tumor growth and distant metastasis of these orthotopic implants occur spontaneously and rapidly throughout the abdomen in a manner consistent with clinical human disease. Highly specific, high-resolution, real-time visualization of tumor growth and metastasis may be achieved in vivo without the need for contrast agents, invasive techniques, or expensive imaging equipment. We have shown a high correlation between fluorescent optical imaging and magnetic resonance imaging in these models. Alternatively, transplantation of RFP-expressing tumor fragments onto the pancreas of GFP-expressing transgenic mice may be used to facilitate visualization of tumor-host interaction between the pancreatic tumor fragments and host-derived stroma and vasculature. Such in vivo models have enabled us to serially visualize and acquire images of the progression of pancreatic cancer in the live animal, and to demonstrate the real-time antitumor and antimetastatic effects of several novel therapeutic strategies on pancreatic malignancy. These fluorescent models are therefore powerful and reliable tools with which to investigate human pancreatic cancer and therapeutic strategies directed against it.
Overview of Human-Centric Space Situational Awareness Science and Technology
2012-09-01
AGI), the developers of Satellite Tool Kit (STK), has provided demonstrations of innovative SSA visualization concepts that take advantage of the...needs inherent with SSA. RH has conducted CTAs and developed work-centered human-computer interfaces, visualizations, and collaboration technologies...all end users. RH's Battlespace Visualization Branch researches methods to exploit the visual channel primarily to improve decision making and
"The Mask Who Wasn't There": Visual Masking Effect with the Perceptual Absence of the Mask
ERIC Educational Resources Information Center
Rey, Amandine Eve; Riou, Benoit; Muller, Dominique; Dabic, Stéphanie; Versace, Rémy
2015-01-01
Does a visual mask need to be perceptually present to disrupt processing? In the present research, we proposed to explore the link between perceptual and memory mechanisms by demonstrating that a typical sensory phenomenon (visual masking) can be replicated at a memory level. Experiment 1 highlighted an interference effect of a visual mask on the…
ERIC Educational Resources Information Center
Wang, Tsui-Ying; Huang, Ho-Chuan; Huang, Hsiu-Shuang
2006-01-01
We propose a computer-assisted cancellation test system (CACTS) to understand the visual attention performance and visual search strategies in school children. The main aim of this paper is to present our design and development of the CACTS and demonstrate some ways in which computer techniques can allow the educator not only to obtain more…
Cross-Modal Retrieval With CNN Visual Features: A New Baseline.
Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng
2017-02-01
Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from a CNN model pretrained on ImageNet with more than one million images from 1000 object categories, and used as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of the CNN visual features, a fine-tuning step is performed on the ImageNet-pretrained model for each target data set using the open-source Caffe CNN library. In addition, we propose a deep semantic matching method to address the cross-modal retrieval problem for samples that are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets clearly demonstrate the superiority of CNN visual features for cross-modal retrieval.
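A minimal sketch of the off-the-shelf feature-extraction step described here, using a torchvision ImageNet-pretrained model as a stand-in for the Caffe model used in the paper; the ResNet-50 backbone and the image path are assumptions for illustration only.

```python
# Sketch: extracting off-the-shelf CNN features from an ImageNet-pretrained model
# as a generic image representation for cross-modal retrieval.
# torchvision/ResNet-50 stand in for the Caffe model used in the paper;
# "example.jpg" is a placeholder path.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()     # drop the classifier, keep the 2048-d pooled feature
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")
with torch.no_grad():
    feature = model(preprocess(image).unsqueeze(0))    # shape: (1, 2048)
print(feature.shape)
```

Image features extracted this way would then be matched against text-side representations, for example via a learned common embedding space along the lines of the deep semantic matching step the abstract describes.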
The visual and functional impacts of astigmatism and its clinical management.
Read, Scott A; Vincent, Stephen J; Collins, Michael J
2014-05-01
To provide a comprehensive overview of research examining the impact of astigmatism on clinical and functional measures of vision, the short and longer term adaptations to astigmatism that occur in the visual system, and the currently available clinical options for the management of patients with astigmatism. The presence of astigmatism can lead to substantial reductions in visual performance in a variety of clinical vision measures and functional visual tasks. Recent evidence demonstrates that astigmatic blur results in short-term adaptations in the visual system that appear to reduce the perceived impact of astigmatism on vision. In the longer term, uncorrected astigmatism in childhood can also significantly impact on visual development, resulting in amblyopia. Astigmatism is also associated with the development of spherical refractive errors. Although the clinical correction of small magnitudes of astigmatism is relatively straightforward, the precise, reliable correction of astigmatism (particularly high astigmatism) can be challenging. A wide variety of refractive corrections are now available for the patient with astigmatism, including spectacle, contact lens and surgical options. Astigmatism is one of the most common refractive errors managed in clinical ophthalmic practice. The significant visual and functional impacts of astigmatism emphasise the importance of its reliable clinical management. With continued improvements in ocular measurement techniques and developments in a range of different refractive correction technologies, the future promises the potential for more precise and comprehensive correction options for astigmatic patients. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.
Starosolski, Zbigniew; Villamizar, Carlos A.; Rendon, David; Paldino, Michael J.; Milewicz, Dianna M.; Ghaghada, Ketan B.; Annapragada, Ananth V.
2015-01-01
Abnormalities in the cerebrovascular system play a central role in many neurologic diseases. The on-going expansion of rodent models of human cerebrovascular diseases and the need to use these models to understand disease progression and treatment has amplified the need for reproducible non-invasive imaging methods for high-resolution visualization of the complete cerebral vasculature. In this study, we present methods for in vivo high-resolution (19 μm isotropic) computed tomography imaging of complete mouse brain vasculature. This technique enabled 3D visualization of large cerebrovascular networks, including the Circle of Willis. Blood vessels as small as 40 μm were clearly delineated. ACTA2 mutations in humans cause cerebrovascular defects, including abnormally straightened arteries and a moyamoya-like arteriopathy characterized by bilateral narrowing of the internal carotid artery and stenosis of many large arteries. In vivo imaging studies performed in a mouse model of Acta2 mutations demonstrated the utility of this method for studying vascular morphometric changes that are practically impossible to identify using current histological methods. Specifically, the technique demonstrated changes in the width of the Circle of Willis, straightening of cerebral arteries and arterial stenoses. We believe the use of imaging methods described here will contribute substantially to the study of rodent cerebrovasculature. PMID:25985192
Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J
2015-07-01
Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.
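Low and high spatial frequency versions of face and house stimuli, as used in designs like this one, are commonly produced by Gaussian low-pass filtering and by subtracting the low-pass image from the original. The sketch below shows that generic approach with a placeholder image and cutoff; it is not the exact filtering used in the study.

```python
# Sketch: making low- and high-spatial-frequency versions of a face/house image
# by Gaussian low-pass filtering; the sigma value and file name are placeholders,
# not the exact cutoffs used in the AN/BDD study.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

image = np.asarray(Image.open("face.png").convert("L"), dtype=float)

low_sf = gaussian_filter(image, sigma=8)     # keeps coarse, configural/holistic content
high_sf = image - low_sf                     # keeps fine detail and edges

Image.fromarray(np.clip(low_sf, 0, 255).astype(np.uint8)).save("face_low_sf.png")
Image.fromarray(np.clip(high_sf + 128, 0, 255).astype(np.uint8)).save("face_high_sf.png")
```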
Dusek, Wolfgang; Pierscionek, Barbara K; McClelland, Julie F
2010-05-25
To describe and compare visual function measures of two groups of school-age children (6-14 years of age) attending a specialist eyecare practice in Austria: one group referred to the practice from educational assessment centres with diagnosed reading and writing difficulties, and the other a clinical age-matched control group. Retrospective clinical data from one group of subjects with reading difficulties (n = 825) and a clinical control group of subjects (n = 328) were examined. Statistical analysis was performed to determine whether any differences existed between visual function measures from each group (refractive error, visual acuity, binocular status, accommodative function, and reading speed and accuracy). One-way ANOVA demonstrated no differences between the two groups in terms of refractive error or the size or direction of heterophoria at distance (p > 0.05). Using predominantly one-way ANOVA and chi-square analyses, subjects in the referred group were statistically more likely to have poorer distance visual acuity, an exophoric deviation at near, a lower amplitude of accommodation, reduced accommodative facility, reduced vergence facility, a reduced near point of convergence, a lower AC/A ratio, and a slower reading speed than those in the clinical control group (p < 0.05). This study highlights the high proportion of visual function anomalies in a group of children with reading difficulties in an Austrian population. It confirms the importance of a full assessment of binocular visual status to detect and remedy these deficits and prevent the visual problems from continuing to impact upon educational development.
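The group comparisons reported here (one-way ANOVA for continuous measures, chi-square for categorical ones) can be run in a few lines. All values in the sketch below are invented placeholders, not the study data.

```python
# Sketch: the kinds of group comparisons reported (one-way ANOVA and chi-square).
# All numbers are invented placeholders, not the Austrian clinical data.
import numpy as np
from scipy import stats

# One-way ANOVA: e.g., amplitude of accommodation (D) in referred vs control group.
referred = np.array([9.5, 10.0, 8.8, 9.2, 10.5])
control = np.array([11.0, 12.2, 11.5, 10.8, 12.0])
f_stat, p_anova = stats.f_oneway(referred, control)

# Chi-square: e.g., counts of exophoria at near vs other findings in each group.
contingency = np.array([[120, 705],   # referred: exophoric, not exophoric
                        [ 25, 303]])  # control:  exophoric, not exophoric
chi2, p_chi2, dof, expected = stats.chi2_contingency(contingency)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Chi-square: chi2 = {chi2:.2f}, p = {p_chi2:.3f}")
```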
Little, Anthony C; DeBruine, Lisa M; Jones, Benedict C
2011-07-07
Evolutionary approaches to human attractiveness have documented several traits that are proposed to be attractive across individuals and cultures, although both cross-individual and cross-cultural variations are also often found. Previous studies show that parasite prevalence and mortality/health are related to cultural variation in preferences for attractive traits. Visual experience of pathogen cues may mediate such variable preferences. Here we showed individuals slideshows of images with cues to low and high pathogen prevalence and measured their visual preferences for face traits. We found that both men and women moderated their preferences for facial masculinity and symmetry according to recent experience of visual cues to environmental pathogens. Change in preferences was seen mainly for opposite-sex faces, with women preferring more masculine and more symmetric male faces and men preferring more feminine and more symmetric female faces after exposure to pathogen cues than when not exposed to such cues. Cues to environmental pathogens had no significant effects on preferences for same-sex faces. These data complement studies of cross-cultural differences in preferences by suggesting a mechanism for variation in mate preferences. Similar visual experience could lead to within-cultural agreement and differing visual experience could lead to cross-cultural variation. Overall, our data demonstrate that preferences can be strategically flexible according to recent visual experience with pathogen cues. Given that cues to pathogens may signal an increase in contagion/mortality risk, it may be adaptive to shift visual preferences in favour of proposed good-gene markers in environments where such cues are more evident.
Patla, Aftab E; Greig, Michael
In the two experiments discussed in this paper, we quantified obstacle avoidance performance carried out open loop (without vision) under different initial visual sampling conditions and compared it to the full-vision condition. The initial visual sampling conditions included: static vision (SV), vision during forward walking for three steps and stopping (FW), vision during forward walking for three steps without stopping (FW-NS), and vision during backward walking for three steps and stopping (BW). In experiment 1, we compared performance during the SV, FW and BW conditions with the full-vision condition, while in the second experiment we compared performance during the FW and FW-NS conditions. The questions we wanted to address were: Is ecologically valid dynamic visual sampling of the environment superior to static visual sampling for an open-loop obstacle avoidance task? What are the reasons for failure in performing an open-loop obstacle avoidance task? The results showed that, irrespective of the initial visual sampling condition, when open-loop control was initiated from a standing posture the success rate was only approximately 50%. The main reason for the high failure rate was not inappropriate limb elevation, but incorrect foot placement before the obstacle. The second experiment showed that it is not the nature of visual sampling per se that influences the success rate, but the fact that the open-loop obstacle avoidance task is initiated from a standing posture. The results of these two experiments clearly demonstrate the importance of on-line visual information for adaptive human locomotion.
Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-03-16
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.
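A small sketch of the kind of geospatial visual exploration described here, using plotly as a generic stand-in for the Pivot-based collection; the CSV name and its columns are assumptions for illustration, not the structure of the WHO file distributed with the paper.

```python
# Sketch: a minimal geospatial view of country-level mortality counts,
# using plotly as a generic stand-in for the Pivot collection described in the paper.
# "who_mortality_subset.csv" and its columns are hypothetical placeholders.
import pandas as pd
import plotly.express as px

df = pd.read_csv("who_mortality_subset.csv")   # expected columns: country, year, deaths

latest = df[df["year"] == df["year"].max()]
fig = px.choropleth(latest,
                    locations="country",
                    locationmode="country names",
                    color="deaths",
                    title="Mortality counts by country (hypothetical subset)")
fig.show()
```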