Bakken, Trygve E; Roddey, J Cooper; Djurovic, Srdjan; Akshoomoff, Natacha; Amaral, David G; Bloss, Cinnamon S; Casey, B J; Chang, Linda; Ernst, Thomas M; Gruen, Jeffrey R; Jernigan, Terry L; Kaufmann, Walter E; Kenet, Tal; Kennedy, David N; Kuperman, Joshua M; Murray, Sarah S; Sowell, Elizabeth R; Rimol, Lars M; Mattingsdal, Morten; Melle, Ingrid; Agartz, Ingrid; Andreassen, Ole A; Schork, Nicholas J; Dale, Anders M; Weiner, Michael; Aisen, Paul; Petersen, Ronald; Jack, Clifford R; Jagust, William; Trojanowki, John Q; Toga, Arthur W; Beckett, Laurel; Green, Robert C; Saykin, Andrew J; Morris, John; Liu, Enchi; Montine, Tom; Gamst, Anthony; Thomas, Ronald G; Donohue, Michael; Walter, Sarah; Gessert, Devon; Sather, Tamie; Harvey, Danielle; Kornak, John; Dale, Anders; Bernstein, Matthew; Felmlee, Joel; Fox, Nick; Thompson, Paul; Schuff, Norbert; Alexander, Gene; DeCarli, Charles; Bandy, Dan; Koeppe, Robert A; Foster, Norm; Reiman, Eric M; Chen, Kewei; Mathis, Chet; Cairns, Nigel J; Taylor-Reinwald, Lisa; Trojanowki, J Q; Shaw, Les; Lee, Virginia M Y; Korecka, Magdalena; Crawford, Karen; Neu, Scott; Foroud, Tatiana M; Potkin, Steven; Shen, Li; Kachaturian, Zaven; Frank, Richard; Snyder, Peter J; Molchan, Susan; Kaye, Jeffrey; Quinn, Joseph; Lind, Betty; Dolen, Sara; Schneider, Lon S; Pawluczyk, Sonia; Spann, Bryan M; Brewer, James; Vanderswag, Helen; Heidebrink, Judith L; Lord, Joanne L; Johnson, Kris; Doody, Rachelle S; Villanueva-Meyer, Javier; Chowdhury, Munir; Stern, Yaakov; Honig, Lawrence S; Bell, Karen L; Morris, John C; Ances, Beau; Carroll, Maria; Leon, Sue; Mintun, Mark A; Schneider, Stacy; Marson, Daniel; Griffith, Randall; Clark, David; Grossman, Hillel; Mitsis, Effie; Romirowsky, Aliza; deToledo-Morrell, Leyla; Shah, Raj C; Duara, Ranjan; Varon, Daniel; Roberts, Peggy; Albert, Marilyn; Onyike, Chiadi; Kielb, Stephanie; Rusinek, Henry; de Leon, Mony J; Glodzik, Lidia; De Santi, Susan; Doraiswamy, P Murali; Petrella, Jeffrey R; Coleman, R Edward; Arnold, Steven E; Karlawish, Jason H; Wolk, David; Smith, Charles D; Jicha, Greg; Hardy, Peter; Lopez, Oscar L; Oakley, MaryAnn; Simpson, Donna M; Porsteinsson, Anton P; Goldstein, Bonnie S; Martin, Kim; Makino, Kelly M; Ismail, M Saleem; Brand, Connie; Mulnard, Ruth A; Thai, Gaby; Mc-Adams-Ortiz, Catherine; Womack, Kyle; Mathews, Dana; Quiceno, Mary; Diaz-Arrastia, Ramon; King, Richard; Weiner, Myron; Martin-Cook, Kristen; DeVous, Michael; Levey, Allan I; Lah, James J; Cellar, Janet S; Burns, Jeffrey M; Anderson, Heather S; Swerdlow, Russell H; Apostolova, Liana; Lu, Po H; Bartzokis, George; Silverman, Daniel H S; Graff-Radford, Neill R; Parfitt, Francine; Johnson, Heather; Farlow, Martin R; Hake, Ann Marie; Matthews, Brandy R; Herring, Scott; van Dyck, Christopher H; Carson, Richard E; MacAvoy, Martha G; Chertkow, Howard; Bergman, Howard; Hosein, Chris; Black, Sandra; Stefanovic, Bojana; Caldwell, Curtis; Ging-Yuek; Hsiung, Robin; Feldman, Howard; Mudge, Benita; Assaly, Michele; Kertesz, Andrew; Rogers, John; Trost, Dick; Bernick, Charles; Munic, Donna; Kerwin, Diana; Mesulam, Marek-Marsel; Lipowski, Kristina; Wu, Chuang-Kuo; Johnson, Nancy; Sadowsky, Carl; Martinez, Walter; Villena, Teresa; Turner, Raymond Scott; Johnson, Kathleen; Reynolds, Brigid; Sperling, Reisa A; Johnson, Keith A; Marshall, Gad; Frey, Meghan; Yesavage, Jerome; Taylor, Joy L; Lane, Barton; Rosen, Allyson; Tinklenberg, Jared; Sabbagh, Marwan; Belden, Christine; Jacobson, Sandra; Kowall, Neil; Killiany, Ronald; Budson, Andrew E; Norbash, Alexander; Johnson, Patricia 
Lynn; Obisesan, Thomas O; Wolday, Saba; Bwayo, Salome K; Lerner, Alan; Hudson, Leon; Ogrocki, Paula; Fletcher, Evan; Carmichael, Owen; Olichney, John; Kittur, Smita; Borrie, Michael; Lee, T-Y; Bartha, Rob; Johnson, Sterling; Asthana, Sanjay; Carlsson, Cynthia M; Potkin, Steven G; Preda, Adrian; Nguyen, Dana; Tariot, Pierre; Fleisher, Adam; Reeder, Stephanie; Bates, Vernice; Capote, Horacio; Rainka, Michelle; Scharre, Douglas W; Kataki, Maria; Zimmerman, Earl A; Celmins, Dzintra; Brown, Alice D; Pearlson, Godfrey D; Blank, Karen; Anderson, Karen; Santulli, Robert B; Schwartz, Eben S; Sink, Kaycee M; Williamson, Jeff D; Garg, Pradeep; Watkins, Franklin; Ott, Brian R; Querfurth, Henry; Tremont, Geoffrey; Salloway, Stephen; Malloy, Paul; Correia, Stephen; Rosen, Howard J; Miller, Bruce L; Mintzer, Jacobo; Longmire, Crystal Flynn; Spicer, Kenneth; Finger, Elizabether; Rachinsky, Irina; Drost, Dick; Jernigan, Terry; McCabe, Connor; Grant, Ellen; Ernst, Thomas; Kuperman, Josh; Chung, Yoon; Murray, Sarah; Bloss, Cinnamon; Darst, Burcu; Pritchett, Lexi; Saito, Ashley; Amaral, David; DiNino, Mishaela; Eyngorina, Bella; Sowell, Elizabeth; Houston, Suzanne; Soderberg, Lindsay; Kaufmann, Walter; van Zijl, Peter; Rizzo-Busack, Hilda; Javid, Mohsin; Mehta, Natasha; Ruberry, Erika; Powers, Alisa; Rosen, Bruce; Gebhard, Nitzah; Manigan, Holly; Frazier, Jean; Kennedy, David; Yakutis, Lauren; Hill, Michael; Gruen, Jeffrey; Bosson-Heenan, Joan; Carlson, Heatherly
2012-03-06
Visual cortical surface area varies two- to threefold between human individuals, is highly heritable, and has been correlated with visual acuity and visual perception. However, it is still largely unknown what specific genetic and environmental factors contribute to normal variation in the area of visual cortex. To identify SNPs associated with the proportional surface area of visual cortex, we performed a genome-wide association study followed by replication in two independent cohorts. We identified one SNP (rs6116869) that replicated in both cohorts and had genome-wide significant association (combined P = 3.2 × 10⁻⁸). Furthermore, a meta-analysis of imputed SNPs in this genomic region identified a more significantly associated SNP (rs238295; P = 6.5 × 10⁻⁹) that was in strong linkage disequilibrium with rs6116869. These SNPs are located within 4 kb of the 5' UTR of GPCPD1, glycerophosphocholine phosphodiesterase GDE1 homolog (Saccharomyces cerevisiae), which, in humans, is more highly expressed in occipital cortex, compared with the remainder of cortex, than 99.9% of genes genome-wide. Based on these findings, we conclude that this common genetic variation contributes to the proportional area of human visual cortex. We suggest that identifying genes that contribute to normal cortical architecture provides a first step to understanding genetic mechanisms that underlie visual perception.
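For context, the genome-wide significance level invoked in this abstract is conventionally obtained by Bonferroni-correcting α = 0.05 for roughly one million independent common variants; a sketch of that standard arithmetic (the 10⁶ figure is the conventional assumption, not stated in the abstract):

    \alpha_{\text{genome-wide}} \approx \frac{0.05}{10^{6}} = 5 \times 10^{-8}
    P_{\text{combined}} = 3.2 \times 10^{-8} < 5 \times 10^{-8}, \qquad P_{\text{rs238295}} = 6.5 \times 10^{-9} < 5 \times 10^{-8}

Both reported P values clear the conventional threshold, which is why the associations are described as genome-wide significant.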
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Neural Mechanisms of Selective Visual Attention.
Moore, Tirin; Zirnsak, Marc
2017-01-03
Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.
1996-04-01
This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
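The abstract does not spell out the two metrics; as a rough sketch of the general construction, assuming grayscale images as NumPy arrays and a crude gradient-based stand-in for the attention model (both assumptions, not the authors' method), a noticeability-style score can weight reconstruction error by the salience of the inpainted result:

    import numpy as np

    def saliency(img):
        # Crude stand-in for a computational attention model: gradient
        # magnitude as a local-contrast proxy. The paper's actual model
        # is not specified in this abstract.
        gy, gx = np.gradient(img.astype(float))
        s = np.hypot(gx, gy)
        return s / (s.sum() + 1e-12)

    def noticeability(original, inpainted):
        # Salience-weighted reconstruction error: errors falling where
        # the inpainted image draws attention are penalized more.
        w = saliency(inpainted)
        err = np.abs(original.astype(float) - inpainted.astype(float))
        return float((w * err).sum())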
The Anatomical and Functional Organization of the Human Visual Pulvinar
Pinsk, Mark A.; Kastner, Sabine
2015-01-01
The pulvinar is the largest nucleus in the primate thalamus and contains extensive, reciprocal connections with visual cortex. Although the anatomical and functional organization of the pulvinar has been extensively studied in old and new world monkeys, little is known about the organization of the human pulvinar. Using high-resolution functional magnetic resonance imaging at 3 T, we identified two visual field maps within the ventral pulvinar, referred to as vPul1 and vPul2. Both maps contain an inversion of contralateral visual space with the upper visual field represented ventrally and the lower visual field represented dorsally. vPul1 and vPul2 border each other at the vertical meridian and share a representation of foveal space with iso-eccentricity lines extending across areal borders. Additional, coarse representations of contralateral visual space were identified within ventral medial and dorsal lateral portions of the pulvinar. Connectivity analyses on functional and diffusion imaging data revealed a strong distinction in thalamocortical connectivity between the dorsal and ventral pulvinar. The two maps in the ventral pulvinar were most strongly connected with early and extrastriate visual areas. Given the shared eccentricity representation and similarity in cortical connectivity, we propose that these two maps form a distinct visual field map cluster and perform related functions. The dorsal pulvinar was most strongly connected with parietal and frontal areas. The functional and anatomical organization observed within the human pulvinar was similar to the organization of the pulvinar in other primate species. SIGNIFICANCE STATEMENT The anatomical organization and basic response properties of the visual pulvinar have been extensively studied in nonhuman primates. Yet, relatively little is known about the functional and anatomical organization of the human pulvinar. Using neuroimaging, we found multiple representations of visual space within the ventral human pulvinar and extensive topographically organized connectivity with visual cortex. This organization is similar to other nonhuman primates and provides additional support that the general organization of the pulvinar is consistent across the primate phylogenetic tree. These results suggest that the human pulvinar, like other primates, is well positioned to regulate corticocortical communication. PMID:26156987
Thiessen, Amber; Brown, Jessica; Beukelman, David; Hux, Karen
2017-09-01
Photographs are a frequently employed tool for the rehabilitation of adults with traumatic brain injury (TBI). Speech-language pathologists (SLPs) working with these individuals must select photos that are easily identifiable and meaningful to their clients. In this investigation, we examined the visual attention response to camera-engaged (i.e., depicted human figure looking toward the camera) and task-engaged (i.e., depicted human figure looking at and touching an object) contextual photographs for a group of adults with TBI and a group of adults without neurological conditions. Eye-tracking technology served to accurately and objectively measure visual fixations. Although differences were hypothesized given the cognitive deficits associated with TBI, study results revealed little difference in the visual fixation patterns of adults with and without TBI. Specifically, both groups of participants tended to fixate rapidly on the depicted human figure and to fixate more on objects with which a human figure was task-engaged than when a human figure was camera-engaged. These results indicate that strategic placement of human figures in a contextual photograph may modify the way in which individuals with TBI visually attend to and interpret photographs. In addition, task engagement appears to have a guiding effect on visual attention that may be of benefit to SLPs hoping to select more effective contextual photographs for their clients with TBI. Finally, the limited differences in visual attention patterns between individuals with TBI and their age- and gender-matched peers without neurological impairments indicate that these two groups find similar photograph regions to be worthy of visual fixation. Readers will gain knowledge regarding the photograph selection process for individuals with TBI. In addition, readers will be able to identify camera- and task-engaged photographs and to explain why task engagement may be a beneficial component of contextual photographs. Copyright © 2017 Elsevier Inc. All rights reserved.
Hoffmann, M B; Kaule, F; Grzeschik, R; Behrens-Baumann, W; Wolynski, B
2011-07-01
Since its initial introduction in the mid-1990s, retinotopic mapping of the human visual cortex, based on functional magnetic resonance imaging (fMRI), has contributed greatly to our understanding of the human visual system. Multiple cortical visual field representations have been demonstrated and thus numerous visual areas identified. The organisation of specific areas has been detailed, and the impact of pathophysiologies of the visual system on the cortical organisation uncovered. These results are based on investigations at a magnetic field strength of 3 Tesla or less. In a field-strength comparison between 3 and 7 Tesla, it was demonstrated that retinotopic mapping benefits from a magnetic field strength of 7 Tesla. Specifically, the visual areas can be mapped with high spatial resolution for a detailed analysis of the visual field maps. Applications of fMRI-based retinotopic mapping in ophthalmological research hold promise to further our understanding of plasticity in the human visual cortex. This is highlighted by pioneering studies in patients with macular dysfunction or misrouted optic nerves. © Georg Thieme Verlag KG Stuttgart · New York.
ERIC Educational Resources Information Center
Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto
2012-01-01
Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more…
Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.
van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole
2008-02-20
Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations preceding visual stimuli modulate visual perception in humans. Subjects had to report whether there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that the parieto-occipital alpha power reflects functional inhibition imposed by higher-level areas, which serves to modulate the gain of the visual stream.
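As an illustration of the core measurement, per-trial prestimulus alpha power can be estimated from a sensor's time course and averaged separately over hits and misses; a minimal sketch assuming 8-12 Hz band limits (the study centers alpha near 10 Hz, but the exact band is an assumption here):

    import numpy as np
    from scipy.signal import welch

    def alpha_power(prestim, fs, band=(8.0, 12.0)):
        # Integrate the power spectral density of the prestimulus
        # window over the alpha band. Band limits are an assumption.
        f, pxx = welch(prestim, fs=fs, nperseg=min(len(prestim), 256))
        m = (f >= band[0]) & (f <= band[1])
        return float(np.trapz(pxx[m], f[m]))

    # Usage: hits and misses are lists of 1-D prestimulus arrays at fs.
    # mean_hit = np.mean([alpha_power(t, fs) for t in hits])
    # mean_miss = np.mean([alpha_power(t, fs) for t in misses])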
Cleared for the visual approach: Human factor problems in air carrier operations
NASA Technical Reports Server (NTRS)
Monan, W. P.
1983-01-01
In the study described herein, a set of 353 ASRS reports of unique aviation occurrences significantly involving visual approaches was examined to identify hazards and pitfalls embedded in the visual approach procedure and to consider operational practices that might help avoid future mishaps. Analysis of the report set identified nine aspects of the visual approach procedure that appeared to be predisposing conditions for inducing or exacerbating the effects of operational errors by flight crew members or controllers. Predisposing conditions, errors, and operational consequences of the errors are discussed. In summary, operational policies that might mitigate the problems are examined.
Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays
NASA Astrophysics Data System (ADS)
Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko
The increasing prevalence of distributed human microtasking (crowdsourcing) has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, who swiftly sifted through incoming crowdsourced data to identify target locations deemed viable archaeological sites.
ERIC Educational Resources Information Center
Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu
2009-01-01
The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…
A simpler primate brain: the visual system of the marmoset monkey
Solomon, Samuel G.; Rosa, Marcello G. P.
2014-01-01
Humans are diurnal primates with high visual acuity at the center of gaze. Although primates share many similarities in the organization of their visual centers with other mammals, and even other species of vertebrates, their visual pathways also show unique features, particularly with respect to the organization of the cerebral cortex. Therefore, in order to understand some aspects of human visual function, we need to study non-human primate brains. Which species is the most appropriate model? Macaque monkeys, the most widely used non-human primates, are not an optimal choice in many practical respects. For example, much of the macaque cerebral cortex is buried within sulci, and is therefore inaccessible to many imaging techniques, and the postnatal development and lifespan of macaques are prohibitively long for many studies of brain maturation, plasticity, and aging. In these and several other respects the marmoset, a small New World monkey, represents a more appropriate choice. Here we review the visual pathways of the marmoset, highlighting recent work that brings these advantages into focus, and identify where additional work needs to be done to link marmoset brain organization to that of macaques and humans. We will argue that the marmoset monkey provides a good subject for studies of a complex visual system, which will likely allow an important bridge linking experiments in animal models to humans. PMID:25152716
Biometric Research in Perception and Neurology Related to the Study of Visual Communication.
ERIC Educational Resources Information Center
Metallinos, Nikos
Contemporary research findings in the fields of perceptual psychology and neurology of the human brain that are directly related to the study of visual communication are reviewed and briefly discussed in this paper. Specifically, the paper identifies those major research findings in visual perception that are relevant to the study of visual…
ERIC Educational Resources Information Center
Williamson, Jack
1995-01-01
Argues that the practice and influence of design history can benefit from new forms of visual and chronological analysis. Identifies and discusses a unique phenomenon, the "historical visual narrative." Examines special instances of this phenomenon in twentieth-century design and visual culture, which are tied to the theme of the…
Image quality metrics for volumetric laser displays
NASA Astrophysics Data System (ADS)
Williams, Rodney D.; Donohoo, Daniel
1991-08-01
This paper addresses the extensions to the image quality metrics and related human factors research that are needed to establish the baseline standards for emerging volume display technologies. The existing and recently developed technologies for multiplanar volume displays are reviewed with an emphasis on basic human visual issues. Human factors image quality metrics and guidelines are needed to firmly establish this technology in the marketplace. The human visual requirements and the display design tradeoffs for these prototype laser-based volume displays are addressed, and several critical image quality issues are identified for further research. The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100) and other international standards (ISO, DIN) can serve as a starting point, but this research base must be extended to provide new image quality metrics for this new technology for volume displays.
Natural Tendency towards Beauty in Humans: Evidence from Binocular Rivalry.
Mo, Ce; Xia, Tiansheng; Qin, Kaixin; Mo, Lei
2016-01-01
Although human preference for beauty is common and compelling in daily life, it remains unknown whether such preference is essentially subserved by social cognitive demands or natural tendency towards beauty encoded in the human mind intrinsically. Here we demonstrate experimentally that humans automatically exhibit preference for visual and moral beauty without explicit cognitive efforts. Using a binocular rivalry paradigm, we identified enhanced gender-independent perceptual dominance for physically attractive persons, and the results suggested universal preference for visual beauty based on perceivable forms. Moreover, we also identified perceptual dominance enhancement for characters associated with virtuous descriptions after controlling for facial attractiveness and vigilance-related attention effects, which suggested a similar implicit preference for moral beauty conveyed in prosocial behaviours. Our findings show that behavioural preference for beauty is driven by an inherent natural tendency towards beauty in humans rather than explicit social cognitive processes.
Human Factors in Streaming Data Analysis: Challenges and Opportunities for Information Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey
State-of-the-art visual analytics models and frameworks mostly assume a static snapshot of the data, while in many cases it is a stream with constant updates and changes. Exploration of streaming data poses unique challenges as machine-level computations and abstractions need to be synchronized with the visual representation of the data and the temporally evolving human insights. In the visual analytics literature, we lack a thorough characterization of streaming data and analysis of the challenges associated with task abstraction, visualization design, and adaptation of the role of human-in-the-loop for exploration of data streams. We aim to fill this gap by conducting a survey of the state-of-the-art in visual analytics of streaming data for systematically describing the contributions and shortcomings of current techniques and analyzing the research gaps that need to be addressed in the future. Our contributions are: i) problem characterization for identifying challenges that are unique to streaming data analysis tasks, ii) a survey and analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space for dynamic data and the role of the human-in-the-loop, and iii) reflections on the design-trade-offs for streaming visual analytics techniques and their practical applicability in real-world application scenarios.
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
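The abstract leaves the precise inference rule unspecified; the following toy sketch, with invented names and a heading-alignment weighting scheme (both hypothetical, not the authors' method), illustrates how bottom-up object saliency might be combined with a top-down cue inferred from the user's spatial behavior:

    import numpy as np

    def most_attended(bottom_up, heading, directions, sigma=0.5):
        # bottom_up: {object_id: stimulus-driven saliency score}
        # heading: 2-D unit vector of the user's current view direction
        # directions: {object_id: 2-D unit vector toward that object}
        scores = {}
        for oid, s in bottom_up.items():
            # Top-down context: objects aligned with the user's heading
            # receive higher weight (hypothetical weighting scheme).
            align = float(np.dot(heading, directions[oid]))
            scores[oid] = s * np.exp((align - 1.0) / sigma)
        return max(scores, key=scores.get)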
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, if an inappropriate visualisation method is employed it could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches for selecting these methods. The resulting design methods are filtered to create a list of visualisation methods only. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised under 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
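The similarity-ratio measure described above lends itself to a compact illustration; the sketch below assumes Pearson correlation as the similarity measure (the paper's exact measure may differ) and operates on one voxel's time courses from two fixation runs and one free-viewing run of the same movie:

    import numpy as np

    def invariance_ratio(fix1, fix2, free):
        # within: reliability across repeated fixation viewings
        # between: similarity of fixation vs. free-viewing responses
        within = np.corrcoef(fix1, fix2)[0, 1]
        between = np.corrcoef(fix1, free)[0, 1]
        # Ratios near 1 suggest an eye-movement-invariant response;
        # much smaller ratios suggest sensitivity to eye movements.
        return between / within

On this reading, early visual areas would yield small ratios (activity differs once eye movements are allowed), while ventral temporal areas would yield ratios near 1.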
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Arendt, Dustin L.; Franklin, Lyndsey R.
Real-world systems change continuously and across domains like traffic monitoring, cyber security, etc., such changes occur within short time scales. This leads to a streaming data problem and produces unique challenges for the human in the loop, as analysts have to ingest and make sense of dynamic patterns in real time. In this paper, our goal is to study how the state-of-the-art in streaming data visualization handles these challenges and reflect on the gaps and opportunities. To this end, we have three contributions: i) problem characterization for identifying domain-specific goals and challenges for handling streaming data, ii) a survey andmore » analysis of the state-of-the-art in streaming data visualization research with a focus on the visualization design space, and iii) reflections on the perceptually motivated design challenges and potential research directions for addressing them.« less
A computational visual saliency model based on statistics and machine learning.
Lin, Ru-Je; Lin, Wei-Song
2014-08-01
Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
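As a sketch of the final combination step, assuming the three property maps are already computed as nonnegative arrays of equal shape, the "simple intersection operation" can be read as an elementwise product of normalized maps (an elementwise minimum is another reading; the product is an assumption here):

    import numpy as np

    def combine(feature_prior, position_prior, feature_distribution):
        # Normalize each property map, then intersect them pixelwise.
        maps = [feature_prior, position_prior, feature_distribution]
        maps = [m / (m.max() + 1e-12) for m in maps]
        s = maps[0] * maps[1] * maps[2]
        return s / (s.max() + 1e-12)

The intersection keeps only locations that all three properties agree are salient, which is what lets the model balance learned human preferences against feature uniqueness.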
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors.
Perceptual evaluation of visual alerts in surveillance videos
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Topkara, Mercan; Pfeiffer, William; Hampapur, Arun
2015-03-01
Visual alerts are commonly used in video monitoring and surveillance systems to mark events, presumably making them more salient to human observers. Surprisingly, the effectiveness of computer-generated alerts in improving human performance has not been widely studied. To address this gap, we have developed a tool for simulating different alert parameters in a realistic visual monitoring situation, and have measured human detection performance under conditions that emulated different set-points in a surveillance algorithm. In the High-Sensitivity condition, the simulated alerts identified 100% of the events with many false alarms. In the Lower-Sensitivity condition, the simulated alerts correctly identified 70% of the targets, with fewer false alarms. In the control condition, no simulated alerts were provided. To explore the effects of learning, subjects performed these tasks in three sessions, on separate days, in a counterbalanced, within subject design. We explore these results within the context of cognitive models of human attention and learning. We found that human observers were more likely to respond to events when marked by a visual alert. Learning played a major role in the two alert conditions. In the first session, observers generated almost twice as many False Alarms as in the No-Alert condition, as the observers responded pre-attentively to the computer-generated false alarms. However, this rate dropped equally dramatically in later sessions, as observers learned to discount the false cues. Highest observer Precision, Hits/(Hits + False Alarms), was achieved in the High Sensitivity condition, but only after training. The successful evaluation of surveillance systems depends on understanding human attention and performance.
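The observer-performance measure quoted above is ordinary precision; with hypothetical counts of 80 hits and 20 false alarms, for example:

    \text{Precision} = \frac{\text{Hits}}{\text{Hits} + \text{False Alarms}} = \frac{80}{80 + 20} = 0.8

This makes concrete why early sessions in the alert conditions show low precision (observers echo the computer's false alarms, inflating the denominator) and why precision rises once observers learn to discount the false cues.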
Pennsylvania Classroom Guide to Safety in the Visual Arts.
ERIC Educational Resources Information Center
Oltman, Debra L.
Exposure to certain art materials can damage the human body. Some of these materials are identified together with factors that influence exposure, including duration, frequency, and environmental conditions. Responsibility for providing a safe working environment for the creation of visual arts in the classroom lies with the instructor, principal,…
NASA Technical Reports Server (NTRS)
Shields, N., Jr.; Piccione, F.; Kirkpatrick, M., III; Malone, T. B.
1982-01-01
The combination of human and machine capabilities into an integrated engineering system, a complex and interactive interdisciplinary undertaking, is discussed. Human-controlled remote systems, referred to as teleoperators, are reviewed. The human factors requirements for remotely manned systems are identified. The data were developed in three principal teleoperator laboratories; the visual, manipulator, and mobility laboratories are described. Three major sections are identified: (1) remote system components, (2) human operator considerations, and (3) teleoperator system simulation and concept verification.
Endogenous modulation of human visual cortex activity improves perception at twilight.
Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A
2018-04-10
Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.
Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A
2013-09-01
Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2014-01-01
Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data with possible future implications on healthcare education. It also opens a new direction in medical education informatics research.
Integrating Spaceflight Human System Risk Research
NASA Technical Reports Server (NTRS)
Mindock, J.; Lumpkins, S.; Anton, W.; Havenhill, M.; Shelhamer, M.; Canga, M.
2016-01-01
NASA is working to increase the likelihood of human health and performance success during exploration missions and of subsequent crew long-term health. To manage the risks in achieving these goals, a system modeled after a Continuous Risk Management framework is in place. "Human System Risks" (Risks) have been identified, and approximately 30 are being actively addressed by NASA's Human Research Program (HRP). Research plans for each of HRP's Risks have been developed and are being executed. Ties between the research efforts supporting each Risk have been identified; however, this has been done in an ad hoc fashion. There is growing recognition that solutions developed to address the full set of Risks covering medical, physiological, behavioral, vehicle, and organizational aspects of the exploration missions must be integrated across Risks and disciplines. We will discuss how a framework of factors influencing human health and performance in space is being applied as the backbone for bringing together sometimes disparate information relevant to the individual Risks. The resulting interrelated information is allowing us to identify and visualize connections between Risks and research efforts in a systematic and standardized way. We will discuss the applications of the visualizations and insights to research planning, solicitation, and decision-making processes.
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
Similarities in human visual and declared measures of preference for opposite-sex faces.
Griffey, Jack A F; Little, Anthony C
2014-01-01
Facial appearance in humans is associated with attraction and mate choice. Numerous studies have identified that adults display directional preferences for certain facial traits including symmetry, averageness, and sexually dimorphic traits. Typically, studies measuring human preference for these traits examine declared (e.g., choice or ratings of attractiveness) or visual preferences (e.g., looking time) of participants. However, the extent to which visual and declared preferences correspond remains relatively untested. In order to evaluate the relationship between these measures we examined visual and declared preferences displayed by men and women for opposite-sex faces manipulated across three dimensions (symmetry, averageness, and masculinity) and compared preferences from each method. Results indicated that participants displayed significant visual and declared preferences for symmetrical, average, and appropriately sexually dimorphic faces. We also found that declared and visual preferences correlated weakly but significantly. These data indicate that visual and declared preferences for manipulated facial stimuli produce similar directional preferences across participants and are also correlated with one another within participants. Both methods therefore may be considered appropriate to measure human preferences. However, while both methods appear likely to generate similar patterns of preference at the sample level, the weak nature of the correlation between visual and declared preferences in our data suggests some caution in assuming visual preferences are the same as declared preferences at the individual level. Because there are positive and negative factors in both methods for measuring preference, we suggest that a combined approach is most useful in outlining population level preferences for traits.
Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J
2009-10-01
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
NASA Astrophysics Data System (ADS)
Myers, Robert Gardner
1997-12-01
The purpose of this study was to determine whether there is a correlation between the cognitive style of field dependence and the type of visual presentation format used in a computer-based tutorial (color; black and white; or line drawings) when subjects are asked to identify human tissue samples. Two hundred four college students enrolled in human anatomy and physiology classes at Westmoreland County Community College participated. They were first administered the Group Embedded Figures Test (GEFT) and then were divided into three groups: field-independent (score 15-18), field-neutral (score 11-14), and field-dependent (score 0-10). Subjects were randomly assigned to one of the three treatment groups. Instruction was delivered by means of a computer-aided tutorial consisting of text and visuals of human tissue samples. The pretest and posttest consisted of 15 tissue samples, five from each treatment, that were imported into the HyperCard™ stack and were played using QuickTime™ movie extensions. A two-way analysis of covariance (ANCOVA) using pretest and posttest scores was used to investigate whether there is a relationship between field dependence and each of the three visual presentation formats. No significant interaction was found between individual subjects' relative degree of field dependence and any of the different visual presentation formats used in the computer-aided tutorial module, F(4,194) = 1.78, p = .1335. There was a significant difference between the students' levels of field dependence in terms of their ability to identify human tissue samples, F(2,194) = 5.83, p = .0035. Field-independent subjects scored significantly higher (M = 10.59) on the posttest than subjects who were field-dependent (M = 9.04). There was also a significant difference among the various visual presentation formats, F(2,194) = 3.78, p = .0245. Subjects assigned to the group that received the color visual presentation format scored significantly higher (M = 10.38) on the posttest measure than did those assigned to the group that received the line drawing visual presentation format (M = 8.99).
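For readers who want to reproduce this kind of analysis, the reported two-way ANCOVA can be sketched in Python with statsmodels. This is a minimal illustration with synthetic data and hypothetical column names, not the study's actual analysis.

```python
# Sketch of a 3 (field dependence) x 3 (format) ANCOVA with pretest score
# as covariate; data, group labels, and effect sizes are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 204
df = pd.DataFrame({
    "field_group": rng.choice(["independent", "neutral", "dependent"], n),
    "format_group": rng.choice(["color", "black_white", "line_drawing"], n),
    "pretest": rng.normal(8, 2, n),
})
df["posttest"] = 0.5 * df["pretest"] + rng.normal(6, 2, n)

model = smf.ols("posttest ~ pretest + C(field_group) * C(format_group)", data=df).fit()
print(anova_lm(model, typ=2))  # F tests for covariate, main effects, interaction
```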
Duncan, Robert O; Sample, Pamela A; Bowd, Christopher; Weinreb, Robert N; Zangwill, Linda M
2012-05-01
Altered metabolic activity has been identified as a potential contributing factor to the neurodegeneration associated with primary open angle glaucoma (POAG). Consequently, we sought to determine whether there is a relationship between the loss of visual function in human glaucoma and resting blood perfusion within primary visual cortex (V1). Arterial spin labeling (ASL) functional magnetic resonance imaging (fMRI) was conducted in 10 participants with POAG. Resting cerebral blood flow (CBF) was measured from dorsal and ventral V1. Behavioral measurements of visual function were obtained using standard automated perimetry (SAP), short-wavelength automated perimetry (SWAP), and frequency-doubling technology perimetry (FDT). Measurements of CBF were compared to differences in visual function for the superior and inferior hemifield. Differences in CBF between ventral and dorsal V1 were correlated with differences in visual function for the superior versus inferior visual field. A statistical bootstrapping analysis indicated that the observed correlations between fMRI responses and measurements of visual function for SAP (r=0.49), SWAP (r=0.63), and FDT (r=0.43) were statistically significant (all p<0.05). Resting blood perfusion in human V1 is correlated with the loss of visual function in POAG. Altered CBF may be a contributing factor to glaucomatous optic neuropathy, or it may be an indication of post-retinal glaucomatous neurodegeneration caused by damage to the retinal ganglion cells. Copyright © 2012 Elsevier Ltd. All rights reserved.
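For illustration, the bootstrap logic behind such a correlation test can be sketched as follows; the data are synthetic placeholders, and the published analysis may have differed in detail.

```python
# Bootstrap a confidence interval for a hemifield-difference correlation by
# resampling subjects with replacement (synthetic stand-in data, n = 10).
import numpy as np

rng = np.random.default_rng(0)
cbf_diff = rng.normal(size=10)                                # ventral-dorsal CBF differences
field_diff = 0.6 * cbf_diff + rng.normal(scale=0.8, size=10)  # superior-inferior perimetry differences

r_obs = np.corrcoef(cbf_diff, field_diff)[0, 1]
boot = []
for _ in range(10_000):
    idx = rng.integers(0, len(cbf_diff), len(cbf_diff))
    boot.append(np.corrcoef(cbf_diff[idx], field_diff[idx])[0, 1])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"r = {r_obs:.2f}, 95% bootstrap CI = [{lo:.2f}, {hi:.2f}]")
```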
The Relationship Between Human Nucleolar Organizer Regions and Nucleoli, Probed by 3D-ImmunoFISH.
van Sluis, Marjolein; van Vuuren, Chelly; McStay, Brian
2016-01-01
3D-immunoFISH is a valuable technique to compare the localization of DNA sequences and proteins in cells where three-dimensional structure has been preserved. As nucleoli contain a multitude of protein factors dedicated to ribosome biogenesis and form around specific chromosomal loci, 3D-immunoFISH is a particularly relevant technique for their study. In human cells, nucleoli form around transcriptionally active ribosomal gene (rDNA) arrays termed nucleolar organizer regions (NORs) positioned on the p-arms of each of the acrocentric chromosomes. Here, we provide a protocol for fixing and permeabilizing human cells grown on microscope slides such that nucleolar proteins can be visualized using antibodies and NORs visualized by DNA FISH. Antibodies against UBF recognize transcriptionally active rDNA/NORs and NOP52 antibodies provide a convenient way of visualizing the nucleolar volume. We describe a probe designed to visualize rDNA and introduce a probe comprised of NOR distal sequences, which can be used to identify or count individual NORs.
An Automated Classification Technique for Detecting Defects in Battery Cells
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth
2006-01-01
Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine if the battery cell is acceptable for a particular use or device. Human visual inspection is a time-consuming task when compared to an inspection process conducted by a machine vision system. Human inspection is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or cathode cell view as well as classified as an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
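The paper's adaptive two-dimensional FFT classifier is not reproduced here, but the general idea of frequency-domain defect screening can be sketched as below; the feature choice and threshold are illustrative assumptions.

```python
# Generic frequency-domain defect screening: compare a patch's radially
# averaged power spectrum against a reference profile from good cells.
import numpy as np

def spectral_profile(patch: np.ndarray) -> np.ndarray:
    """Radially averaged log power spectrum of a 2-D image patch."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    return np.log1p(radial[: min(power.shape) // 2])

def is_defective(patch, reference_profile, threshold=5.0):
    # Large deviation from the reference spectrum flags a candidate defect.
    return np.linalg.norm(spectral_profile(patch) - reference_profile) > threshold

rng = np.random.default_rng(0)
reference = spectral_profile(rng.random((64, 64)))
print(is_defective(rng.random((64, 64)), reference))
```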
The visual analysis of emotional actions.
Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie
2006-01-01
Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.
Parietal and superior frontal visuospatial maps activated by pointing and saccades
Hagler, D.J.; Riecke, L.; Sereno, M.I.
2009-01-01
A recent study from our laboratory demonstrated that parietal cortex contains a map of visual space related to saccades and spatial attention and identified this area as the likely human homologue of the lateral intraparietal area (LIP). A human homologue of the parietal reach region (PRR), thought to preferentially encode planned hand movements, has also been recently proposed. Both of these areas, originally identified in the macaque monkey, have been shown to encode space in eye-centered coordinates. Functional magnetic resonance imaging (fMRI) of humans was used to test the hypothesis that the putative human PRR contains a retinotopic map recruited by finger pointing but not saccades, and to test more generally for differences in the visuospatial maps recruited by pointing and saccades. We identified multiple maps in both posterior parietal cortex and superior frontal cortex recruited for eye and hand movements, including maps not observed in previous mapping studies. Pointing and saccade maps were generally consistent within single subjects. We have developed new group analysis methods for phase-encoded data, which revealed subtle differences between pointing and saccades, including hemispheric asymmetries, but we did not find evidence of pointing-specific maps of visual space. PMID:17376706
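The core computation behind such phase-encoded mapping can be sketched in a few lines: each voxel's preferred visual-field position is read out as the phase of its response at the stimulus cycling frequency. The run length and cycle count below are hypothetical.

```python
# Phase-encoded ("traveling wave") readout: phase at the stimulus frequency.
import numpy as np

def voxel_phase(timeseries: np.ndarray, n_cycles: int) -> float:
    """Phase (radians) of the Fourier component at the stimulus frequency,
    for a run containing n_cycles stimulus cycles."""
    spectrum = np.fft.rfft(timeseries - timeseries.mean())
    return float(np.angle(spectrum[n_cycles]))

t = np.arange(240)  # e.g., 240 volumes spanning 8 stimulus cycles (hypothetical)
ts = np.cos(2 * np.pi * 8 * t / t.size + 1.0) + np.random.default_rng(0).normal(size=t.size)
print(voxel_phase(ts, n_cycles=8))  # recovers the injected phase, ~1.0 rad
```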
Integrating Spaceflight Human System Risk Research
NASA Technical Reports Server (NTRS)
Mindock, Jennifer; Lumpkins, Sarah; Anton, Wilma; Havenhill, Maria; Shelhamer, Mark; Canga, Michael
2016-01-01
NASA is working to increase the likelihood of human health and performance success during exploration missions as well as to maintain the subsequent long-term health of the crew. To manage the risks in achieving these goals, a system modelled after a Continuous Risk Management framework is in place. "Human System Risks" (Risks) have been identified, and approximately 30 are being actively addressed by NASA's Human Research Program (HRP). Research plans for each of HRP's Risks have been developed and are being executed. Inter-disciplinary ties between the research efforts supporting each Risk have been identified; however, efforts to identify and benefit from these connections have been mostly ad hoc. There is growing recognition that solutions developed to address the full set of Risks covering medical, physiological, behavioural, vehicle, and organizational aspects of exploration missions must be integrated across Risks and disciplines. This paper discusses how a framework of factors influencing human health and performance in space is being applied as the backbone for bringing together sometimes disparate information relevant to the individual Risks. The resulting interrelated information enables identification and visualization of connections between Risks and research efforts in a systematic and standardized manner. This paper also discusses the applications of the visualizations and insights into research planning, solicitation, and decision-making processes.
Learning and Recognition of a Non-conscious Sequence of Events in Human Primary Visual Cortex.
Rosenthal, Clive R; Andrews, Samantha K; Antoniades, Chrystalina A; Kennard, Christopher; Soto, David
2016-03-21
Human primary visual cortex (V1) has long been associated with learning simple low-level visual discriminations [1] and is classically considered outside of neural systems that support high-level cognitive behavior in contexts that differ from the original conditions of learning, such as recognition memory [2, 3]. Here, we used a novel fMRI-based dichoptic masking protocol, designed to induce activity in V1 without modulation from visual awareness, to test whether human V1 is implicated in human observers rapidly learning and then later (15-20 min) recognizing a non-conscious and complex (second-order) visuospatial sequence. Learning was associated with a change in V1 activity, as part of a temporo-occipital and basal ganglia network, which is at variance with the cortico-cerebellar network identified in prior studies of "implicit" sequence learning that involved motor responses and visible stimuli (e.g., [4]). Recognition memory was associated with V1 activity, as part of a temporo-occipital network involving the hippocampus, under conditions that were not imputable to mechanisms associated with conscious retrieval. Notably, the V1 responses during learning and recognition separately predicted non-conscious recognition memory, and functional coupling between V1 and the hippocampus was enhanced for old retrieval cues. The results provide a basis for novel hypotheses about the signals that can drive recognition memory, because these data (1) identify human V1 with a memory network that can code complex associative serial visuospatial information and support later non-conscious recognition memory-guided behavior (cf. [5]) and (2) align with mouse models of experience-dependent V1 plasticity in learning and memory [6]. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aesthetic Response and Cosmic Aesthetic Distance
NASA Astrophysics Data System (ADS)
Madacsi, D.
2013-04-01
For Homo sapiens, the experience of a primal aesthetic response to nature was perhaps a necessary precursor to the arousal of an artistic impulse. Among the likely visual candidates for primal initiators of aesthetic response, arguments can be made in favor of the flower, the human face and form, and the sky and light itself as primordial aesthetic stimulants. Although visual perception of the sensory world of flowers and human faces and forms is mediated by light, it was most certainly in the sky that humans first could respond to the beauty of light per se. It is clear that as a species we do not yet identify and comprehend as nature, or part of nature, the entire universe beyond our terrestrial environs, the universe from which we remain inexorably separated by space and time. However, we now enjoy a technologically enabled opportunity to probe the ultimate limits of visual aesthetic distance and the origins of human aesthetic response as we remotely explore deep space via the Hubble Space Telescope and its successors.
Temporal Progression of Visual Injury from Blast Exposure
2017-09-01
seen throughout the duration of the study. To correlate experimental blast exposures in rodents to human blast exposures, a computational parametric... software (JMP 10.0, Cary, NC). Descriptive and univariate analyses will first be performed to identify the occurrence of delayed visual system... later). The biostatistician evaluating the retrospective data has completed the descriptive analysis and is working on the multiple regression.
A Simplified Method of Identifying the Trained Retinal Locus for Training in Eccentric Viewing
ERIC Educational Resources Information Center
Vukicevic, Meri; Le, Anh; Baglin, James
2012-01-01
In the typical human visual system, the macula allows for high visual resolution. Damage to this area from diseases, such as age-related macular degeneration (AMD), causes the loss of central vision in the form of a central scotoma. Since no treatment is available to reverse AMD, providing low vision rehabilitation to compensate for the loss of…
ViA: a perceptual visualization assistant
NASA Astrophysics Data System (ADS)
Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.
2000-05-01
This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically-generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
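ViA's evaluation engines are not reproduced in this record, but the basic search idea, scoring candidate data-attribute-to-visual-feature mappings and keeping the best, can be illustrated with an invented weight table:

```python
# Toy mapping search; the suitability weights are invented, standing in for
# ViA's psychophysically derived evaluation weights.
import itertools

FEATURES = ["hue", "luminance", "size", "orientation"]
SALIENCE = {  # suitability of each visual feature for each data attribute
    "temperature": {"hue": 0.9, "luminance": 0.7, "size": 0.4, "orientation": 0.2},
    "pressure":    {"hue": 0.5, "luminance": 0.8, "size": 0.6, "orientation": 0.3},
    "wind_dir":    {"hue": 0.1, "luminance": 0.2, "size": 0.3, "orientation": 0.9},
}

def best_mapping(attributes):
    """Exhaustively score every attribute -> feature assignment."""
    best, best_score = None, float("-inf")
    for perm in itertools.permutations(FEATURES, len(attributes)):
        mapping = dict(zip(attributes, perm))
        score = sum(SALIENCE[a][f] for a, f in mapping.items())
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

print(best_mapping(["temperature", "pressure", "wind_dir"]))
```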
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
Morvan, Camille; Maloney, Laurence T.
2012-01-01
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428
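The strategy switch at a critical separation can be made concrete with a small numerical sketch; the Gaussian visibility falloff below is an assumption, not the paper's fitted model.

```python
# Choose the fixation that maximizes post-saccadic identification probability
# for two possible target locations; visibility falls off with eccentricity.
import numpy as np

def p_identify(fixation, targets, sigma=2.0):
    d = lambda e: np.exp(-e**2 / (2 * sigma**2))  # assumed visibility function
    return np.mean([d(abs(fixation - t)) for t in targets])

for sep in (2.0, 8.0):                # token separation (deg)
    targets = (-sep / 2, sep / 2)
    xs = np.linspace(-sep, sep, 401)
    best = xs[np.argmax([p_identify(x, targets) for x in xs])]
    print(f"separation {sep}: optimal fixation at {best:+.2f} deg")
# Small separation: fixate midway between tokens; large separation: fixate one
# token. Human observers in the study did not show this abrupt switch.
```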
Atoms of recognition in human and computer vision.
Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel
2016-03-08
Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.
Pinto, Joshua G. A.; Jones, David G.; Williams, C. Kate; Murphy, Kathryn M.
2015-01-01
Although many potential neuroplasticity-based therapies have been developed in the lab, few have translated into established clinical treatments for human neurologic or neuropsychiatric diseases. Animal models, especially of the visual system, have shaped our understanding of neuroplasticity by characterizing the mechanisms that promote neural changes and defining the timing of the sensitive period. The lack of knowledge about development of synaptic plasticity mechanisms in human cortex, and about alignment of synaptic age between animals and humans, has limited translation of neuroplasticity therapies. In this study, we quantified expression of a set of highly conserved pre- and post-synaptic proteins (Synapsin, Synaptophysin, PSD-95, Gephyrin) and found that synaptic development in human primary visual cortex (V1) continues into late childhood. Indeed, this is many years longer than suggested by neuroanatomical studies and points to a prolonged sensitive period for plasticity in human sensory cortex. In addition, during childhood we found waves of inter-individual variability that are different for the four proteins and include a stage during early development (<1 year) when only Gephyrin has high inter-individual variability. We also found that pre- and post-synaptic protein balances develop quickly, suggesting that maturation of certain synaptic functions happens within the first year or two of life. A multidimensional analysis (principal component analysis) showed that most of the variance was captured by the sum of the four synaptic proteins. We used that sum to compare development of human and rat visual cortex and identified a simple linear equation that provides robust alignment of synaptic age between humans and rats. Alignment of synaptic ages is important for age-appropriate targeting and effective translation of neuroplasticity therapies from the lab to the clinic. PMID:25729353
NASA Astrophysics Data System (ADS)
Zhao, Yiqun; Wang, Zhihui
2015-12-01
The Internet of things (IOT) is a kind of intelligent network that can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual Internet of things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. Firstly, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images, and the results show that the proposed algorithm performs well and can display the visual tags of objects conveniently.
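A minimal sketch of the described PCA-plus-SVM pipeline follows, using scikit-learn's copy of the ORL (Olivetti/AT&T) face database; the component count and SVM parameters are illustrative, not taken from the paper.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()  # the ORL/AT&T face database (40 subjects)
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0
)
clf = make_pipeline(PCA(n_components=100, whiten=True), SVC(kernel="rbf", C=10))
clf.fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.2f}")
# The predicted identity would then index the stored visual tag for that face.
```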
Boy with cortical visual impairment and unilateral hemiparesis in Jeff Huntington's "Slip" (2011).
Bianucci, R; Perciaccante, A; Appenzeller, O
2016-11-15
Face perception is an important part of identifying the health qualities of a person and is integral to so-called spot diagnosis in clinical neurology. Neurology depends in part on observation, description and interpretation of visual information. Similar skills are required in visual art. Here we report a case of cortical visual impairment (CVI) and unilateral facial weakness in a boy depicted by the painter Jeff Huntington in "Slip" (2011). The corollary is that art can serve clinical teaching. Art interpretation helps neurology students to apply the same skills they will use in clinical practice and to develop their observational and interpretive skills in non-clinical settings. Furthermore, the development of an increased awareness of emotional and character expression in the human face may facilitate successful doctor-patient relationships. Copyright © 2016 Elsevier B.V. All rights reserved.
Generic decoding of seen and imagined objects using hierarchical visual features.
Horikawa, Tomoyasu; Kamitani, Yukiyasu
2017-05-22
Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
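The two-stage logic, predicting visual features from brain activity and then identifying the category whose feature vector best matches, can be sketched with synthetic data; the dimensions and the Ridge decoder below are assumptions for illustration only.

```python
# Stage 1: linear decoder from "voxels" to features. Stage 2: identify the
# category whose mean feature vector correlates best with the prediction.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_trials, n_voxels, n_feat, n_cats = 200, 500, 64, 10
cat_features = rng.normal(size=(n_cats, n_feat))       # per-category features
labels = rng.integers(0, n_cats, n_trials)
features = cat_features[labels] + 0.3 * rng.normal(size=(n_trials, n_feat))
voxels = features @ rng.normal(size=(n_feat, n_voxels)) + rng.normal(size=(n_trials, n_voxels))

decoder = Ridge(alpha=10.0).fit(voxels[:150], features[:150])
pred = decoder.predict(voxels[150:])

corr = np.corrcoef(pred, cat_features)[: len(pred), len(pred):]  # trials x categories
acc = np.mean(np.argmax(corr, axis=1) == labels[150:])
print(f"identification accuracy: {acc:.2f} (chance = {1 / n_cats:.2f})")
```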
Camouflage and visual perception
Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt
2008-01-01
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671
Concepts to Support HRP Integration Using Publications and Modeling
NASA Technical Reports Server (NTRS)
Mindock, J.; Lumpkins, S.; Shelhamer, M.
2014-01-01
Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. We will also discuss ideas for building upon this proof-of-concept work to further support an integrated approach to human spaceflight risk reduction.
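As a toy illustration of the tag-based network construction described, factors tagged on the same publication can be linked with co-occurrence-weighted edges; the tags below are invented placeholders, not HRP data.

```python
# Build a factor co-occurrence network from per-publication tag sets.
import itertools
import networkx as nx

publications = {  # hypothetical tag data
    "pub1": {"sleep", "workload", "performance"},
    "pub2": {"radiation", "performance"},
    "pub3": {"sleep", "radiation"},
}

G = nx.Graph()
for tags in publications.values():
    for a, b in itertools.combinations(sorted(tags), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1   # edge weight = co-occurrence count
        else:
            G.add_edge(a, b, weight=1)

print(sorted(G.edges(data="weight")))
```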
Resolving human object recognition in space and time
Cichy, Radoslaw Martin; Pantazis, Dimitrios; Oliva, Aude
2014-01-01
A comprehensive picture of object processing in the human brain requires combining both spatial and temporal information about brain activity. Here, we acquired human magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) responses to 92 object images. Multivariate pattern classification applied to MEG revealed the time course of object processing: whereas individual images were discriminated by visual representations early, ordinate and superordinate category levels emerged relatively later. Using representational similarity analysis, we combine human fMRI and MEG to show content-specific correspondence between early MEG responses and primary visual cortex (V1), and later MEG responses and inferior temporal (IT) cortex. We identified transient and persistent neural activities during object processing, with sources in V1 and IT. Finally, human MEG signals were correlated to single-unit responses in monkey IT. Together, our findings provide an integrated space- and time-resolved view of human object categorization during the first few hundred milliseconds of vision. PMID:24464044
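The time-resolved MEG decoding step can be sketched as a classifier applied independently at each time point; everything below (trial counts, sensor count, the logistic-regression classifier) is a synthetic stand-in for the study's analysis.

```python
# Time-resolved pattern classification: cross-validated decoding accuracy
# computed at every time sample of synthetic MEG-like data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 60, 306, 120
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :10, 40:] += 0.5  # condition difference emerging at sample 40

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"peak accuracy {accuracy.max():.2f} at sample {int(accuracy.argmax())}")
```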
NASA Astrophysics Data System (ADS)
Chu, Zhongdi; Chen, Chieh-Li; Zhang, Qinqin; Pepple, Kathryn; Durbin, Mary; Gregori, Giovanni; Wang, Ruikang K.
2017-12-01
The choriocapillaris (CC) plays an essential role in maintaining the normal functions of the human eye. There is increasing interest in the community to develop an imaging technique for visualizing the CC, yet this remains underexplored due to technical limitations. We propose an approach for the visualization of the CC in humans via a complex signal-based optical microangiography (OMAG) algorithm, based on commercially available spectral domain optical coherence tomography (SD-OCT). We show that the complex signal-based OMAG was superior to both the phase and amplitude signal-based approaches in detailing the vascular lobules previously seen with histological analysis. With this improved ability to visualize the lobular vascular networks, it is possible to identify the feeding arterioles and draining venules around the lobules, which is important in understanding the role of the CC in the pathogenesis of ocular diseases. With built-in FastTrac™ and montage scanning capabilities, we also demonstrate wide-field SD-OCT angiograms of the CC with a field of view of 9×11 mm².
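The heart of a complex-signal OMAG computation can be sketched in a few lines: differencing the complex OCT signals of repeated B-scans suppresses static tissue and leaves the flow contrast. Array shapes and repeat counts here are hypothetical.

```python
# Conceptual complex-signal OMAG: mean magnitude of complex differences
# between repeated B-scans at the same position.
import numpy as np

def omag_flow(bscans: np.ndarray) -> np.ndarray:
    """bscans: complex array of shape (n_repeats, depth, lateral)."""
    diffs = np.diff(bscans, axis=0)        # static tissue cancels here
    return np.mean(np.abs(diffs), axis=0)  # residual magnitude = flow contrast

rng = np.random.default_rng(0)
scans = rng.normal(size=(4, 64, 128)) + 1j * rng.normal(size=(4, 64, 128))
print(omag_flow(scans).shape)  # (64, 128) cross-sectional flow image
```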
A multi-pathway hypothesis for human visual fear signaling
Silverstein, David N.; Ingvar, Martin
2015-01-01
A hypothesis is proposed for five visual fear signaling pathways in humans, based on an analysis of anatomical connectivity from primate studies and human functional connectivity and tractography from brain imaging studies. Earlier work has identified possible subcortical and cortical fear pathways known as the “low road” and “high road,” which arrive at the amygdala independently. In addition to a subcortical pathway, we propose four cortical signaling pathways in humans along the visual ventral stream. All four of these traverse through the LGN to the visual cortex (VC) and branching off at the inferior temporal area, with one projection directly to the amygdala; another traversing the orbitofrontal cortex; and two others passing through the parietal and then prefrontal cortex, one excitatory pathway via the ventral-medial area and one regulatory pathway via the ventral-lateral area. These pathways have progressively longer propagation latencies and may have progressively evolved with brain development to take advantage of higher-level processing. Using the anatomical path lengths and latency estimates for each of these five pathways, predictions are made for the relative processing times at selective ROIs and arrival at the amygdala, based on the presentation of a fear-relevant visual stimulus. Partial verification of the temporal dynamics of this hypothesis might be accomplished using experimental MEG analysis. Possible experimental protocols are suggested. PMID:26379513
Integrating spaceflight human system risk research
NASA Astrophysics Data System (ADS)
Mindock, Jennifer; Lumpkins, Sarah; Anton, Wilma; Havenhill, Maria; Shelhamer, Mark; Canga, Michael
2017-10-01
NASA is working to increase the likelihood of exploration mission success and to maintain crew health, both during exploration missions and long term after return to Earth. To manage the risks in achieving these goals, a system modelled after a Continuous Risk Management framework is in place. "Human System Risks" (Risks) have been identified, and 32 are currently being actively addressed by NASA's Human Research Program (HRP). Research plans for each of HRP's Risks have been developed and are being executed. Inter-disciplinary ties between the research efforts supporting each Risk have been identified; however, efforts to identify and benefit from these connections have been mostly ad hoc. There is growing recognition that solutions developed to address the full set of Risks covering medical, physiological, behavioural, vehicle, and organizational aspects of exploration missions must be integrated across Risks and disciplines. This paper discusses how a framework of factors influencing human health and performance in space is being applied as the backbone for bringing together sometimes disparate information relevant to the individual Risks. The resulting interrelated information enables identification and visualization of connections between Risks and research efforts in a systematic and standardized manner. This paper also discusses the applications of the visualizations and insights into research planning, solicitation, and decision-making processes.
Multiphoton gradient index endoscopy for evaluation of diseased human prostatic tissue ex vivo
NASA Astrophysics Data System (ADS)
Huland, David M.; Jain, Manu; Ouzounov, Dimitre G.; Robinson, Brian D.; Harya, Diana S.; Shevchuk, Maria M.; Singhal, Paras; Xu, Chris; Tewari, Ashutosh K.
2014-11-01
Multiphoton microscopy can instantly visualize cellular details in unstained tissues. Multiphoton probes with clinical potential have been developed. This study evaluates the suitability of multiphoton gradient index (GRIN) endoscopy as a diagnostic tool for prostatic tissue. A portable and compact multiphoton endoscope based on a 1-mm diameter, 8-cm length GRIN lens system probe was used. Fresh ex vivo samples were obtained from 14 radical prostatectomy patients and benign and malignant areas were imaged and correlated with subsequent H&E sections. Multiphoton GRIN endoscopy images of unfixed and unprocessed prostate tissue at a subcellular resolution are presented. We note several differences and identifying features of benign versus low-grade versus high-grade tumors and are able to identify periprostatic tissues such as adipocytes, periprostatic nerves, and blood vessels. Multiphoton GRIN endoscopy can be used to identify both benign and malignant lesions in ex vivo human prostate tissue and may be a valuable diagnostic tool for real-time visualization of suspicious areas of the prostate.
Alfred Walter Campbell and the visual functions of the occipital cortex.
Macmillan, Malcolm
2014-07-01
In his pioneering cytoarchitectonic studies of the human brain, Alfred Walter Campbell identified two structurally different areas in the occipital lobes and assigned two different kinds of visual functions to them. The first area, the visuosensory, was essentially on the mesial surface of the calcarine fissure. It was the terminus of nervous impulses generated in the retina and was where simple visual sensations arose. The second area, the visuopsychic, which surrounded or invested the first, was where sensations were interpreted and elaborated into visual perceptions. I argue that Campbell's distinction between the two areas was the starting point for the eventual differentiation of areas V1-V5. After a brief outline of Campbell's early life and education in Australia and of his Scottish medical education and early work as a pathologist at the Lancashire County Lunatic Asylum at Rainhill near Liverpool, I summarise his work on the human brain. In describing the structures he identified in the occipital lobes, I analyse the similarities and differences between them and the related structures identified by Joseph Shaw Bolton. I conclude by proposing some reasons for how that work came to be overshadowed by the later studies of Brodmann and for the more general lack of recognition given Campbell and his work. Those reasons include the effect of the controversies precipitated by Campbell's alliance with Charles Sherrington over the functions of the sensory and motor cortices. Copyright © 2012 Elsevier Ltd. All rights reserved.
The use of visual cues in gravity judgements on parabolic motion.
Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan
2018-06-21
Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice (2IFC) design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities, and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
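How a Weber fraction is obtained from such 2IFC data can be sketched by fitting a cumulative Gaussian psychometric function; the response proportions below are invented for illustration, not the study's data.

```python
# Fit a cumulative Gaussian to "comparison judged higher" proportions and
# express the discrimination threshold as a Weber fraction.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

standard = 9.81                                            # m/s^2
gravities = np.array([7.5, 8.5, 9.5, 10.5, 11.5, 12.5])    # comparison values
p_higher = np.array([0.08, 0.22, 0.45, 0.62, 0.80, 0.95])  # invented data

psi = lambda g, mu, sigma: norm.cdf(g, loc=mu, scale=sigma)
(mu, sigma), _ = curve_fit(psi, gravities, p_higher, p0=[standard, 2.0])
print(f"PSE = {mu:.2f} m/s^2, sigma = {sigma:.2f}, Weber fraction ~ {sigma / standard:.0%}")
```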
Functional neuroanatomy of visual masking deficits in schizophrenia.
Green, Michael F; Lee, Junghee; Cohen, Mark S; Engel, Steven A; Korb, Alexander S; Nuechterlein, Keith H; Wynn, Jonathan K; Glahn, David C
2009-12-01
Visual masking procedures assess the earliest stages of visual processing. Patients with schizophrenia reliably show deficits on visual masking, and these procedures have been used to explore vulnerability to schizophrenia, probe underlying neural circuits, and help explain functional outcome. The objective was to identify and compare regional brain activity associated with one form of visual masking (i.e., backward masking) in patients with schizophrenia and healthy controls. Nineteen patients with schizophrenia and 19 healthy control subjects were studied at the University of California, Los Angeles, and the Department of Veterans Affairs Greater Los Angeles Healthcare System. Subjects received functional magnetic resonance imaging (fMRI) scans; while in the scanner, they performed a backward masking task and were given 3 functional localizer activation scans to identify early visual processing regions of interest (ROIs). The main outcome measure was the magnitude of the fMRI signal during backward masking. Two ROIs (lateral occipital complex [LO] and the human motion-selective cortex [hMT+]) showed sensitivity to the effects of masking, meaning that signal in these areas increased as the target became more visible. Patients had lower activation than controls in LO across all levels of visibility but did not differ in other visual processing ROIs. Using whole-brain analyses, we also identified areas outside the ROIs that were sensitive to masking effects (including bilateral inferior parietal lobe and thalamus), but groups did not differ in signal magnitude in these areas. The study results support a key role for LO in visual masking, consistent with previous studies in healthy controls. The current results indicate that patients fail to activate LO to the same extent as controls during visual processing regardless of stimulus visibility, suggesting a neural basis for the visual masking deficit, and possibly other visual integration deficits, in schizophrenia.
Ma, Jingqun; Brennan, Kaelan J; D'Aloia, Mitch R; Pascuzzi, Pete E; Weake, Vikki M
2016-08-09
The Spt-Ada-Gcn5 Acetyltransferase (SAGA) complex is a transcriptional coactivator with histone acetylase and deubiquitinase activities that plays an important role in visual development and function. In Drosophila melanogaster, four SAGA subunits are required for the deubiquitination of monoubiquitinated histone H2B (ubH2B): Nonstop, Sgf11, E(y)2, and Ataxin 7. Mutations that disrupt SAGA deubiquitinase activity cause defects in neuronal connectivity in the developing Drosophila visual system. In addition, mutations in SAGA result in the human progressive visual disorder spinocerebellar ataxia type 7 (SCA7). Glial cells play a crucial role in both the neuronal connectivity defect in nonstop and sgf11 flies, and in the retinal degeneration observed in SCA7 patients. Thus, we sought to identify the gene targets of SAGA deubiquitinase activity in glia in the Drosophila larval central nervous system. To do this, we enriched glia from wild-type, nonstop, and sgf11 larval optic lobes using affinity-purification of KASH-GFP tagged nuclei, and then examined each transcriptome using RNA-seq. Our analysis showed that SAGA deubiquitinase activity is required for proper expression of 16% of actively transcribed genes in glia, especially genes involved in proteasome function, protein folding and axon guidance. We further show that the SAGA deubiquitinase-activated gene Multiplexin (Mp) is required in glia for proper photoreceptor axon targeting. Mutations in the human ortholog of Mp, COL18A1, have been identified in a family with a SCA7-like progressive visual disorder, suggesting that defects in the expression of this gene in SCA7 patients could play a role in the retinal degeneration that is unique to this ataxia. Copyright © 2016 Ma et al.
Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu
2009-01-01
The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, and this may have long-term consequences on language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.
Premotor cortex is sensitive to auditory-visual congruence for biological motion.
Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F
2012-03-01
The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
Image Fusion Algorithms Using Human Visual System in Transform Domain
NASA Astrophysics Data System (ADS)
Vadhi, Radhika; Swamy Kilari, Veera; Samayamantula, Srinivas Kumar
2017-08-01
The endeavor of digital image fusion is to combine the important visual parts from various sources to improve the visual quality of the image. The fused image has higher visual quality than any of the source images. In this paper, Human Visual System (HVS) weights are used in the transform domain to select appropriate information from various source images and then to attain a fused image. This process involves two main steps. First, the DWT is applied to the registered source images. Then, qualitative sub-bands are identified using HVS weights. Hence, qualitative sub-bands are selected from different sources to form a high-quality HVS-based fused image. The quality of the HVS-based fused image is evaluated with general fusion metrics. The results show its superiority over state-of-the-art multi-resolution transforms (MRT) such as the Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT), Contourlet Transform (CT), and Non-Subsampled Contourlet Transform (NSCT) using the maximum selection fusion rule.
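A minimal sketch of DWT-based fusion follows, using PyWavelets; note that it applies the plain maximum-selection rule to the detail sub-bands rather than the paper's HVS weighting, which is not reproduced here.

```python
# DWT fusion: average the approximation band, take the max-magnitude
# coefficient in each detail sub-band, then reconstruct.
import numpy as np
import pywt

def fuse_dwt(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]  # average the approximation band
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
print(fuse_dwt(rng.random((128, 128)), rng.random((128, 128))).shape)
```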
The role of temporal structure in human vision.
Blake, Randolph; Lee, Sang-Hun
2005-03-01
Gestalt psychologists identified several stimulus properties thought to underlie visual grouping and figure/ground segmentation, and among those properties was common fate: the tendency to group together individual objects that move together in the same direction at the same speed. Recent years have witnessed an upsurge of interest in visual grouping based on other time-dependent sources of visual information, including synchronized changes in luminance, in motion direction, and in figure/ground relations. These various sources of temporal grouping information can be subsumed under the rubric temporal structure. In this article, the authors review evidence bearing on the effectiveness of temporal structure in visual grouping. They start with an overview of evidence bearing on temporal acuity of human vision, covering studies dealing with temporal integration and temporal differentiation. They then summarize psychophysical studies dealing with figure/ground segregation based on temporal phase differences in deterministic and stochastic events. The authors conclude with a brief discussion of neurophysiological implications of these results.
Simulating Navigation with Virtual 3D Geovisualizations - A Focus on Memory-Related Factors
NASA Astrophysics Data System (ADS)
Lokka, I.; Çöltekin, A.
2016-06-01
The use of virtual environments (VE) for navigation-related studies, such as spatial cognition and path retrieval, has been widely adopted in cognitive psychology and related fields. What motivates the use of VEs for such studies is that, as opposed to the real world, we can control for confounding variables in simulated VEs. When simulating a geographic environment as a virtual world with the intention to train navigational memory in humans, an effective and efficient visual design is important to facilitate the amount of recall. However, it is not yet clear what amount of information should be included in such visual designs intended to facilitate remembering: there can be too little or too much of it. Besides the amount of information or level of detail, the types of visual features ('elements' in a visual scene) that should be included in the representations to create memorable scenes and paths must be defined. We analyzed the literature in cognitive psychology, geovisualization and information visualization, and identified the key factors for studying and evaluating geovisualization designs for their function to support and strengthen human navigational memory. The key factors we identified are: i) the individual abilities and age of the users, ii) the level of realism (LOR) included in the representations, and iii) the context in which the navigation is performed, that is, specific tasks within a case scenario. Here we present a concise literature review and our conceptual development for follow-up experiments.
Lessios, Nicolas
2017-01-01
Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike's information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. The modeling approach presented here will be useful in selecting the most likely alternative hypotheses of opsin-based spectral photoreceptor classes, using relative opsin expression and extracellular electroretinography. PMID:28740757
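The AICc comparison at the core of this framework is easy to state in code; the log-likelihoods and parameter counts below are placeholders, not values from the study.

```python
# Compare photoreceptor-class models by AICc (small-sample-corrected AIC).
def aicc(log_lik: float, k: int, n: int) -> float:
    aic = 2 * k - 2 * log_lik
    return aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample penalty

candidates = {1: (-120.4, 3), 2: (-98.1, 6), 3: (-96.9, 9)}  # classes: (logL, k)
n = 40  # number of spectral-sensitivity data points (placeholder)
scores = {m: aicc(ll, k, n) for m, (ll, k) in candidates.items()}
print({m: round(s, 1) for m, s in scores.items()},
      "-> best:", min(scores, key=scores.get), "classes")
```

With these placeholder numbers the two-class model wins: the three-class model fits slightly better but is penalized for its extra parameters, which is exactly the guard against over-fitting the abstract describes.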
Orthographic processing in pigeons (Columba livia)
Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael
2016-01-01
Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million y ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211
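The two orthographic statistics the pigeons tracked, bigram frequency and edit distance, can be computed in a few lines. A sketch over a toy corpus follows; the words and the helper names (bigrams, edit_distance) are illustrative, not the study's stimuli:

```python
# A minimal sketch of the two orthographic statistics named in the abstract:
# mean bigram frequency and Levenshtein edit distance (toy corpus only).
from collections import Counter

def bigrams(word):
    return [word[i:i + 2] for i in range(len(word) - 1)]

def mean_bigram_frequency(word, counts):
    bg = bigrams(word)
    return sum(counts[b] for b in bg) / len(bg)

def edit_distance(a, b):
    """Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

corpus = ["very", "every", "even", "ever", "never"]
counts = Counter(b for w in corpus for b in bigrams(w))
print(mean_bigram_frequency("ever", counts))   # high: 'ev', 've', 'er' are common
print(edit_distance("evry", "every"))          # nonword one insertion away -> 1
```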
Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto
2012-01-01
Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two-dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and the level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D condition. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures, such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regard to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery, such as target direction, and to assess the performance of the visual learning process itself.
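The pose normalization step described here (extract, rotate, scale) is easy to illustrate. A minimal sketch using scipy.ndimage follows; the function extract_chip and all parameter values are hypothetical stand-ins, not the actual IDC code:

```python
# A minimal sketch of chip extraction with rotation/scale normalization, in
# the spirit of the preprocessing described above (not the actual IDC code).
import numpy as np
from scipy.ndimage import rotate, zoom

def extract_chip(image, row, col, size, angle_deg, scale):
    """Cut a square chip centered at (row, col), then normalize its pose."""
    half = size // 2
    chip = image[row - half:row + half, col - half:col + half]
    chip = rotate(chip, -angle_deg, reshape=False, mode="nearest")  # undo rotation
    chip = zoom(chip, scale)                                        # undo scale
    return chip

image = np.random.rand(512, 512)   # stand-in for an aerial frame
chip = extract_chip(image, row=200, col=300, size=64, angle_deg=30.0, scale=1.5)
print(chip.shape)  # pose-normalized chip, ready for the cortex-model classifier
```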
Correspondences between What Infants See and Know about Causal and Self-Propelled Motion
ERIC Educational Resources Information Center
Cicchino, Jessica B.; Aslin, Richard N.; Rakison, David H.
2011-01-01
The associative learning account of how infants identify human motion rests on the assumption that this knowledge is derived from statistical regularities seen in the world. Yet, no catalog exists of what visual input infants receive of human motion, and of causal and self-propelled motion in particular. In this manuscript, we demonstrate that the…
A Neural Basis of Facial Action Recognition in Humans
Srinivasan, Ramprakash; Golomb, Julie D.
2016-01-01
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment. PMID:27098688
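The decoding logic, training a multivoxel classifier on some participants and testing on others, can be sketched compactly. The example below uses synthetic voxel patterns and a linear SVM; the simulation and its parameters are assumptions for illustration, not the paper's pipeline:

```python
# A minimal sketch of cross-participant multivoxel decoding of an action unit
# (synthetic data; the linear-SVM choice and all sizes are assumptions).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50

def simulate_subject(effect):
    """Voxel patterns where one action unit adds a weak consistent signal."""
    labels = rng.integers(0, 2, n_trials)          # AU present / absent
    data = rng.normal(size=(n_trials, n_voxels))
    data[labels == 1] += effect
    return data, labels

effect = rng.normal(scale=0.3, size=n_voxels)      # coding shared across people
train_X, train_y = simulate_subject(effect)
test_X, test_y = simulate_subject(effect)          # held-out "participant"

clf = LinearSVC(dual=False).fit(train_X, train_y)
print(f"cross-participant decoding accuracy: {clf.score(test_X, test_y):.2f}")
```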
Visual working memory is more tolerant than visual long-term memory.
Schurgin, Mark W; Flombaum, Jonathan I
2018-05-07
Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power: identifying changes, not sameness. There are good reasons to expect that VLTM is more tolerant: functionally, recognition over the long term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for thinking that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experimental trial, participants saw two objects; memory for one was tested immediately (VWM) and for the other later (VLTM). VWM performance was better than VLTM and remained robust despite the introduction of image and object variability. In contrast, VLTM performance suffered linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Motion cue effects on human pilot dynamics in manual control
NASA Technical Reports Server (NTRS)
Washizu, K.; Tanaka, K.; Endo, S.; Itoko, T.
1977-01-01
Two experiments were conducted to study the effects of motion cues on human pilots during tracking tasks. The moving-base simulator of the National Aerospace Laboratory was employed as the motion cue device, and the attitude director indicator or the projected visual field was employed as the visual cue device. The chosen controlled elements were second-order unstable systems. It was confirmed that, with the aid of motion cues, pilot workload was lessened and consequently the human controllability limits were enlarged. To clarify the mechanism of these effects, the describing functions of the human pilots were identified using spectral and time-domain analyses. The results of these analyses suggest that the motion-cue sensory system can effectively extract derivative (rate) information from the signal, which is consistent with existing physiological knowledge.
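A describing function of this kind is typically estimated as the ratio of the input-output cross-spectrum to the input auto-spectrum. A minimal sketch with a synthetic first-order "pilot" follows; the signal names, filter coefficients, and remnant level are illustrative assumptions:

```python
# A minimal sketch of identifying a human-operator describing function from
# tracking records by spectral analysis (synthetic stand-in operator).
import numpy as np
from scipy.signal import csd, welch, lfilter

fs = 100.0                                      # Hz, illustrative
rng = np.random.default_rng(1)
error = rng.normal(size=12000)                  # displayed tracking error
# stand-in operator: lagged gain on the error, plus remnant noise
stick = lfilter([0.5], [1.0, -0.9], error) + 0.1 * rng.normal(size=12000)

f, P_xy = csd(error, stick, fs=fs, nperseg=1024)   # cross-spectrum
_, P_xx = welch(error, fs=fs, nperseg=1024)        # input auto-spectrum
H = P_xy / P_xx                                    # describing-function estimate

i = np.argmin(np.abs(f - 1.0))                     # inspect the response at 1 Hz
print(f"|Y_p| = {np.abs(H[i]):.2f}, phase = {np.degrees(np.angle(H[i])):.1f} deg")
```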
Applications of Phase-Based Motion Processing
NASA Technical Reports Server (NTRS)
Branch, Nicholas A.; Stewart, Eric C.
2018-01-01
Image pyramids provide useful information for determining structural response at low cost using commercially available cameras. The current effort applies previous work on the complex steerable pyramid to analyze and identify imperceptible linear motions in video. Instead of implicitly computing motion spectra through phase analysis of the complex steerable pyramid and magnifying the associated motions, we present a visual technique and the necessary software to display the phase changes of high-frequency signals within video. The present technique quickly identifies the regions of largest motion within a video with a single phase visualization and without the artifacts of motion magnification, but requires the computationally intensive Fourier transform. While Riesz pyramids present an alternative to the computationally intensive complex steerable pyramid for motion magnification, the Riesz formulation contains significant noise, and motion magnification still presents large amounts of data that cannot be quickly assessed by the human eye. Thus, user-friendly software is presented for quickly identifying structural response through optical flow and phase visualization, in both Python and MATLAB.
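The core idea, band-passing each frame in the Fourier domain and displaying per-pixel phase change between frames, can be sketched briefly. The toy example below is not the authors' software; the band limits and test frames are arbitrary:

```python
# A minimal sketch of per-pixel phase change between video frames using a
# one-sided Fourier band-pass (toy frames; band limits are arbitrary).
import numpy as np

def bandpass_phase(frame, lo=0.1, hi=0.3):
    """Phase of a one-sided (analytic) spatial-frequency band, in radians."""
    F = np.fft.fft2(frame)
    fy = np.fft.fftfreq(frame.shape[0])[:, None]
    fx = np.fft.fftfreq(frame.shape[1])[None, :]
    r = np.hypot(fy, fx)
    keep = (r >= lo) & (r <= hi) & (fx > 0)   # one-sided: analytic in x
    F[~keep] = 0
    return np.angle(np.fft.ifft2(F))

rng = np.random.default_rng(2)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, 1, axis=1)           # structure moves 1 px rightward
dphase = np.angle(np.exp(1j * (bandpass_phase(frame1) - bandpass_phase(frame0))))
print("mean |phase change|:", np.abs(dphase).mean())  # map this image to color
```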
Foveal analysis and peripheral selection during active visual sampling
Ludwig, Casimir J. H.; Davies, J. Rhys; Eckstein, Miguel P.
2014-01-01
Human vision is an active process in which information is sampled during brief periods of stable fixation in between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process of the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by looking at correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information and the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently. PMID:24385588
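The noise classification technique correlates the noise injected at each moment with the observer's eventual response, revealing when information was taken up. A minimal simulation follows; the simulated observer, frame counts, and integration window are illustrative assumptions:

```python
# A minimal sketch of the noise-classification idea: correlate per-frame
# feature noise with the binary response to see when information was used.
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_frames = 5000, 12
noise = rng.normal(size=(n_trials, n_frames))      # feature noise per frame

# simulated observer: integrates only frames 0-7 ("stops before the saccade")
weights = np.r_[np.ones(8), np.zeros(4)]
decision = (noise @ weights + rng.normal(size=n_trials)) > 0

# point-biserial correlation between each frame's noise and the decision
kernel = np.array([np.corrcoef(noise[:, f], decision)[0, 1]
                   for f in range(n_frames)])
print(np.round(kernel, 2))  # drops to ~0 where information uptake has stopped
```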
Coding of navigational affordances in the human visual system
Epstein, Russell A.
2017-01-01
A central component of spatial navigation is determining where one can and cannot go in the immediate environment. We used fMRI to test the hypothesis that the human visual system solves this problem by automatically identifying the navigational affordances of the local scene. Multivoxel pattern analyses showed that a scene-selective region of dorsal occipitoparietal cortex, known as the occipital place area, represents pathways for movement in scenes in a manner that is tolerant to variability in other visual features. These effects were found in two experiments: One using tightly controlled artificial environments as stimuli, the other using a diverse set of complex, natural scenes. A reconstruction analysis demonstrated that the population codes of the occipital place area could be used to predict the affordances of novel scenes. Taken together, these results reveal a previously unknown mechanism for perceiving the affordance structure of navigable space. PMID:28416669
Pulvinar neurons reveal neurobiological evidence of past selection for rapid detection of snakes
Van Le, Quan; Isbell, Lynne A.; Matsumoto, Jumpei; Nguyen, Minh; Hori, Etsuro; Maior, Rafael S.; Tomaz, Carlos; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao
2013-01-01
Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but not, surprisingly, from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest, fastest responses, and the responses were not reduced by low spatial filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates’ heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage. PMID:24167268
The surprisingly high human efficiency at learning to recognize faces
Peterson, Matthew F.; Abbey, Craig K.; Eckstein, Miguel P.
2009-01-01
We investigated the ability of humans to optimize face recognition performance through rapid learning of individual relevant features. We created artificial faces with discriminating visual information heavily concentrated in single features (nose, eyes, chin or mouth). In each of 2500 learning blocks a feature was randomly selected and retained over the course of four trials, during which observers identified randomly sampled, noisy face images. Observers learned the discriminating feature through indirect feedback, leading to large performance gains. Performance was compared to a learning Bayesian ideal observer, resulting in unexpectedly high learning compared to previous studies with simpler stimuli. We explore various explanations and conclude that the higher learning measured with faces cannot be driven by adaptive eye movement strategies but can be mostly accounted for by suboptimalities in human face discrimination when observers are uncertain about the discriminating feature. We show that an initial bias of humans to use specific features to perform the task even though they are informed that each of four features is equally likely to be the discriminatory feature would lead to seemingly supra-optimal learning. We also examine the possibility of inefficient human integration of visual information across the spatially distributed facial features. Together, the results suggest that humans can show large performance improvement effects in discriminating faces as they learn to identify the feature containing the discriminatory information. PMID:19000918
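The learning ideal observer can be sketched as Bayesian updating of a posterior over which of the four features is diagnostic. The Gaussian toy model below is an illustration under assumed signal and noise levels, not the paper's exact observer:

```python
# A minimal sketch of the learning ideal observer: a posterior over which of
# four face features is diagnostic, updated across the four trials of a block.
import numpy as np
from scipy.stats import norm

features = ["eyes", "nose", "mouth", "chin"]
prior = np.full(4, 0.25)                       # each feature equally likely
signal, sigma = 1.0, 1.0                       # assumed evidence and noise

rng = np.random.default_rng(4)
true_feature = 0                               # "eyes" is diagnostic this block
for trial in range(4):
    obs = rng.normal(0, sigma, size=4)
    obs[true_feature] += signal                # only one feature is informative
    # likelihood of the observation under "feature k is the diagnostic one"
    like = np.array([norm.pdf(obs[k], loc=signal, scale=sigma) *
                     np.prod(norm.pdf(np.delete(obs, k), loc=0, scale=sigma))
                     for k in range(4)])
    prior = prior * like
    prior /= prior.sum()
    print(f"trial {trial + 1}: P(feature) =", np.round(prior, 2))
```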
Big data in medical informatics: improving education through visual analytics.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2014-01-01
A continuous effort to improve healthcare education today is driven by the need to create competent health professionals able to meet healthcare demands. Little research has reported how the manipulation of educational data can help improve healthcare education. The emerging research field of visual analytics has the advantage of combining big data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognise visual patterns. The aim of this study was therefore to explore novel ways of representing curriculum and educational data using visual analytics. Three approaches to the visualization and representation of educational data were presented. Five competencies addressed in courses at the undergraduate medical program level were identified as corresponding inaccurately to higher education board competencies. Different visual representations appear to have the potential to affect the ability to perceive entities and connections in the curriculum data.
Is orbital volume associated with eyeball and visual cortex volume in humans?
Pearce, Eiluned; Bridge, Holly
2013-01-01
Background In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766
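The "independently of overall brain volume" claim corresponds to a regression that includes brain volume as a covariate. A minimal sketch with synthetic volumes follows; all coefficients and noise levels are invented for illustration:

```python
# A minimal sketch of testing a volume-volume relationship while controlling
# for overall brain volume, as in the scaling analysis above (synthetic data).
import numpy as np

rng = np.random.default_rng(5)
n = 99
brain = rng.normal(1200, 100, n)                        # cm^3, illustrative
eye = 0.004 * brain + rng.normal(0, 0.3, n) + 1.5       # eyeball volume
v1 = 5.0 * eye + 0.002 * brain + rng.normal(0, 1.0, n)  # visual cortex volume

X = np.column_stack([np.ones(n), eye, brain])           # intercept + covariate
beta, *_ = np.linalg.lstsq(X, v1, rcond=None)
print(f"slope of V1 volume on eye volume, brain volume held fixed: {beta[1]:.2f}")
```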
Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu
2009-01-01
The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning, including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, which may have long-term consequences for language maturation and theory of mind abilities. Learning outcomes: Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD. PMID:19419735
Redundancy reduction explains the expansion of visual direction space around the cardinal axes.
Perrone, John A; Liston, Dorion B
2015-06-01
Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions: a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears to be smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings of the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed on to the next motion-integration stage (dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect. Copyright © 2015 Elsevier Ltd. All rights reserved.
Research on metallic material defect detection based on bionic sensing of human visual properties
NASA Astrophysics Data System (ADS)
Zhang, Pei Jiang; Cheng, Tao
2018-05-01
Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model that simulates the imaging features and attention mechanism of human vision to detect defects in metallic materials in the mechanical field. First, biologically inspired visually salient low-level features are computed, and expert defect markings are used as the intermediate features of simulated visual perception. An SVM is then trained on the high-level features of metal-material visual defects. Weighting the contribution of each level yields a defect detection model for metallic materials that simulates human visual characteristics.
Functional correlates of the anterolateral processing hierarchy in human auditory cortex.
Chevillet, Mark; Riesenhuber, Maximilian; Rauschecker, Josef P
2011-06-22
Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min/subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid with establishing potential homologies between them.
User-driven sampling strategies in image exploitation
NASA Astrophysics Data System (ADS)
Harvey, Neal; Porter, Reid
2013-12-01
Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.
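The distinction drawn here, who picks the next sample to label, fits in a short simulation. Below, a computer-driven uncertainty sampler is contrasted with a "user-driven" heuristic that flags visibly misclassified items; the 1-D task and both selection rules are illustrative assumptions, not the paper's experiments:

```python
# A minimal sketch contrasting computer-driven (uncertainty) sampling with a
# user-driven strategy in an iterative labeling loop (synthetic 1-D task).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(500, 1))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

def run(select, n_rounds=20):
    order = np.argsort(X[:, 0])
    labeled = list(order[:5]) + list(order[-5:])   # seed with both classes
    for _ in range(n_rounds):
        clf = LogisticRegression().fit(X[labeled], y[labeled])
        labeled.append(select(clf, labeled))
    return clf.score(X, y)

def computer_driven(clf, labeled):                 # classic active learning
    p = clf.predict_proba(X)[:, 1]
    p[labeled] = 1.0                               # exclude already-labeled items
    return int(np.argmin(np.abs(p - 0.5)))

def user_driven(clf, labeled):                     # "analyst" flags visible errors
    wrong = [i for i in np.flatnonzero(clf.predict(X) != y) if i not in labeled]
    return int(wrong[0]) if wrong else int(rng.integers(len(X)))

print("computer-driven accuracy:", round(run(computer_driven), 3))
print("user-driven accuracy:   ", round(run(user_driven), 3))
```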
Kretschmer, Sarah; Pieper, Mario; Hüttmann, Gereon; Bölke, Torsten; Wollenberg, Barbara; Marsh, Leigh M; Garn, Holger; König, Peter
2016-08-01
The basic understanding of inflammatory airway diseases greatly benefits from imaging the cellular dynamics of immune cells. Current imaging approaches focus on labeling specific cells to follow their dynamics but fail to visualize the surrounding tissue. To overcome this problem, we evaluated autofluorescence multiphoton microscopy for following the motion and interaction of cells in the airways in the context of tissue morphology. Freshly isolated murine tracheae from healthy mice and mice with experimental allergic airway inflammation were examined by autofluorescence multiphoton microscopy. In addition, fluorescently labeled ovalbumin and fluorophore-labeled antibodies were applied to visualize antigen uptake and to identify specific cell populations, respectively. The trachea in living mice was imaged to verify that the ex vivo preparation reflects the in vivo situation. Autofluorescence multiphoton microscopy was also tested to examine human tissue from patients in short-term tissue culture. Using autofluorescence, the epithelium, underlying cells, and fibers of the connective tissue, as well as blood vessels, were identified in isolated tracheae. Similar structures were visualized in living mice and in the human airway tissue. In explanted murine airways, mobile cells were localized within the tissue and we could follow their migration, interactions between individual cells, and their phagocytic activity. During allergic airway inflammation, increased numbers of eosinophil and neutrophil granulocytes were detected that moved within the connective tissue and immediately below the epithelium without damaging the epithelial cells or connective tissues. Contacts between granulocytes were transient, lasting 3 min on average. Unexpectedly, prolonged interactions between granulocytes and antigen-uptaking cells were observed, lasting for an average of 13 min. Our results indicate that autofluorescence-based imaging can detect previously unknown immune cell interactions in the airways. The method also holds the potential to be used during diagnostic procedures in humans if integrated into a bronchoscope.
Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.
Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli
2018-06-08
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exploration of complex visual feature spaces for object perception
Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.
2014-01-01
The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408
Face Processing: Models For Recognition
NASA Astrophysics Data System (ADS)
Turk, Matthew A.; Pentland, Alexander P.
1990-03-01
The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
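One concrete face-representation model associated with these authors is the principal-component ("eigenface") code. The sketch below projects images into a low-dimensional face space and recognizes a noisy probe by nearest neighbor; random vectors stand in for real face images:

```python
# A minimal sketch of a principal-component ("eigenface") face representation;
# random vectors stand in for real face images.
import numpy as np

rng = np.random.default_rng(7)
faces = rng.random((40, 32 * 32))               # 40 training images, flattened

mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
eigenfaces = Vt[:10]                            # top 10 principal components

def project(img):
    """Coordinates of an image in the low-dimensional face space."""
    return eigenfaces @ (img - mean)

train_codes = (faces - mean) @ eigenfaces.T     # shape (40, 10)
probe = faces[3] + 0.05 * rng.normal(size=32 * 32)   # noisy re-view of face 3
dists = np.linalg.norm(train_codes - project(probe), axis=1)
print("recognized as training face:", int(np.argmin(dists)))   # -> 3
```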
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896
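The benefit of a hierarchical layout such as IDX is that coarse views need only a small prefix of the data, with finer levels streamed in on demand. The toy 1-D example below illustrates that coarse-to-fine access pattern only; it is not the actual IDX indexing scheme:

```python
# A minimal sketch of coarse-to-fine streaming: each resolution level adds
# samples between those already loaded (toy 1-D signal, not IDX itself).
import numpy as np

signal = np.sin(np.linspace(0, 8 * np.pi, 1024))

def level_indices(n, level):
    """Indices newly introduced at a given resolution level."""
    step = n >> level                     # 1024, 512, 256, ... samples apart
    if level == 0:
        return np.arange(0, n, step)
    return np.arange(step, n, 2 * step)   # midpoints between coarser samples

loaded = {}
for level in range(6):                    # stream in progressively finer detail
    idx = level_indices(signal.size, level)
    loaded.update(zip(idx.tolist(), signal[idx]))
    print(f"level {level}: {len(loaded)} samples resident")
```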
Up Periscope! Designing a New Perceptual Metric for Imaging System Performance
NASA Technical Reports Server (NTRS)
Watson, Andrew B.
2016-01-01
Modern electronic imaging systems include optics, sensors, sampling, noise, processing, compression, transmission and display elements, and are viewed by the human eye. Many of these elements cannot be assessed by traditional imaging system metrics such as the MTF. More complex metrics such as NVTherm do address these elements, but do so largely through parametric adjustment of an MTF-like metric. The parameters are adjusted through subjective testing of human observers identifying specific targets in a set of standard images. We have designed a new metric that is based on a model of human visual pattern classification. In contrast to previous metrics, ours simulates the human observer identifying the standard targets. One application of this metric is to quantify performance of modern electronic periscope systems on submarines.
[Research of joint-robotics-based design of biomechanics testing device on human spine].
Deng, Guoyong; Tian, Lianfang; Mao, Zongyuan
2009-12-01
This paper introduces the hardware and software of a robot-based biomechanical testing device. The low-level control commands, posture and torque data transmission, and the control algorithms are integrated into a unified visual control platform in Visual C++, allowing easy control and management. By using a hybrid force-displacement control method to load the human spine, we can test the organizational structure and the force state of the FSU (functional spinal unit) well, which overcomes the shortcomings caused by separating force and displacement measurements and thus greatly improves measurement accuracy. It also makes it easy to identify spinal degeneration and the load-bearing impact on the organizational structure of the FSU after various types of surgery.
NASA Technical Reports Server (NTRS)
Riccio, Gary E.; McDonald, P. Vernon; Bloomberg, Jacob
1999-01-01
Our theoretical and empirical research on the whole-body coordination during locomotion led to a Phase 1 SBIR grant from NASA JSC. The purpose of the SBIR grant was to design an innovative system for evaluating eye-head-trunk coordination during whole-body perturbations that are characteristic of locomotion. The approach we used to satisfy the Phase 1 objectives was based on a structured methodology for the development of human-systems technology. Accordingly the project was broken down into a number of tasks and subtasks. In sequence, the major tasks were: (1) identify needs for functional assessment of visual acuity under conditions involving whole-body perturbation within the NASA Space Medical Monitoring and Countermeasures (SMMaC) program and in other related markets; (2) analyze the needs into the causes and symptoms of impaired visual acuity under conditions involving whole-body perturbation; (3) translate the analyzed needs into technology requirements for the Functional Visual Assessment Test (FVAT); (4) identify candidate technology solutions and implementations of FVAT; and (5) prioritize and select technology solutions. The work conducted in these tasks is described in this final volume of the series on Multimodal Perception and Multicriterion Control of Nested Systems. While prior volumes (1 and 2) in the series focus on theoretical foundations and novel data-analytic techniques, this volume addresses technology that is necessary for minimally intrusive data collection and near-real-time data analysis and display.
Determining the feasibility of chemical imaging of cotton trash
USDA-ARS?s Scientific Manuscript database
There is some interest in the textile community about the identity of cotton trash that has become comingled with cotton lint. Currently, trash is identified visually by human “classers” and instrumentally by the Advanced Fiber Information System (AFIS) and the High Volume Instrument (HVI). Although...
Zebra Stripes through the Eyes of Their Predators, Zebras, and Humans.
Melin, Amanda D; Kline, Donald W; Hiramatsu, Chihiro; Caro, Tim
2016-01-01
The century-old idea that stripes make zebras cryptic to large carnivores has never been examined systematically. We evaluated this hypothesis by passing digital images of zebras through species-specific spatial and colour filters to simulate their appearance for the visual systems of zebras' primary predators and zebras themselves. We also measured stripe widths and luminance contrast to estimate the maximum distances from which lions, spotted hyaenas, and zebras can resolve stripes. We found that beyond ca. 50 m (daylight) and 30 m (twilight) zebra stripes are difficult for the estimated visual systems of large carnivores to resolve, but not humans. On moonless nights, stripes are difficult for all species to resolve beyond ca. 9 m. In open treeless habitats where zebras spend most time, zebras are as clearly identified by the lion visual system as are similar-sized ungulates, suggesting that stripes cannot confer crypsis by disrupting the zebra's outline. Stripes confer a minor advantage over solid pelage in masking body shape in woodlands, but the effect is stronger for humans than for predators. Zebras appear to be less able than humans to resolve stripes although they are better than their chief predators. In conclusion, compared to the uniform pelage of other sympatric herbivores it appears highly unlikely that stripes are a form of anti-predator camouflage.
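The maximum resolving distances reported here follow from simple geometry: a stripe pair is resolvable while it subtends at least one cycle at the viewer's acuity limit. A worked sketch follows; the stripe width and the acuity values are illustrative assumptions, not the paper's measured parameters:

```python
# A minimal worked example of the maximum-distance estimate behind the
# abstract (stripe width and acuity values are illustrative assumptions).
import numpy as np

def max_resolving_distance(stripe_width_m, acuity_cpd):
    """Distance at which one stripe pair subtends 1/acuity degrees."""
    cycle = 2 * stripe_width_m                  # dark + light stripe = 1 cycle
    theta_min = np.deg2rad(1.0 / acuity_cpd)    # smallest resolvable cycle
    return cycle / theta_min

stripe = 0.05                                   # ~5 cm flank stripes, assumed
for species, acuity in [("human (photopic, assumed)", 60.0),
                        ("lion (assumed)", 6.0),
                        ("hyaena (assumed)", 4.5)]:
    print(f"{species}: ~{max_resolving_distance(stripe, acuity):.0f} m")
```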
Explicit Encoding of Multimodal Percepts by Single Neurons in the Human Brain
Quiroga, Rodrigo Quian; Kraskov, Alexander; Koch, Christof; Fried, Itzhak
2010-01-01
Summary Different pictures of Marilyn Monroe can evoke the same percept, even if greatly modified as in Andy Warhol’s famous portraits. But how does the brain recognize highly variable pictures as the same percept? Various studies have provided insights into how visual information is processed along the “ventral pathway,” via both single-cell recordings in monkeys [1, 2] and functional imaging in humans [3, 4]. Interestingly, in humans, the same “concept” of Marilyn Monroe can be evoked with other stimulus modalities, for instance by hearing or reading her name. Brain imaging studies have identified cortical areas selective to voices [5, 6] and visual word forms [7, 8]. However, how visual, text, and sound information can elicit a unique percept is still largely unknown. By using presentations of pictures and of spoken and written names, we show that (1) single neurons in the human medial temporal lobe (MTL) respond selectively to representations of the same individual across different sensory modalities; (2) the degree of multimodal invariance increases along the hierarchical structure within the MTL; and (3) such neuronal representations can be generated within less than a day or two. These results demonstrate that single neurons can encode percepts in an explicit, selective, and invariant manner, even if evoked by different sensory modalities. PMID:19631538
Multi-sensory landscape assessment: the contribution of acoustic perception to landscape evaluation.
Gan, Yonghong; Luo, Tao; Breitung, Werner; Kang, Jian; Zhang, Tianhai
2014-12-01
In this paper, the contributions of visual and acoustic preference to multi-sensory landscape evaluation were quantitatively compared. The real landscapes were treated as a dual-sensory ambiance and separated into visual landscape and soundscape. Both were evaluated by 63 respondents under laboratory conditions. The analysis of the relationship between respondents' visual and acoustic preferences, as well as their respective contributions to landscape preference, showed that (1) some common attributes are universally identified in assessing visual, aural and audio-visual preference, such as naturalness or degree of human disturbance; (2) with acoustic and visual preferences as variables, a multivariate linear regression model can satisfactorily predict landscape preference (R² = 0.740), while the coefficients of determination for univariate linear regression models were 0.345 and 0.720 with visual and acoustic preference as predicting factors, respectively; (3) acoustic preference played a much more important role in landscape evaluation than visual preference in this study (the former about 4.5 times the latter), which strongly suggests a rethinking of the role of soundscape in environmental perception research and landscape planning practice.
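The comparison of the univariate and multivariate models is easy to reproduce in outline. The sketch below fits both regressions on synthetic ratings; the coefficients (acoustic weight roughly four times the visual one, echoing the reported ratio) and the noise level are invented:

```python
# A minimal sketch of the two regressions compared above (synthetic ratings).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 63                                          # one rating per respondent
visual = rng.normal(size=n)
acoustic = rng.normal(size=n)
landscape = 0.3 * visual + 1.3 * acoustic + 0.4 * rng.normal(size=n)

X_both = np.column_stack([visual, acoustic])
r2_visual = (LinearRegression().fit(visual[:, None], landscape)
             .score(visual[:, None], landscape))
r2_both = LinearRegression().fit(X_both, landscape).score(X_both, landscape)
print(f"visual only:       R^2 = {r2_visual:.3f}")
print(f"visual + acoustic: R^2 = {r2_both:.3f}")
```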
Colhoun, Andrew F; Speich, John E; Cooley, Lauren F; Bell, Eugene D; Barbee, R Wayne; Guruli, Georgi; Ratz, Paul H; Klausner, Adam P
2017-08-01
Low amplitude rhythmic contractions (LARC) occur in detrusor smooth muscle and may play a role in storage disorders such as overactive bladder and detrusor overactivity. The purpose of this study was to determine whether LARC frequencies identified in vitro from strips of human urinary bladder tissue correlate with in vivo LARC frequencies, visualized as phasic intravesical pressure (p_ves) waves during urodynamics (UD). After IRB approval, fresh strips of human urinary bladder were obtained from patients. LARC was recorded with tissue strips at low tension (<2 g) and analyzed by fast Fourier transform (FFT) to identify LARC signal frequencies. Blinded UD tracings were retrospectively reviewed for signs of LARC on the p_ves tracing during filling and were analyzed via FFT. Distinct LARC frequencies were identified in 100% of tissue strips (n = 9) obtained, with a mean frequency of 1.97 ± 0.47 cycles/min (33 ± 8 mHz). Out of 100 consecutive UD studies reviewed, 35 visually displayed phasic p_ves waves. In 12/35 (34%), real p_ves signals were present that were independent of abdominal activity. The average UD LARC frequency was 2.34 ± 0.36 cycles/min (39 ± 6 mHz), which was similar to the tissue LARC frequencies (p = 0.50). A majority (83%) of the UD cohort with LARC signals also demonstrated detrusor overactivity. During UD, a subset of patients displayed phasic p_ves waves with a distinct rhythmic frequency similar to the in vitro LARC frequency quantified in human urinary bladder tissue strips. Further refinements of this technique may help identify subsets of individuals with LARC-mediated storage disorders.
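Extracting a dominant LARC frequency from a pressure trace with an FFT, as described, takes only a few lines. The sketch below uses a synthetic p_ves signal with a 2 cycles/min component; the sampling rate and noise level are illustrative assumptions:

```python
# A minimal sketch of FFT-based extraction of a dominant LARC frequency from
# a synthetic p_ves trace (sampling rate and noise level are assumptions).
import numpy as np

fs = 10.0                                           # Hz, illustrative
t = np.arange(0, 600, 1 / fs)                       # 10-minute filling phase
rng = np.random.default_rng(9)
f_larc = 2.0 / 60                                   # 2 cycles/min = 33 mHz
p_ves = 1.5 * np.sin(2 * np.pi * f_larc * t) + rng.normal(0, 0.5, t.size)

spec = np.abs(np.fft.rfft(p_ves - p_ves.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs > 1 / 60) & (freqs < 10 / 60)         # search 1-10 cycles/min
peak = freqs[band][np.argmax(spec[band])]
print(f"dominant LARC frequency: {peak * 60:.2f} cycles/min ({peak * 1e3:.0f} mHz)")
```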
Mundinano, Inaki-Carril; Chen, Juan; de Souza, Mitchell; Sarossy, Marc G; Joanisse, Marc F; Goodale, Melvyn A; Bourne, James A
2017-11-13
Injury to the primary visual cortex (V1, striate cortex) and the geniculostriate pathway in adults results in cortical blindness, abolishing conscious visual perception. Early studies by Larry Weiskrantz and colleagues demonstrated that some patients with an occipital-lobe injury exhibited a degree of unconscious vision and visually-guided behaviour within the blind field. A more recent focus has been the observed phenomenon whereby early-life injury to V1 often results in the preservation of visual perception in both monkeys and humans. These findings initiated a concerted effort on multiple fronts, including nonhuman primate studies, to uncover the neural substrate/s of the spared conscious vision. In both adult and early-life cases of V1 injury, evidence suggests the involvement of the middle temporal area (MT) of the extrastriate visual cortex, which is an integral component of the dorsal stream and is also associated with visually-guided behaviors. Because of the limited number of early-life V1 injury cases in humans, the outstanding question in the field is which secondary visual pathways are responsible for this extraordinary capacity. Here we report for the first time the case of a child (B.I.) who suffered a bilateral occipital-lobe injury in the first two weeks postnatally due to medium-chain acyl-CoA dehydrogenase deficiency. At 6 years of age, B.I. underwent a battery of neurophysiological tests, followed by structural and diffusion MRI and an ophthalmic examination at 7 years. Despite the extensive bilateral occipital cortical damage, B.I. has extensive conscious visual abilities, is not blind, and can use vision to navigate his environment. Furthermore, unlike blindsight patients, he can readily and consciously identify happy and neutral faces and colors, tasks associated with ventral stream processing. These findings suggest significant re-routing of visual information. To identify the putative visual pathway/s responsible for this ability, MRI tractography of the secondary visual pathways connecting MT with the lateral geniculate nucleus (LGN) and the inferior pulvinar (PI) was performed. Results revealed an increased PI-MT pathway in the left hemisphere, suggesting that this pulvinar relay could be the neural pathway affording the preserved visual capacity following an early-life lesion of V1. These findings corroborate anatomical evidence from monkeys showing an enhanced PI-MT pathway following an early-life lesion of V1, compared to adults. Copyright © 2017 Elsevier Ltd. All rights reserved.
GoIFISH: a system for the quantification of single cell heterogeneity from IFISH images.
Trinh, Anne; Rye, Inga H; Almendro, Vanessa; Helland, Aslaug; Russnes, Hege G; Markowetz, Florian
2014-08-26
Molecular analysis has revealed extensive intra-tumor heterogeneity in human cancer samples, but cannot identify cell-to-cell variations within the tissue microenvironment. In contrast, in situ analysis can identify genetic aberrations in phenotypically defined cell subpopulations while preserving tissue-context specificity. GoIFISH is a widely applicable, user-friendly system tailored for the objective and semi-automated visualization, detection and quantification of genomic alterations and protein expression obtained from fluorescence in situ analysis. In a sample set of HER2-positive breast cancers, GoIFISH is highly robust in visual analysis and its accuracy compares favorably to other leading image analysis methods. GoIFISH is freely available at www.sourceforge.net/projects/goifish/.
Karmonik, Christof; Fung, Steve H; Dulay, M; Verma, A; Grossman, Robert G
2013-01-01
Graph-theoretical analysis algorithms have been used to identify subnetworks in the human brain during the Default Mode State. Here, these methods are extended to determine the interaction of the sensory and motor subnetworks during the performance of an approach-avoidance paradigm, using the correlation strength between signal intensity time courses as a measure of synchrony. From functional magnetic resonance imaging (fMRI) data of 9 healthy volunteers, two signal time courses, one from the primary visual cortex (sensory input) and one from the motor cortex (motor output), were identified and a correlation difference map was calculated. Graph networks were created from this map and visualized with spring-embedded 2D layouts and with 3D layouts in the original anatomical space. Functional clusters in these networks were identified with the MCODE clustering algorithm, and interactions between the sensory and motor subnetworks were quantified through the interaction strengths of these clusters. The percentage of interactions involving the visual cortex ranged from 18% to 85%, and the percentage involving the motor cortex from 9% to 40%. Other regions with high interactions were the frontal cortex (19 ± 18%), insula (17 ± 22%), cuneus (16 ± 15%), supplementary motor area (SMA, 11 ± 18%) and subcortical regions (11 ± 10%). Interactions between the motor cortex, SMA and visual cortex accounted for 12% of all interactions, those between the visual cortex and cuneus for 8%, and those between the motor cortex, SMA and cuneus for 6%. These quantitative findings are supported by the visual impressions from the 2D and 3D network layouts.
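The pipeline described in this abstract (seed correlation, graph construction, clustering, cluster-interaction strengths) can be illustrated with a minimal sketch. The synthetic ROI time courses, the synchrony threshold, and the use of greedy modularity communities in place of MCODE (which has no standard networkx implementation) are all assumptions, not the authors' code.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_rois, n_vols = 20, 200
base = rng.standard_normal(n_vols)                       # shared signal
ts = 0.6 * base + rng.standard_normal((n_rois, n_vols))  # synthetic ROI time courses
visual, motor = ts[0], ts[1]                             # seed time courses

# Seed correlations of every ROI with each seed, and their difference map
r_vis = np.array([np.corrcoef(visual, t)[0, 1] for t in ts])
r_mot = np.array([np.corrcoef(motor, t)[0, 1] for t in ts])
print("correlation difference map:", np.round(r_vis - r_mot, 2))

# Build a graph: connect ROI pairs whose time courses are synchronous
corr = np.corrcoef(ts)
G = nx.Graph()
for i in range(n_rois):
    for j in range(i + 1, n_rois):
        if abs(corr[i, j]) > 0.2:                        # synchrony threshold (assumed)
            G.add_edge(i, j, weight=abs(corr[i, j]))

# Cluster the graph (stand-in for MCODE) and quantify cluster interactions
for k, c in enumerate(greedy_modularity_communities(G)):
    strength = sum(d["weight"] for _, _, d in G.edges(c, data=True))
    print(f"cluster {k}: {len(c)} ROIs, interaction strength {strength:.2f}")
```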
Haptic perception and body representation in lateral and medial occipito-temporal cortices.
Costantini, Marcello; Urgesi, Cosimo; Galati, Gaspare; Romani, Gian Luca; Aglioti, Salvatore M
2011-04-01
Although vision is the primary sensory modality that humans and other primates use to identify objects in the environment, we can recognize crucial object features (e.g., shape, size) using the somatic modality. Previous studies have shown that the occipito-temporal areas dedicated to the visual processing of object forms, faces and bodies also show category-selective responses when the preferred stimuli are haptically explored out of view. Visual processing of human bodies engages specific areas in lateral (extrastriate body area, EBA) and medial (fusiform body area, FBA) occipito-temporal cortex. This study aimed at exploring the relative involvement of EBA and FBA in the haptic exploration of body parts. During fMRI scanning, participants were asked to haptically explore either real-size fake body parts or objects. We found a selective activation of right and left EBA, but not of right FBA, while participants haptically explored body parts as compared to real objects. This suggests that EBA may integrate visual body representations with somatosensory information regarding body parts and form a multimodal representation of the body. Furthermore, both left and right EBA showed a comparable level of body selectivity during haptic perception and visual imagery. However, right but not left EBA was more activated during haptic exploration than visual imagery of body parts, ruling out that the response to haptic body exploration was entirely due to the use of visual imagery. Overall, the results point to the existence of different multimodal body representations in the occipito-temporal cortex which are activated during perception and imagery of human body parts. Copyright © 2011 Elsevier Ltd. All rights reserved.
O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C
2016-11-09
Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.
Contemporary Issues in Cognitive Psychology: The Loyola Symposium.
ERIC Educational Resources Information Center
Solso, Robert L., Ed.
Contributions in the first section of this volume are: "Learning to Identify Toy Block Structures" by Patrick Winston; "Beyond the Yellow-Volkswagen Detector and the Grandmother Cell: A General Strategy for the Exploration of Operations in Human Pattern Recognition" by Naomi Weisstein; "Visual Recognition in a Theory of Information Processing" by…
Jung, Kwang Bo; Lee, Hana; Son, Ye Seul; Lee, Ji Hye; Cho, Hyun-Soo; Lee, Mi-Ok; Oh, Jung-Hwa; Lee, Jaemin; Kim, Seokho; Jung, Cho-Rok; Kim, Janghwan; Son, Mi-Young
2018-01-01
Human intestinal organoids (hIOs) derived from human pluripotent stem cells (hPSCs) have immense potential as a source of intestinal tissue. Therefore, an efficient system is needed for visualizing the stage of intestinal differentiation and further identifying hIOs derived from hPSCs. Here, 2 fluorescent biosensors were developed based on human induced pluripotent stem cell (hiPSC) lines that stably expressed fluorescent reporters driven by intestine-specific gene promoters: a Krüppel-like factor 5 monomeric Cherry reporter (KLF5-mCherry) and an intestine-specific homeobox enhanced green fluorescent protein reporter (ISX-eGFP). hIOs were then efficiently induced from these transgenic hiPSC lines, and the mCherry- or eGFP-expressing cells that appeared during differentiation could be identified in intact living cells in real time. Reporter gene expression had no adverse effects on differentiation into hIOs or on proliferation. Using our reporter system to screen for hIO differentiation factors, we identified DMH1 as an efficient substitute for Noggin. Transplanted hIOs under the kidney capsule were tracked with fluorescence imaging (FLI) and confirmed histologically. After orthotopic transplantation, the localization of the hIOs in the small intestine could be accurately visualized using FLI. Our study establishes a selective system for monitoring the in vitro differentiation and for tracking the in vivo localization of hIOs, and contributes to further improvement of cell-based therapies and preclinical screenings in the intestinal field.-Jung, K. B., Lee, H., Son, Y. S., Lee, J. H., Cho, H.-S., Lee, M.-O., Oh, J.-H., Lee, J., Kim, S., Jung, C.-R., Kim, J., Son, M.-Y. In vitro and in vivo imaging and tracking of intestinal organoids from human induced pluripotent stem cells. © FASEB.
Spatiotemporal data visualisation for homecare monitoring of elderly people.
Juarez, Jose M; Ochotorena, Jose M; Campos, Manuel; Combi, Carlo
2015-10-01
Elderly people who live alone can be assisted by home monitoring systems that identify risk scenarios such as falls, fatigue symptoms or burglary. Given that these systems have to manage spatiotemporal data, human intervention is required to validate automatic alarms due to the high number of false positives and the need for context interpretation. The goal of this work was to provide tools that support human action, identifying such potential risk scenarios through spatiotemporal data visualisation. We propose the MTA (multiple temporal axes) model, a visual representation of the temporal information of the activity of a single person at different locations. The main goal of this model is to visualise the behaviour of a person in their home, facilitating the identification of health-risk scenarios and repetitive patterns. We evaluate the model's insight capacity compared with other models using a standard evaluation protocol, and we also test the practical suitability of the MTA graphical model in a commercial home monitoring system. In particular, we implemented 8VISU, a visualisation tool based on MTA. MTA proved to be more than 90% accurate in identifying non-risk scenarios, independently of the length of the record visualised. When the spatial complexity was increased (e.g. the number of rooms), the model provided good accuracy for up to 5 rooms; user preferences and user performance therefore seem to be balanced. Moreover, it also gave high sensitivity levels (over 90%) for 5-8 rooms. Falls are the most recurrent incidents for elderly people. The MTA model outperformed the other models considered in identifying fall scenarios (66% correct) and was the second best for burglary and fatigue scenarios (36% correct). Our experiments also confirm the hypothesis that cyclic models are the most suitable for fatigue scenarios, with the Spiral and MTA models obtaining the most positive identifications. In home monitoring systems, spatiotemporal visualisation is a useful tool for identifying risk and preventing home accidents in elderly people living alone. The MTA model supports visualisation at different stages of the temporal data analysis process. In particular, its explicit representation of space and movement is useful for identifying potential risk scenarios, while the spiral structure can be used for the identification of recurrent patterns. The results of the experiments and the experience of using the visualisation tool 8VISU prove the potential of the MTA graphical model for mining temporal data and for supporting caregivers using home monitoring infrastructures. Copyright © 2015 Elsevier B.V. All rights reserved.
Space-by-time manifold representation of dynamic facial expressions for emotion categorization
Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.
2016-01-01
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
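The space-by-time decomposition can be illustrated with a minimal sketch. Applying scikit-learn's NMF to the temporal and spatial unfoldings of a trial tensor is a simple stand-in for the authors' decomposition, and the data, module counts, and least-squares coefficient step below are all assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_trials, n_time, n_space = 100, 50, 30       # trials x time x Action Units
X = rng.random((n_trials, n_time, n_space))   # synthetic non-negative data

# Temporal modules: NMF on the time x (trial * space) unfolding
W_tem = NMF(n_components=3, random_state=0).fit_transform(
    X.transpose(1, 0, 2).reshape(n_time, -1))         # (time, 3)

# Spatial modules: NMF on the space x (trial * time) unfolding
W_spa = NMF(n_components=2, random_state=0).fit_transform(
    X.transpose(2, 0, 1).reshape(n_space, -1))        # (space, 2)

# Per-trial coefficients H_i with X_i ~= W_tem @ H_i @ W_spa.T; these
# low-dimensional coefficients are what a categorization model would use.
P_t, P_s = np.linalg.pinv(W_tem), np.linalg.pinv(W_spa.T)
H = np.stack([P_t @ X[i] @ P_s for i in range(n_trials)])
print("coefficient tensor:", H.shape)                 # (trials, 3, 2)
```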
Applications of neural networks to landmark detection in 3-D surface data
NASA Astrophysics Data System (ADS)
Arndt, Craig M.
1992-09-01
The problem of identifying key landmarks in 3-dimensional surface data is of considerable interest for solving a number of difficult real-world tasks, including object recognition and image processing. The specific problem that we address in this research is to identify specific anatomical landmarks in human surface data. This is a complex task, currently performed visually by an expert human operator. In order to replace these human operators and increase the reliability of data acquisition, we need to develop a computer algorithm which utilizes the interrelations within the 3-dimensional data to identify the landmarks of interest. The current presentation describes a method for designing, implementing, training, and testing a custom-architecture neural network which performs the landmark identification task. We discuss the performance of the net in relation to human performance on the same task, and how this net has been integrated with other AI and traditional programming methods to produce a powerful analysis tool for computer anthropometry.
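As a hedged illustration of the general approach, not the paper's custom architecture, a small network can be trained to label fixed-size neighbourhoods of 3-D surface points as landmark versus non-landmark; the patch encoding and labels below are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
n_patches, k = 1000, 32                             # k surface points per patch
patches = rng.standard_normal((n_patches, k * 3))   # flattened (x, y, z) coords
labels = rng.integers(0, 2, n_patches)              # 1 = landmark (synthetic)

net = MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=500, random_state=0)
net.fit(patches, labels)
print("training accuracy:", net.score(patches, labels))
```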
Contributions of visual and embodied expertise to body perception.
Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D
2012-01-01
Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.
Li, Jieyue; Newberg, Justin Y; Uhlén, Mathias; Lundberg, Emma; Murphy, Robert F
2012-01-01
The Human Protein Atlas contains immunofluorescence images showing subcellular locations for thousands of proteins. These are currently annotated by visual inspection. In this paper, we describe automated approaches to analyze the images and their use to improve annotation. We began by training classifiers to recognize the annotated patterns. By ranking proteins according to the confidence of the classifier, we generated a list of proteins that were strong candidates for reexamination. In parallel, we applied hierarchical clustering to group proteins and identified proteins whose annotations were inconsistent with the remainder of the proteins in their cluster. These proteins were reexamined by the original annotators, and a significant fraction had their annotations changed. The results demonstrate that automated approaches can provide an important complement to visual annotation.
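Both prongs of the approach, confidence-based ranking and cluster-consistency checking, can be sketched as follows; the feature vectors, classifier, and clustering method are assumptions standing in for the paper's image features and algorithms.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(3)
X = rng.random((300, 20))                     # per-protein image features (assumed)
y = rng.integers(0, 5, 300)                   # visual annotations, 5 patterns

# 1) Rank proteins by classifier confidence in their current annotation;
#    the lowest-confidence proteins are candidates for re-examination.
clf = RandomForestClassifier(random_state=0).fit(X, y)
conf = clf.predict_proba(X)[np.arange(len(y)), y]
review_first = np.argsort(conf)[:20]

# 2) Flag proteins whose annotation disagrees with their cluster's majority
clusters = AgglomerativeClustering(n_clusters=10).fit_predict(X)
flagged = [i for i in range(len(y))
           if y[i] != np.bincount(y[clusters == clusters[i]]).argmax()]
print(len(review_first), "low-confidence;", len(flagged), "cluster-inconsistent")
```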
Visualization and classification of physiological failure modes in ensemble hemorrhage simulation
NASA Astrophysics Data System (ADS)
Zhang, Song; Pruett, William Andrew; Hester, Robert
2015-01-01
In an emergency situation such as hemorrhage, doctors need to predict which patients need immediate treatment and care. This task is difficult because of the diverse responses to hemorrhage across the human population. Ensemble physiological simulations provide a means to sample a diverse range of subjects and may have a better chance of containing the correct solution. However, revealing the patterns and trends in an ensemble simulation is a challenging task. We have developed a visualization framework for ensemble physiological simulations. The visualization helps users identify trends among ensemble members, classify ensemble members into subpopulations for analysis, and predict future events by matching a new patient's data to existing ensembles. We demonstrated the effectiveness of the visualization on simulated physiological data. The lessons learned here can be applied to clinically collected physiological data in the future.
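The matching step can be illustrated with a minimal sketch: a new patient's time series is compared against the ensemble and assigned the subpopulation of its nearest members. The distance metric, series length, and labels are assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(4)
ensemble = rng.standard_normal((500, 120))    # members x time (e.g., blood pressure)
labels = rng.integers(0, 3, 500)              # subpopulation of each member

def predict_subpopulation(patient_ts, k=5):
    """Majority vote over the k ensemble members nearest the patient."""
    d = np.linalg.norm(ensemble - patient_ts, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.bincount(labels[nearest]).argmax())

new_patient = rng.standard_normal(120)
print("predicted subpopulation:", predict_subpopulation(new_patient))
```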
Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan
2015-01-16
We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
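The frequency-tagging readout reduces to a simple spectral computation: the amplitude at the oddball frequency (and its harmonics) divided by the mean amplitude of neighbouring bins. The sketch below simulates a single channel; the sampling rate, window length, and neighbourhood size are assumptions, and a pure sinusoid stands in for the oddball response.

```python
import numpy as np

fs, dur = 512.0, 50.0          # sampling rate (assumed); 50 s puts 1.18 Hz on a bin
rng = np.random.default_rng(5)
t = np.arange(0, dur, 1 / fs)
eeg = np.sin(2 * np.pi * 1.18 * t) + rng.standard_normal(t.size)  # signal + noise

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, n_neigh=10, skip=1):
    """Amplitude at the bin nearest f over the mean of neighbouring bins."""
    i = int(np.argmin(np.abs(freqs - f)))
    neigh = np.r_[amp[i - skip - n_neigh:i - skip],
                  amp[i + skip + 1:i + skip + 1 + n_neigh]]
    return amp[i] / neigh.mean()

# A pure sinusoid has no harmonics, so only 1.18 Hz peaks here; real
# face-selective EEG responses spread across the harmonics as well.
for f in (1.18, 2.36, 3.54):
    print(f"{f:.2f} Hz: SNR = {snr_at(f):.1f}")
```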
Schaeffel, Frank; Simon, Perikles; Feldkaemper, Marita; Ohngemach, Sibylle; Williams, Robert W
2003-09-01
Experiments in animal models of myopia have emphasised the importance of visual input in emmetropisation, but it is also evident that the development of human myopia is influenced to some degree by genetic factors. Molecular genetic approaches can help to identify both the genes involved in the control of ocular development and the potential targets for pharmacological intervention. This review covers a variety of techniques that are being used to study the molecular biology of myopia. In the first part, we describe techniques used to analyse visually induced changes in gene expression: Northern blot, polymerase chain reaction (PCR) and real-time PCR to obtain semi-quantitative and quantitative measures of changes in the transcription level of a known gene, differential display reverse transcription PCR (DD-RT-PCR) to search for new genes that are controlled by visual input, 5' rapid amplification of cDNA ends (5'-RACE) to extend the 5' end of sequences that are regulated by visual input, in situ hybridisation to localise the expression of a given gene in a tissue, and oligonucleotide microarray assays to simultaneously test visually induced changes in thousands of transcripts in single experiments. In the second part, we describe techniques used to localise genomic regions containing genes involved in the control of eye growth and refractive errors in mice and humans. These include quantitative trait loci (QTL) mapping, exploiting experimental test crosses of mice, and transmission disequilibrium tests (TDT) in humans to find chromosomal intervals that harbour genes involved in myopia development. We review several successful applications of this battery of techniques in myopia research.
Trends in HFE Methods and Tools and Their Applicability to Safety Reviews
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, J.M.; Plott, C.; Milanski, J.
2009-09-30
The U.S. Nuclear Regulatory Commission (NRC) conducts human factors engineering (HFE) safety reviews of applicant submittals for new plants and for changes to existing plants. The reviews include the evaluation of the methods and tools (M&Ts) used by applicants as part of their HFE program. The technology used to perform HFE activities has been rapidly evolving, resulting in a whole new generation of HFE M&Ts. The objectives of this research were to identify the current trends in HFE methods and tools, determine their applicability to NRC safety reviews, and identify topics for which the NRC may need additional guidance to support its safety reviews. We conducted a survey that identified over 100 new HFE M&Ts. The M&Ts were assessed to identify general trends. Seven trends were identified: Computer Applications for Performing Traditional Analyses, Computer-Aided Design, Integration of HFE Methods and Tools, Rapid Development Engineering, Analysis of Cognitive Tasks, Use of Virtual Environments and Visualizations, and Application of Human Performance Models. We assessed each trend to determine its applicability to the NRC's reviews by considering (1) whether the nuclear industry is making use of M&Ts for each trend, and (2) whether M&Ts reflecting the trend can be reviewed using the current design review guidance. We concluded that M&T trends that are applicable to the commercial nuclear industry and are expected to impact safety reviews may be considered for review guidance development. Three trends fell into this category: Analysis of Cognitive Tasks, Use of Virtual Environments and Visualizations, and Application of Human Performance Models. The other trends do not need to be addressed at this time.
How cortical neurons help us see: visual recognition in the human brain
Blumberg, Julie; Kreiman, Gabriel
2010-01-01
Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161
Cha, Jaepyeong; Broch, Aline; Mudge, Scott; Kim, Kihoon; Namgoong, Jung-Man; Oh, Eugene; Kim, Peter
2018-01-01
Accurate, real-time identification and display of critical anatomic structures, such as the nerve and vasculature structures, are critical for reducing complications and improving surgical outcomes. Human vision is frequently limited in clearly distinguishing and contrasting these structures. We present a novel imaging system, which enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by a snapshot polarimetry that calculates the anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real-time using a near-infrared laser-speckle-contrast imaging. We evaluate the system by performing in vivo animal studies with qualitative comparison by contrast-agent-aided fluorescence imaging. PMID:29541506
Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude
2016-01-01
The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108
Maher, Geoffrey J.; McGowan, Simon J.; Giannoulatou, Eleni; Verrill, Clare; Goriely, Anne; Wilkie, Andrew O. M.
2016-01-01
De novo point mutations arise predominantly in the male germline and increase in frequency with age, but it has not previously been possible to locate specific, identifiable mutations directly within the seminiferous tubules of human testes. Using microdissection of tubules exhibiting altered expression of the spermatogonial markers MAGEA4, FGFR3, and phospho-AKT, whole genome amplification, and DNA sequencing, we establish an in situ strategy for discovery and analysis of pathogenic de novo mutations. In 14 testes from men aged 39–90 y, we identified 11 distinct gain-of-function mutations in five genes (fibroblast growth factor receptors FGFR2 and FGFR3, tyrosine phosphatase PTPN11, and RAS oncogene homologs HRAS and KRAS) from 16 of 22 tubules analyzed; all mutations have known associations with severe diseases, ranging from congenital or perinatal lethal disorders to somatically acquired cancers. These results support proposed selfish selection of spermatogonial mutations affecting growth factor receptor-RAS signaling, highlight its prevalence in older men, and enable direct visualization of the microscopic anatomy of elongated mutant clones. PMID:26858415
Zebra Stripes through the Eyes of Their Predators, Zebras, and Humans
Melin, Amanda D.; Kline, Donald W.; Hiramatsu, Chihiro; Caro, Tim
2016-01-01
The century-old idea that stripes make zebras cryptic to large carnivores has never been examined systematically. We evaluated this hypothesis by passing digital images of zebras through species-specific spatial and colour filters to simulate their appearance for the visual systems of zebras’ primary predators and zebras themselves. We also measured stripe widths and luminance contrast to estimate the maximum distances from which lions, spotted hyaenas, and zebras can resolve stripes. We found that beyond ca. 50 m (daylight) and 30 m (twilight) zebra stripes are difficult for the estimated visual systems of large carnivores to resolve, but not humans. On moonless nights, stripes are difficult for all species to resolve beyond ca. 9 m. In open treeless habitats where zebras spend most time, zebras are as clearly identified by the lion visual system as are similar-sized ungulates, suggesting that stripes cannot confer crypsis by disrupting the zebra’s outline. Stripes confer a minor advantage over solid pelage in masking body shape in woodlands, but the effect is stronger for humans than for predators. Zebras appear to be less able than humans to resolve stripes although they are better than their chief predators. In conclusion, compared to the uniform pelage of other sympatric herbivores it appears highly unlikely that stripes are a form of anti-predator camouflage. PMID:26799935
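The "maximum resolvable distance" logic reduces to a one-line geometry calculation: a stripe pair is resolvable while one stripe cycle subtends at least the viewer's acuity limit. The acuity values and stripe width below are illustrative assumptions, not the paper's measurements, though the resulting ordering (human > zebra > large carnivore) matches the abstract.

```python
import math

def max_resolve_distance_m(stripe_width_m, acuity_cpd):
    """Distance at which stripes of the given width become unresolvable."""
    cycle_m = 2 * stripe_width_m              # one dark + one light stripe
    min_cycle_deg = 1.0 / acuity_cpd          # smallest resolvable cycle
    return cycle_m / (2 * math.tan(math.radians(min_cycle_deg) / 2))

stripe = 0.05                                 # 5-cm stripes (assumed)
for viewer, cpd in [("human (assumed 60 c/deg)", 60),
                    ("zebra (assumed 20 c/deg)", 20),
                    ("lion (assumed 10 c/deg)", 10)]:
    print(f"{viewer}: ~{max_resolve_distance_m(stripe, cpd):.0f} m")
```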
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
Cicmil, Nela; Krug, Kristine
2015-01-01
Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the ‘causal map’ of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421
Mapping visual cortex in monkeys and humans using surface-based atlases
NASA Technical Reports Server (NTRS)
Van Essen, D. C.; Lewis, J. W.; Drury, H. A.; Hadjikhani, N.; Tootell, R. B.; Bakircioglu, M.; Miller, M. I.
2001-01-01
We have used surface-based atlases of the cerebral cortex to analyze the functional organization of visual cortex in humans and macaque monkeys. The macaque atlas contains multiple partitioning schemes for visual cortex, including a probabilistic atlas of visual areas derived from a recent architectonic study, plus summary schemes that reflect a combination of physiological and anatomical evidence. The human atlas includes a probabilistic map of eight topographically organized visual areas recently mapped using functional MRI. To facilitate comparisons between species, we used surface-based warping to bring functional and geographic landmarks on the macaque map into register with corresponding landmarks on the human map. The results suggest that extrastriate visual cortex outside the known topographically organized areas is dramatically expanded in human compared to macaque cortex, particularly in the parietal lobe.
Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment
NASA Technical Reports Server (NTRS)
Frische, F.; Osterloh, J.-P.; Luedtke, A.
2011-01-01
This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model interacting with a new 4D Flight Management System (FMS). The aim of these experiments was to gather data on human pilot behavior in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception, so the main aspect was to build a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to published results of comparable analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. We then analysed the visual performance of the pilot model; a comparison with human pilots' visual performance revealed important potential for improvement.
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
2012-02-01
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Michalareas, Georgios; Vezoli, Julien; van Pelt, Stan; Schoffelen, Jan-Mathijs; Kennedy, Henry; Fries, Pascal
2016-01-01
Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral and dorsal stream visual areas are differentially affected by inter-areal influences in the alpha-beta band. PMID:26777277
Stereo study as an aid to visual analysis of ERTS and Skylab images
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The parallax on ERTS and Skylab images is sufficiently large for exploitation by human photointerpreters. The ability to view the imagery stereoscopically reduces the signal-to-noise ratio. Stereoscopic examination of orbital data can contribute to studies of spatial, spectral, and temporal variations on the imagery. The combination of true stereo parallax, plus shadow parallax offer many possibilities to human interpreters for making meaningful analyses of orbital imagery.
The Visible Human Project: From Body to Bits.
Ackerman, Michael J
2017-01-01
Atlases of anatomy have long been a mainstay for visualizing and identifying features of the human body [1]. Many are constructed of idealized illustrations rendered so that structures are presented as three-dimensional (3-D) pictures. Others have employed photographs of actual dissections. Still others are composed of collections of artist renderings of organs or areas of interest. All rely on a basically two-dimensional (2-D) graphic display to depict and allow for a better understanding of a complicated 3-D structure.
Toward a digital camera to rival the human eye
NASA Astrophysics Data System (ADS)
Skorka, Orit; Joseph, Dileepan
2011-07-01
All things considered, electronic imaging systems do not rival the human visual system despite notable progress over 40 years since the invention of the CCD. This work presents a method that allows design engineers to evaluate the performance gap between a digital camera and the human eye. The method identifies limiting factors of the electronic systems by benchmarking against the human system. It considers power consumption, visual field, spatial resolution, temporal resolution, and properties related to signal and noise power. A figure of merit is defined as the performance gap of the weakest parameter. Experimental work done with observers and cadavers is reviewed to assess the parameters of the human eye, and assessment techniques are also covered for digital cameras. The method is applied to 24 modern image sensors of various types, where an ideal lens is assumed to complete a digital camera. Results indicate that dynamic range and dark limit are the most limiting factors. The substantial functional gap, from 1.6 to 4.5 orders of magnitude, between the human eye and digital cameras may arise from architectural differences between the human retina, arranged in a multiple-layer structure, and image sensors, mostly fabricated in planar technologies. Functionality of image sensors may be significantly improved by exploiting technologies that allow vertical stacking of active tiers.
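The figure-of-merit idea can be sketched in a few lines: express each parameter as a gap in orders of magnitude relative to the human eye, and let the weakest parameter dominate. All numbers below are placeholders, not values from the paper, though the outcome (dark limit and dynamic range as the limiting factors) mirrors the reported conclusion.

```python
import math

# Placeholder parameter values (linear units), not measurements from the paper
human_eye = {"dynamic_range": 1e6, "dark_limit_lux": 1e-6, "power_W": 1e-2}
camera    = {"dynamic_range": 3e3, "dark_limit_lux": 1e-3, "power_W": 5e-1}
higher_is_better = {"dynamic_range": True, "dark_limit_lux": False, "power_W": False}

def gap_orders(param):
    """Orders of magnitude by which the camera trails the eye (0 if it leads)."""
    h, c = human_eye[param], camera[param]
    ratio = h / c if higher_is_better[param] else c / h
    return max(0.0, math.log10(ratio))

gaps = {p: round(gap_orders(p), 2) for p in human_eye}
figure_of_merit = max(gaps.values())          # weakest parameter dominates
print(gaps, "-> figure of merit:", figure_of_merit)
```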
Ortiz, Tomás; Poch, Joaquín; Santos, Juan M.; Requena, Carmen; Martínez, Ana M.; Ortiz-Terán, Laura; Turrero, Agustín; Barcia, Juan; Nogales, Ramón; Calvo, Agustín; Martínez, José M.; Córdoba, José L.; Pascual-Leone, Alvaro
2011-01-01
Over three months of intensive training with a tactile stimulation device, 18 blind and 10 blindfolded seeing subjects improved in their ability to identify geometric figures by touch. Seven blind subjects spontaneously reported ‘visual qualia’, the subjective sensation of seeing flashes of light congruent with tactile stimuli. In the latter subjects tactile stimulation evoked activation of occipital cortex on electroencephalography (EEG). None of the blind subjects who failed to experience visual qualia, despite identical tactile stimulation training, showed EEG recruitment of occipital cortex. None of the blindfolded seeing humans reported visual-like sensations during tactile stimulation. These findings support the notion that the conscious experience of seeing is linked to the activation of occipital brain regions in people with blindness. Moreover, the findings indicate that provision of visual information can be achieved through non-visual sensory modalities which may help to minimize the disability of blind individuals, affording them some degree of object recognition and navigation aid. PMID:21853098
NASA Astrophysics Data System (ADS)
Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.
1997-05-01
Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.
The Use of Visual Arts as a Window to Diagnosing Medical Pathologies.
Bramstedt, Katrina A
2016-08-01
Observation is a key step preceding diagnosis, prognostication, and treatment. Careful patient observation is a skill that is learned but rarely explicitly taught. Furthermore, proper clinical observation requires more than a glance; it requires attention to detail. In medical school, the art of learning to look can be taught using the medical humanities and especially visual arts such as paintings and film. Research shows that such training improves not only observation skills but also teamwork, listening skills, and reflective and analytical thinking. Overall, the use of visual arts in medical school curricula can build visual literacy: the capacity to identify and analyze facial features, emotions, and general bodily presentations, including contextual features such as clothing, hair, and body art. With the ability to formulate and convey a detailed "picture" of the patient, clinicians can integrate aesthetic and clinical knowledge, helping facilitate the diagnosing of medical pathologies. © 2016 American Medical Association. All Rights Reserved.
Visualizing Parallel Computer System Performance
NASA Technical Reports Server (NTRS)
Malony, Allen D.; Reed, Daniel A.
1988-01-01
Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels; it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.
fMRI evidence for areas that process surface gloss in the human visual cortex
Sun, Hua-Chun; Ban, Hiroshi; Di Luca, Massimiliano; Welchman, Andrew E.
2015-01-01
Surface gloss is an important cue to the material properties of objects. Recent progress in the study of the macaque brain has increased our understanding of the areas involved in processing information about gloss; however, the homologies with the human brain are not yet fully understood. Here we used human functional magnetic resonance imaging (fMRI) measurements to localize brain areas preferentially responding to glossy objects. We measured cortical activity for thirty-two rendered three-dimensional objects that had either Lambertian or specular surface properties. To control for differences in image structure, we overlaid a grid on the images and scrambled its cells. We found activations related to gloss in the posterior fusiform sulcus (pFs) and in area V3B/KO. Subsequent analysis with Granger causality mapping indicated that V3B/KO processes gloss information differently than pFs. Our results identify a small network of mid-level visual areas whose activity may be important in supporting the perception of surface gloss. PMID:25490434
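The grid-scrambling control can be sketched directly; the grid size and image below are assumptions, and the function simply shuffles grid cells, preserving local image structure while destroying the global object percept.

```python
import numpy as np

def grid_scramble(img, n=8, rng=None):
    """Shuffle the cells of an n x n grid overlaid on a (H, W, ...) image."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[0] // n * n, img.shape[1] // n * n
    img = img[:h, :w]                          # crop so the cells tile exactly
    ch, cw = h // n, w // n
    cells = [img[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
             for r in range(n) for c in range(n)]
    order = rng.permutation(len(cells))
    rows = [np.hstack([cells[order[r*n + c]] for c in range(n)])
            for r in range(n)]
    return np.vstack(rows)

scrambled = grid_scramble(np.random.default_rng(6).random((128, 128, 3)))
print(scrambled.shape)                         # (128, 128, 3)
```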
Integrating Human and Ecosystem Health Through Ecosystem Services Frameworks.
Ford, Adriana E S; Graham, Hilary; White, Piran C L
2015-12-01
The pace and scale of environmental change is undermining the conditions for human health. Yet the environment and human health remain poorly integrated within research, policy and practice. The ecosystem services (ES) approach provides a way of promoting integration via the frameworks used to represent relationships between environment and society in simple visual forms. To assess this potential, we undertook a scoping review of ES frameworks and assessed how each represented seven key dimensions, including ecosystem and human health. Of the 84 ES frameworks identified, the majority did not include human health (62%) or include feedback mechanisms between ecosystems and human health (75%). While ecosystem drivers of human health are included in some ES frameworks, more comprehensive frameworks are required to drive forward research and policy on environmental change and human health.
A spatio-temporal model of the human observer for use in display design
NASA Astrophysics Data System (ADS)
Bosman, Dick
1989-08-01
A "quick look" visual model, a kind of standard observer in software, is being developed to estimate the appearance of new display designs before prototypes are built. It operates on images also stored in software. It is assumed that the majority of display design flaws and technology artefacts can be identified in representations of early visual processing, and insight obtained into very local to global (supra-threshold) brightness distributions. Cognitive aspects are not considered because it seems that poor acceptance of technology and design is only weakly coupled to image content.
Automated detection of solar eruptions
NASA Astrophysics Data System (ADS)
Hurlburt, N.
2015-12-01
Observation of the solar atmosphere reveals a wide range of motions, from small-scale jets and spicules to global-scale coronal mass ejections (CMEs). Identifying and characterizing these motions is essential to advancing our understanding of the drivers of space weather. Both automated and visual identification are currently used to identify CMEs. To date, eruptions near the solar surface, which may be precursors to CMEs, have been identified primarily by visual inspection. Here we report on Eruption Patrol (EP): a software module that is designed to automatically identify eruptions in data collected by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory (SDO/AIA). We describe the method underlying the module and compare its results to previous identifications found in the Heliophysics Event Knowledgebase. EP identifies eruption events that are consistent with those found by human annotators, but in a significantly more consistent and quantitative manner. Eruptions are found to be distributed within 15 Mm of the solar surface. They possess peak speeds ranging from 4 to 100 km/s and display a power-law probability distribution over that range. These characteristics are consistent with previous observations of prominences.
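One reported statistic, the power-law distribution of peak speeds, can be illustrated with the standard maximum-likelihood estimator for a lower-bounded power law; the synthetic speeds and true exponent below are assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
v_min, alpha_true = 4.0, 2.0                  # lower bound (km/s), assumed slope
speeds = v_min * (1.0 - rng.random(500)) ** (-1.0 / (alpha_true - 1.0))
speeds = speeds[speeds <= 100.0]              # keep the reported 4-100 km/s range

# Maximum-likelihood (Hill) estimator; truncating at 100 km/s biases it slightly
alpha_hat = 1.0 + speeds.size / np.log(speeds / v_min).sum()
print(f"estimated exponent: {alpha_hat:.2f} (true {alpha_true})")
```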
Neural theory for the perception of causal actions.
Fleischer, Falk; Christensen, Andrea; Caggiano, Vittorio; Thier, Peter; Giese, Martin A
2012-07-01
The efficient prediction of the behavior of others requires the recognition of their actions and an understanding of their action goals. In humans, this process is fast and extremely robust, as demonstrated by classical experiments showing that human observers reliably judge causal relationships and attribute interactive social behavior to strongly simplified stimuli consisting of simple moving geometrical shapes. While psychophysical experiments have identified critical visual features that determine the perception of causality and agency from such stimuli, the underlying detailed neural mechanisms remain largely unclear, and it is an open question why humans developed this advanced visual capability at all. We created pairs of naturalistic and abstract stimuli of hand actions that were exactly matched in terms of their motion parameters. We show that varying critical stimulus parameters for both stimulus types leads to very similar modulations of the perception of causality. However, the additional form information about the hand shape and its relationship with the object supports more fine-grained distinctions for the naturalistic stimuli. Moreover, we show that a physiologically plausible model for the recognition of goal-directed hand actions reproduces the observed dependencies of causality perception on critical stimulus parameters. These results support the hypothesis that selectivity for abstract action stimuli might emerge from the same neural mechanisms that underlie the visual processing of natural goal-directed action stimuli. Furthermore, the model proposes specific detailed neural circuits underlying this visual function, which can be evaluated in future experiments.
Jensen, Greg; Terrace, Herbert
2017-01-01
Humans are highly adept at categorizing visual stimuli, but studies of human categorization are typically validated by verbal reports. This makes it difficult to perform comparative studies of categorization using non-human animals. Interpretation of comparative studies is further complicated by the possibility that animal performance may merely reflect reinforcement learning, whereby discrete features act as discriminative cues for categorization. To assess and compare how humans and monkeys classified visual stimuli, we trained 7 rhesus macaques and 41 human volunteers to respond, in a specific order, to four simultaneously presented stimuli at a time, each belonging to a different perceptual category. These exemplars were drawn at random from large banks of images, such that the stimuli presented changed on every trial. Subjects nevertheless identified and ordered these changing stimuli correctly. Three monkeys learned to order naturalistic photographs; four others, close-up sections of paintings with distinctive styles. Humans learned to order both types of stimuli. All subjects classified stimuli at levels substantially greater than that predicted by chance or by feature-driven learning alone, even when stimuli changed on every trial. However, humans more closely resembled monkeys when classifying the more abstract painting stimuli than the photographic stimuli. This points to a common classification strategy in both species, one that humans can rely on in the absence of linguistic labels for categories. PMID:28961270
Theories of Visual Rhetoric: Looking at the Human Genome.
ERIC Educational Resources Information Center
Rosner, Mary
2001-01-01
Considers how visuals are constructions that are products of a writer's interpretation with its own "power-laden agenda." Reviews the current approach taken by composition scholars, surveys richer interdisciplinary work on visuals, and (by using visuals connected with the Human Genome Project) models an analysis of visuals as rhetoric.…
Visual memory, the long and the short of it: A review of visual working memory and long-term memory.
Schurgin, Mark W
2018-04-23
The majority of research on visual memory has taken a compartmentalized approach, focusing exclusively on memory over shorter or longer durations, that is, visual working memory (VWM) or visual episodic long-term memory (VLTM), respectively. This tutorial provides a review spanning the two areas, with readers in mind who may only be familiar with one or the other. The review is divided into six sections. It starts by distinguishing VWM and VLTM from one another, in terms of how they are generally defined and their relative functions. This is followed by a review of the major theories and methods guiding VLTM and VWM research. The final section is devoted to identifying points of overlap and distinction across the two literatures to provide a synthesis that will inform future research in both fields. By more intimately relating methods and theories from VWM and VLTM to one another, new advances can be made that may shed light on the kinds of representational content and structure supporting human visual memory.
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
Púčik, Jozef; Šaling, Marián; Lukáč, Tomáš; Ondráček, Oldřich; Kucharík, Martin
2014-01-01
The ability of humans to maintain balance in an upright stance and during movement activities is one of the most natural skills affecting everyday life. This ability progressively deteriorates with increasing age, and balance impairment, often aggravated by age-related diseases, can result in falls that adversely impact the quality of life. Falls represent a serious health concern associated with aging. Many investigators from different disciplines, such as medicine, engineering, psychology, and sport, have been attracted to research on the human upright stance. In clinical practice, stabilometry based on the force plate is the most widely available procedure used to evaluate balance. In this paper, we propose a low-cost extension of conventional stabilometry using multimedia technology that allows identification of potentially disturbing effects of visual sensory information. With the proposed extension, a stabilometric assessment in terms of the line integral of center of pressure (COP) during moving-scene stimuli shows higher discrimination power between young healthy subjects and elderly subjects with presumed stronger visual reliance. PMID:27006930
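To make the sway-path measure concrete: a minimal Python sketch (not the authors' code) of the line integral of the COP trace, i.e., the summed Euclidean distance between consecutive force-plate samples. The sampling rate, duration, and units are illustrative assumptions.

```python
import numpy as np

def cop_path_length(cop_x, cop_y):
    """Line integral (total sway path) of the center-of-pressure trace:
    the sum of Euclidean distances between consecutive COP samples."""
    dx = np.diff(np.asarray(cop_x, dtype=float))
    dy = np.diff(np.asarray(cop_y, dtype=float))
    return np.sum(np.hypot(dx, dy))

# Example: 60 s of COP data sampled at 100 Hz (simulated random sway).
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0, 0.05, 6000))  # cm
y = np.cumsum(rng.normal(0, 0.05, 6000))  # cm
print(f"Sway path during stimulus: {cop_path_length(x, y):.1f} cm")
```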
Lagas, Alice K.; Black, Joanna M.; Byblow, Winston D.; Fleming, Melanie K.; Goodman, Lucy K.; Kydd, Robert R.; Russell, Bruce R.; Stinear, Cathy M.; Thompson, Benjamin
2016-01-01
The selective serotonin reuptake inhibitor fluoxetine significantly enhances adult visual cortex plasticity in the rat. This effect is related to decreased gamma-aminobutyric acid (GABA)-mediated inhibition and identifies fluoxetine as a potential agent for enhancing plasticity in the adult human brain. We tested the hypothesis that fluoxetine would enhance visual perceptual learning of a motion direction discrimination (MDD) task in humans. We also investigated (1) the effect of fluoxetine on visual and motor cortex excitability and (2) the impact of increased GABA-mediated inhibition following a single dose of triazolam on post-training MDD task performance. Within a double-blind, placebo-controlled design, 20 healthy adult participants completed a 19-day course of fluoxetine (n = 10, 20 mg per day) or placebo (n = 10). Participants were trained on the MDD task over the final 5 days of fluoxetine administration. Accuracy for the trained MDD stimulus and an untrained MDD stimulus configuration was assessed before and after training, after triazolam, and 1 week after triazolam. Motor and visual cortex excitability were measured using transcranial magnetic stimulation. Fluoxetine did not enhance the magnitude or rate of perceptual learning, and full transfer of learning to the untrained stimulus was observed for both groups. After training was complete, triazolam had no effect on trained task performance but significantly impaired untrained task performance. No consistent effects of fluoxetine on cortical excitability were observed. The results do not support the hypothesis that fluoxetine can enhance learning in humans. However, the specific effect of triazolam on MDD task performance for the untrained stimulus suggests that learning and learning transfer rely on dissociable neural mechanisms. PMID:27807412
Joshi, Ashish; de Araujo Novaes, Magdala; Machiavelli, Josiane; Iyengar, Sriram; Vogler, Robert; Johnson, Craig; Zhang, Jiajie; Hsu, Chiehwen E
2012-01-01
Public health data are typically organized by geospatial unit. GeoVisualization (GeoVis) allows users to see information visually on a map. The aims were to examine telehealth users' perceptions of existing public health GeoVis applications and to obtain users' feedback about features important for the design and development of the human-centered GeoVis application "the SanaViz". We employed a cross-sectional study design using a mixed-methods approach for this pilot study. Twenty users involved with the NUTES telehealth center at the Federal University of Pernambuco (UFPE), Recife, Brazil were enrolled. Open- and closed-ended questionnaires were used to gather data, and the interviews were audio-recorded. Information gathered included socio-demographics, prior spatial skills, and perceptions of using GeoVis to evaluate telehealth services. Card sorting and sketching methods were employed. Univariate analysis was performed for the continuous and categorical variables, and qualitative analysis was performed for the open-ended questions. Existing public health GeoVis applications were difficult to use. The interaction features zooming, linking, and brushing and the representation features Google maps, tables, and bar charts were the most preferred GeoVis features. Early involvement of users is essential to identify the features that should be part of the human-centered GeoVis application "the SanaViz".
Kottlow, Mara; Jann, Kay; Dierks, Thomas; Koenig, Thomas
2012-08-01
Gamma zero-lag phase synchronization has been measured in the animal brain during visual binding. Human scalp EEG studies have used a phase-locking factor (trial-to-trial phase-shift consistency) or gamma amplitude to measure binding but have so far not analyzed common-phase signals. This study introduces a method to identify networks oscillating with near zero-lag phase synchronization in human subjects. We presented unpredictably moving face parts (NOFACE) which, during some periods, produced a complete schematic face (FACE). The amount of zero-lag phase synchronization was measured using global field synchronization (GFS). GFS provides global information on the amount of instantaneous coincidences in specific frequencies throughout the brain. Gamma GFS was increased during the FACE condition. To localize the underlying areas, we correlated gamma GFS with simultaneously recorded BOLD responses. Positive correlates comprised the bilateral middle fusiform gyrus and the left precuneus. These areas may form a network of areas transiently synchronized during face integration, including face-specific as well as binding-specific regions and regions for visual processing in general. Thus, the amount of zero-lag phase synchronization between remote regions of the human visual system can be measured with simultaneously acquired EEG/fMRI. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
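Global field synchronization can be sketched compactly. The Python illustration below follows the published definition as I understand it (an eigenvalue ratio over the 2x2 scatter of per-channel complex Fourier coefficients); the channel count, sampling rate, and simulated 40 Hz common-phase component are assumptions for the demo.

```python
import numpy as np

def global_field_synchronization(eeg, fs, freq):
    """GFS at one frequency (after Koenig et al., my paraphrase): FFT each
    channel and treat its complex coefficient at `freq` as a point in the
    complex plane. If all channels share one phase, the points fall along
    a single line through the origin; GFS measures that elongation via
    the eigenvalues of the 2x2 scatter matrix (1 = common phase, 0 = none)."""
    n_ch, n_samp = eeg.shape
    spec = np.fft.rfft(eeg, axis=1)
    k = int(round(freq * n_samp / fs))
    pts = np.column_stack([spec[:, k].real, spec[:, k].imag])
    s = pts.T @ pts                       # scatter about the origin
    ev = np.sort(np.linalg.eigvalsh(s))[::-1]
    return (ev[0] - ev[1]) / (ev[0] + ev[1])

# Example: 32 channels, 2 s at 250 Hz; a 40 Hz component with a common
# phase but channel-specific amplitude is embedded in independent noise.
rng = np.random.default_rng(1)
t = np.arange(500) / 250.0
gains = rng.uniform(0.5, 2.0, (32, 1))
eeg = rng.normal(0, 1, (32, 500)) + gains * np.sin(2 * np.pi * 40 * t)
print(f"40 Hz GFS: {global_field_synchronization(eeg, 250, 40):.2f}")
```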
Dynamic optical projection of acquired luminescence for aiding oncologic surgery
NASA Astrophysics Data System (ADS)
Sarder, Pinaki; Gullicksrud, Kyle; Mondal, Suman; Sudlow, Gail P.; Achilefu, Samuel; Akers, Walter J.
2013-12-01
Optical imaging enables real-time visualization of intrinsic and exogenous contrast within biological tissues. Applications in human medicine have demonstrated the power of fluorescence imaging to enhance visualization in dermatology, endoscopic procedures, and open surgery. Although few optical contrast agents are available for human medicine at this time, fluorescence imaging is proving to be a powerful tool in guiding medical procedures. Recently, intraoperative detection of fluorescent molecular probes that target cell-surface receptors has been reported for improvement in oncologic surgery in humans. We have developed a novel system, optical projection of acquired luminescence (OPAL), to further enhance real-time guidance of open oncologic surgery. In this method, collected fluorescence intensity maps are projected onto the imaged surface rather than shown on a wall-mounted display monitor. To demonstrate proof-of-principle for OPAL applications in oncologic surgery, lymphatic transport of indocyanine green was visualized in live mice for intraoperative identification of sentinel lymph nodes. Subsequently, peritoneal tumors in a murine model of breast cancer metastasis were identified using OPAL after systemic administration of a tumor-selective fluorescent molecular probe. These initial results clearly show that OPAL can enhance adoption and ease-of-use of fluorescence imaging in oncologic procedures relative to existing state-of-the-art intraoperative imaging systems.
Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors
NASA Astrophysics Data System (ADS)
Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.
2010-03-01
The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step to discover this mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins, based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions of several visual features (75.7% accuracy) are comparable to, or better than, the predictions of human experts.
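The twin-distance logic lends itself to a small worked example. This hedged Python sketch (not the study's pipeline) computes the within-pair distance for one drusen-derived feature and finds the threshold that best separates monozygotic from dizygotic pairs; the simulated feature values and noise levels are invented for illustration.

```python
import numpy as np

def zygosity_accuracy(feature_a, feature_b, is_mz):
    """For one drusen-derived visual feature, compute the within-pair
    distance |f_A - f_B| and find the threshold that best separates
    monozygotic (small distance expected) from dizygotic pairs."""
    d = np.abs(np.asarray(feature_a) - np.asarray(feature_b))
    is_mz = np.asarray(is_mz, dtype=bool)
    best = 0.0
    for thr in np.unique(d):
        acc = np.mean((d <= thr) == is_mz)
        best = max(best, acc)
    return best

# Example with simulated drusen-area fractions for 74 twin pairs.
rng = np.random.default_rng(2)
mz = rng.uniform(0, 1, 40); dz = rng.uniform(0, 1, 34)
fa = np.concatenate([mz, dz])
fb = np.concatenate([mz + rng.normal(0, .05, 40), dz + rng.normal(0, .25, 34)])
labels = np.array([True] * 40 + [False] * 34)
print(f"Best split accuracy: {zygosity_accuracy(fa, fb, labels):.1%}")
```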
Corney, David; Haynes, John-Dylan; Rees, Geraint; Lotto, R. Beau
2009-01-01
Background: The perception of brightness depends on spatial context: the same stimulus can appear light or dark depending on what surrounds it. A less well-known but equally important contextual phenomenon is that the colour of a stimulus can also alter its brightness. Specifically, stimuli that are more saturated (i.e. purer in colour) appear brighter than stimuli that are less saturated at the same luminance. Similarly, stimuli that are red or blue appear brighter than equiluminant yellow and green stimuli. This non-linear relationship between stimulus intensity and brightness, called the Helmholtz-Kohlrausch (HK) effect, was first described in the nineteenth century but has never been explained. Here, we take advantage of the relative simplicity of this ‘illusion’ to explain it and contextual effects more generally, by using a simple Bayesian ideal observer model of the human visual ecology. We also use fMRI brain scans to identify the neural correlates of brightness without changing the spatial context of the stimulus, which has complicated the interpretation of related fMRI studies. Results: Rather than modelling human vision directly, we use a Bayesian ideal observer to model human visual ecology. We show that the HK effect is a result of encoding the non-linear statistical relationship between retinal images and natural scenes that would have been experienced by the human visual system in the past. We further show that the complexity of this relationship is due to the response functions of the cone photoreceptors, which themselves are thought to represent an efficient solution to encoding the statistics of images. Finally, we show that the locus of the response to the relationship between images and scenes lies in the primary visual cortex (V1), if not earlier in the visual system, since the brightness of colours (as opposed to their luminance) accords with activity in V1 as measured with fMRI. Conclusions: The data suggest that perceptions of brightness represent a robust visual response to the likely sources of stimuli, as determined, in this instance, by the known statistical relationship between scenes and their retinal responses. While the responses of the early visual system (receptors in this case) may represent specifically the statistics of images, post-receptor responses more likely represent the statistical relationship between images and scenes. A corollary of this suggestion is that the visual cortex is adapted to relate the retinal image to behaviour given the statistics of its past interactions with the sources of retinal images: the visual cortex is adapted to the signals it receives from the eyes, and not directly to the world beyond. PMID:19333398
Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico
2012-07-24
The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including cingulate cortex, right inferior and middle frontal gyrus that are involved in the go-signal and in decision control. Results on healthy subjects would suggest the appropriateness of an abstract visual feedback provided during motor training. The task contributes to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.
Wilkinson, Krista M; Light, Janice
2011-12-01
Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs. However, many VSDs omit human figures. In this study, the authors sought to describe the distribution of visual attention to humans in naturalistic scenes as compared with other elements. Nineteen college students observed 8 photographs in which a human figure appeared near 1 or more items that might be expected to compete for visual attention (such as a Christmas tree or a table loaded with food). Eye-tracking technology allowed precise recording of participants' gaze. The fixation duration over a 7-s viewing period and latency to view elements in the photograph were measured. Participants fixated on the human figures more rapidly and for longer than expected based on the size of these figures, regardless of the other elements in the scene. Human figures attract attention in a photograph even when presented alongside other attractive distracters. Results suggest that humans may be a powerful means to attract visual attention to key elements in VSDs.
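The size-corrected attention measure can be illustrated in a few lines. In this sketch (my construction, not the authors' analysis), each element's share of total fixation time is divided by the share expected from its relative area alone, so values above 1 mark elements that drew more attention than their size predicts; the dwell times and areas are made up.

```python
import numpy as np

def attention_bias(fixation_ms, areas):
    """Compare each scene element's observed share of fixation time with
    the share expected from its relative size alone (ratio > 1 means the
    element drew more attention than its size predicts)."""
    fix = np.asarray(fixation_ms, dtype=float)
    area = np.asarray(areas, dtype=float)
    return (fix / fix.sum()) / (area / area.sum())

# Example: a 7 s viewing of a scene with a human figure and two distracters.
elements = ["human figure", "Christmas tree", "table with food"]
bias = attention_bias([3200, 2100, 1700], [0.06, 0.55, 0.39])
for name, b in zip(elements, bias):
    print(f"{name}: {b:.1f}x the size-expected dwell time")
```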
Information visualization: Beyond traditional engineering
NASA Technical Reports Server (NTRS)
Thomas, James J.
1995-01-01
This presentation addresses a different aspect of the human-computer interface; specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond traditional computer graphics and CAD, enabling new approaches to engineering. Specifically, IV must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.
ERIC Educational Resources Information Center
Kalbfleisch, M. Layne; Gillmarten, Charles
2013-01-01
As neuroimaging technologies increase their sensitivity to assess the function of the human brain and results from these studies draw the attention of educators, it becomes paramount to identify misconceptions about what these data illustrate and how these findings might be applied to educational contexts. Some of these "neuromyths" have…
ERIC Educational Resources Information Center
Giacobe, Nicklaus A.
2013-01-01
Cyber-security involves monitoring a complex network of inter-related computers to prevent, identify, and remediate undesired actions. This work is performed in organizations by human analysts. These analysts monitor cyber-security sensors to develop and maintain situation awareness (SA) of both normal and abnormal activities that occur on…
Boström, Jan; Elger, Christian E.; Mormann, Florian
2016-01-01
Recording extracellularly from neurons in the brains of animals in vivo is among the most established experimental techniques in neuroscience, and has recently become feasible in humans. Many interesting scientific questions can be addressed only when extracellular recordings last several hours, and when individual neurons are tracked throughout the entire recording. Such questions regard, for example, neuronal mechanisms of learning and memory consolidation, and the generation of epileptic seizures. Several difficulties have so far limited the use of extracellular multi-hour recordings in neuroscience: Datasets become huge, and data are necessarily noisy in clinical recording environments. No methods for spike sorting of such recordings have been available. Spike sorting refers to the process of identifying the contributions of several neurons to the signal recorded in one electrode. To overcome these difficulties, we developed Combinato: a complete data-analysis framework for spike sorting in noisy recordings lasting twelve hours or more. Our framework includes software for artifact rejection, automatic spike sorting, manual optimization, and efficient visualization of results. Our completely automatic framework excels at two tasks: It outperforms existing methods when tested on simulated and real data, and it enables researchers to analyze multi-hour recordings. We evaluated our methods on both short and multi-hour simulated datasets. To evaluate the performance of our methods in an actual neuroscientific experiment, we used data from neurosurgical patients, recorded in order to identify visually responsive neurons in the medial temporal lobe. These neurons responded to the semantic content, rather than to visual features, of a given stimulus. To test our methods with multi-hour recordings, we made use of neurons in the human medial temporal lobe that respond selectively to the same stimulus in the evening and next morning. PMID:27930664
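As a flavor of the first stage such a pipeline automates, here is a minimal threshold-based spike detector in Python (a standard robust-noise approach, not Combinato's actual algorithm). The median-based sigma estimate, multiplier, and refractory period are common defaults, used here as assumptions.

```python
import numpy as np

def detect_spikes(signal, fs, mult=5.0, refractory_ms=1.0):
    """Minimal spike-detection step of the kind a sorting pipeline runs
    first: estimate the noise level robustly (median-based, so spikes do
    not inflate it) and find threshold crossings, enforcing a short
    refractory gap so one spike is not counted twice."""
    x = np.asarray(signal, dtype=float)
    noise_sd = np.median(np.abs(x)) / 0.6745   # robust sigma estimate
    thr = mult * noise_sd
    above = np.flatnonzero(x > thr)
    gap = int(refractory_ms * fs / 1000)
    spikes, last = [], -gap
    for i in above:
        if i - last >= gap:
            spikes.append(i)
            last = i
    return np.array(spikes), thr

rng = np.random.default_rng(3)
sig = rng.normal(0, 1, 32000)          # 1 s at 32 kHz
sig[::4000] += 8.0                     # inject 8 artificial spikes
idx, thr = detect_spikes(sig, fs=32000)
print(f"threshold={thr:.2f}, detected {len(idx)} events")
```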
Preparation for the Implantation of an Intracortical Visual Prosthesis in a Human
2014-10-01
Principal Investigator: Philip R Troyk, PhD. Grant number: W81XWH-12-1-0394. The goal of the funded work is to prepare an intracortical visual prosthesis (ICVP) for testing in a human. No human trial testing of the prosthesis will occur under the funded work. Preparatory tasks include
Ocular input for human melatonin regulation: relevance to breast cancer
NASA Technical Reports Server (NTRS)
Glickman, Gena; Levin, Robert; Brainard, George C.
2002-01-01
The impact of breast cancer on women across the world has been extensive and severe. As prevalence of breast cancer is greatest in industrialized regions, exposure to light at night has been proposed as a potential risk factor. This theory is supported by the epidemiological observations of decreased breast cancer in blind women and increased breast cancer in women who do shift-work. In addition, human, animal and in vitro studies which have investigated the melatonin-cancer dynamic indicate an apparent relationship between light, melatonin and cancer, albeit complex. Recent developments in understanding melatonin regulation by light in humans are examined, with particular attention to factors that contribute to the sensitivity of the light-induced melatonin suppression response. Specifically, the role of spectral characteristics of light is addressed, and recent relevant action spectrum studies in humans and other mammalian species are discussed. Across five action spectra for circadian and other non-visual responses, a peak sensitivity between 446-484 nm was identified. Under highly controlled exposure circumstances, less than 1 lux of monochromatic light elicited a significant suppression of nocturnal melatonin. In view of the possible link between light exposure, melatonin suppression and cancer risk, it is important to continue to identify the basic related ocular physiology. Visual performance, rather than circadian function, has been the primary focus of architectural lighting systems. It is now necessary to reevaluate lighting strategies, with consideration of circadian influences, in an effort to maximize physiological homeostasis and health.
Ahmed, N; Zheng, Ziyi; Mueller, K
2012-12-01
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled "Disguise" which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.
Differential processing of binocular and monocular gloss cues in human visual cortex
Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.
2016-01-01
The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596
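The transfer test has a simple computational core: train a classifier on patterns evoked by one cue and score it on patterns evoked by the other. A toy Python/scikit-learn sketch, with simulated voxel patterns standing in for the fMRI data:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n, n_vox = 80, 50
labels = np.repeat([0, 1], n // 2)            # 0 = matte, 1 = glossy

# Simulated voxel patterns: a shared gloss signal plus cue-specific noise,
# standing in for responses to monocular- vs binocular-cue stimuli.
shared = rng.normal(0, 1, n_vox)
def patterns(seed):
    r = np.random.default_rng(seed)
    return labels[:, None] * shared + r.normal(0, 1, (n, n_vox))
mono, bino = patterns(5), patterns(6)

clf = LinearSVC(C=1.0, max_iter=10000).fit(mono, labels)
transfer_acc = clf.score(bino, labels)        # train mono -> test bino
print(f"Cross-cue transfer accuracy: {transfer_acc:.1%}")
```

Above-chance transfer accuracy is the signature the paper reports for V3B/KO, suggesting a representation shared across the two cue types.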
Latent binocular function in amblyopia.
Chadnova, Eva; Reynaud, Alexandre; Clavagnier, Simon; Hess, Robert F
2017-11-01
Recently, psychophysical studies have shown that humans with amblyopia do have binocular function that is not normally revealed due to dominant suppressive interactions under normal viewing conditions. Here we use magnetoencephalography (MEG) combined with dichoptic visual stimulation to investigate the underlying binocular function in humans with amblyopia for stimuli that, because of their temporal properties, would be expected to bypass suppressive effects and to reveal any underlying binocular function. We recorded contrast response functions in visual cortical area V1 of amblyopes and normal observers using a steady-state visually evoked response (SSVER) protocol. We used stimuli that were frequency-tagged at 4 Hz and 6 Hz, which allowed identification of the responses from each eye and were of a sufficiently high temporal frequency (>3 Hz) to bypass suppression. To characterize binocular function, we compared dichoptic masking between the two eyes in normal and amblyopic participants as well as interocular phase differences in the two groups. We observed that the primary visual cortex responds less to the stimulation of the amblyopic eye compared to the fellow eye. The pattern of interaction in the amblyopic visual system, however, was not significantly different between the amblyopic and fellow eyes. However, the amblyopic suppressive interactions were lower than those observed in the binocular system of our normal observers. Furthermore, we identified an interocular processing delay of approximately 20 ms in our amblyopic group. To conclude, when suppression is greatly reduced, such as the case with our stimulation above 3 Hz, the amblyopic visual system exhibits a lack of binocular interactions. Copyright © 2017 Elsevier Ltd. All rights reserved.
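Frequency tagging reduces, at readout, to measuring spectral amplitude at each eye's tag frequency. A small Python sketch (illustrative only; MEG preprocessing is omitted), with the tag frequencies taken from the abstract and everything else assumed:

```python
import numpy as np

def tagged_amplitudes(meg, fs, tags=(4.0, 6.0)):
    """Recover each eye's contribution to a dichoptic SSVER recording by
    reading out the spectral amplitude at that eye's tag frequency."""
    x = np.asarray(meg, dtype=float)
    spec = np.abs(np.fft.rfft(x)) * 2 / len(x)   # sinusoid amplitude
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return {f: spec[np.argmin(np.abs(freqs - f))] for f in tags}

# Example: 10 s at 600 Hz; the "fellow eye" (4 Hz) drives a larger response.
fs, t = 600, np.arange(6000) / 600
sig = 1.0 * np.sin(2*np.pi*4*t) + 0.4 * np.sin(2*np.pi*6*t)
sig += np.random.default_rng(7).normal(0, 0.5, t.size)
for f, a in tagged_amplitudes(sig, fs).items():
    print(f"{f:.0f} Hz tag amplitude: {a:.2f}")
```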
Coordinates of Human Visual and Inertial Heading Perception
Crane, Benjamin Thomas
2015-01-01
Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear if perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position were examined. The observations were fit using a two degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865
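One plausible reading of the two-parameter PVD fit, sketched in Python (my interpretation, not the authors' code): headings are decoded after applying a coordinate offset and a lateral-motion gain, and the two parameters are recovered by least squares on wrapped angular residuals.

```python
import numpy as np
from scipy.optimize import least_squares

def pvd_predict(theta, lateral_gain, offset):
    """Two-parameter heading decoder in the spirit of the paper's PVD
    model (my reading): headings are re-expressed in a frame shifted by
    `offset` (deg), with lateral components weighted by `lateral_gain`,
    then read out as an angle again."""
    th = np.deg2rad(theta + offset)
    return np.rad2deg(np.arctan2(lateral_gain * np.sin(th), np.cos(th))) - offset

def fit_pvd(stim_deg, perceived_deg):
    def resid(p):
        err = pvd_predict(stim_deg, *p) - perceived_deg
        return (err + 180) % 360 - 180        # wrap angular residuals
    return least_squares(resid, x0=[1.0, 0.0]).x

# Example: simulated responses with mild lateral over-weighting and a
# 13 deg shift such as a gaze-direction bias might produce.
stim = np.arange(0, 360, 5)
fake = pvd_predict(stim, 1.3, 13) + np.random.default_rng(8).normal(0, 3, stim.size)
gain, off = fit_pvd(stim, fake)
print(f"lateral gain ~ {gain:.2f}, coordinate offset ~ {off:.1f} deg")
```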
Preparation for the Implantation of an Intracortical Visual Prosthesis in a Human
2013-10-01
Principal Investigator: Philip R Troyk, PhD. Grant number: W81XWH-12-1-0394. The goal of the funded work is to prepare an intracortical visual prosthesis (ICVP) for testing in a human. No human trial testing of the prosthesis will occur under the funded work.
Visual Culture, Art History and the Humanities
ERIC Educational Resources Information Center
Castaneda, Ivan
2009-01-01
This essay will discuss the need for the humanities to address visual culture studies as part of its interdisciplinary mission in today's university. Although mostly unnoticed in recent debates in the humanities over historical and theoretical frameworks, the relatively new field of visual culture has emerged as a corrective to a growing…
The Human is the Loop: New Directions for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren
2014-01-28
Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.
Common Visual Preference for Curved Contours in Humans and Great Apes.
Munar, Enric; Gómez-Puerto, Gerardo; Call, Josep; Nadal, Marcos
2015-01-01
Among the visual preferences that guide many everyday activities and decisions, from consumer choices to social judgment, preference for curved over sharp-angled contours is commonly thought to have played an adaptive role throughout human evolution, favoring the avoidance of potentially harmful objects. However, because nonhuman primates also exhibit preferences for certain visual qualities, it is conceivable that humans' preference for curved contours is grounded on perceptual and cognitive mechanisms shared with extant nonhuman primate species. Here we aimed to determine whether nonhuman great apes and humans share a visual preference for curved over sharp-angled contours using a 2-alternative forced choice experimental paradigm under comparable conditions. Our results revealed that the human group and the great ape group indeed share a common preference for curved over sharp-angled contours, but that they differ in the manner and magnitude with which this preference is expressed behaviorally. These results suggest that humans' visual preference for curved objects evolved from earlier primate species' visual preferences, and that during this process it became stronger, but also more susceptible to the influence of higher cognitive processes and preference for other visual features.
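The 2AFC analysis bottoms out in a binomial comparison against chance. A minimal exact version in Python, with trial counts invented for illustration:

```python
from math import comb

def binomial_p(successes, trials, p=0.5):
    """One-sided exact binomial test: probability of observing at least
    this many curved-contour choices if the subject chose at random."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Example: a subject picks the curved item on 68 of 100 2AFC trials.
print(f"P(>=68/100 | chance) = {binomial_p(68, 100):.4f}")
```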
TelCoVis: Visual Exploration of Co-occurrence in Urban Human Mobility Based on Telco Data.
Wu, Wenchao; Xu, Jiayi; Zeng, Haipeng; Zheng, Yixian; Qu, Huamin; Ni, Bing; Yuan, Mingxuan; Ni, Lionel M
2016-01-01
Understanding co-occurrence in urban human mobility (i.e. people from two regions visit an urban place during the same time span) is of great value in a variety of applications, such as urban planning, business intelligence, social behavior analysis, as well as containing contagious diseases. In recent years, the widespread use of mobile phones brings an unprecedented opportunity to capture large-scale and fine-grained data to study co-occurrence in human mobility. However, due to the lack of systematic and efficient methods, it is challenging for analysts to carry out in-depth analyses and extract valuable information. In this paper, we present TelCoVis, an interactive visual analytics system, which helps analysts leverage their domain knowledge to gain insight into the co-occurrence in urban human mobility based on telco data. Our system integrates visualization techniques with new designs and combines them in a novel way to enhance analysts' perception for a comprehensive exploration. In addition, we propose to study the correlations in co-occurrence (i.e. people from multiple regions visit different places during the same time span) by means of biclustering techniques that allow analysts to better explore coordinated relationships among different regions and identify interesting patterns. The case studies based on a real-world dataset and interviews with domain experts have demonstrated the effectiveness of our system in gaining insights into co-occurrence and facilitating various analytical tasks.
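Biclustering a region-by-place co-occurrence matrix can be prototyped with off-the-shelf tools. This sketch uses scikit-learn's spectral co-clustering on a toy matrix with planted blocks; it is a stand-in for, not a reproduction of, the TelCoVis analysis.

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy co-occurrence matrix: rows = home regions, columns = visited places,
# entries = how often people from that region were observed at that place
# during the same time spans. Planted blocks stand in for the coordinated
# mobility patterns the paper mines from telco data.
rng = np.random.default_rng(9)
X = rng.poisson(1.0, (12, 10)).astype(float)
X[:6, :5] += 8      # one coordinated region/place group
X[6:, 5:] += 8      # another

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
print("region groups:", model.row_labels_)
print("place groups: ", model.column_labels_)
```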
Monkey cortex through fMRI glasses
Vanduffel, Wim; Zhu, Qi; Orban, Guy A.
2015-01-01
In 1998 several groups reported the feasibility of functional magnetic resonance imaging (fMRI) experiments in monkeys, with the goal to bridge the gap between invasive nonhuman primate studies and human functional imaging. These studies yielded critical insights in the neuronal underpinnings of the BOLD signal. Furthermore, the technology has been successful in guiding electrophysiological recordings and identifying focal perturbation targets. Finally, invaluable information was obtained concerning human brain evolution. We here provide a comprehensive overview of awake monkey fMRI studies mainly confined to the visual system. We review the latest insights about the topographic organization of monkey visual cortex and discuss the spatial relationships between retinotopy and category and feature selective clusters. We briefly discuss the functional layout of parietal and frontal cortex and continue with a summary of some fascinating functional and effective connectivity studies. Finally, we review recent comparative fMRI experiments and speculate about the future of nonhuman primate imaging. PMID:25102559
Reward associations impact both iconic and visual working memory.
Infanti, Elisa; Hickey, Clayton; Turatto, Massimo
2015-02-01
Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results into the investigation of iconic memory and visual working memory. In two experiments we asked participants to perform a visual-search task where different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dynamics of normalization underlying masking in human visual cortex.
Tsai, Jeffrey J; Wade, Alex R; Norcia, Anthony M
2012-02-22
Stimulus visibility can be reduced by other stimuli that overlap the same region of visual space, a process known as masking. Here we studied the neural mechanisms of masking in humans using source-imaged steady state visual evoked potentials and frequency-domain analysis over a wide range of relative stimulus strengths of test and mask stimuli. Test and mask stimuli were tagged with distinct temporal frequencies and we quantified spectral response components associated with the individual stimuli (self terms) and responses due to interaction between stimuli (intermodulation terms). In early visual cortex, masking alters the self terms in a manner consistent with a reduction of input contrast. We also identify a novel signature of masking: a robust intermodulation term that peaks when the test and mask stimuli have equal contrast and disappears when they are widely different. We fit all of our data simultaneously with a family of divisive gain control models that differed only in their dynamics. Models with either very short or very long temporal integration constants for the gain pool performed worse than a model with an integration time of ∼30 ms. Finally, the absolute magnitudes of the response were controlled by the ratio of the stimulus contrasts, not their absolute values. This contrast-contrast invariance suggests that many neurons in early visual cortex code relative rather than absolute contrast. Together, these results provide a more complete description of masking within the normalization framework of contrast gain control and suggest that contrast normalization accomplishes multiple functional goals.
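The divisive gain-control account and the contrast-contrast invariance result can be reproduced qualitatively with a few lines of Python. The exponent and semi-saturation constant below are generic textbook values, not the fitted parameters from the paper:

```python
import numpy as np

def normalization_response(c_test, c_mask, n=2.0, sigma=0.05):
    """Steady-state divisive gain-control (normalization) model: the
    test response is the test drive divided by a gain pool that also
    includes the mask, so the mask suppresses the test."""
    return c_test**n / (sigma**n + c_test**n + c_mask**n)

# Contrast-contrast invariance: scaling both contrasts together changes
# the response far less than changing their ratio (for contrasts >> sigma).
for c in (0.1, 0.2, 0.4):
    r_equal = normalization_response(c, c)
    r_strong_mask = normalization_response(c, 4 * c)
    print(f"c={c:.1f}: equal-contrast R={r_equal:.3f}, 4x mask R={r_strong_mask:.3f}")
```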
Fields, Chris
2011-01-01
The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599
Mechanisms of Photoreceptor Patterning in Vertebrates and Invertebrates.
Viets, Kayla; Eldred, Kiara; Johnston, Robert J
2016-10-01
Across the animal kingdom, visual systems have evolved to be uniquely suited to the environments and behavioral patterns of different species. Visual acuity and color perception depend on the distribution of photoreceptor (PR) subtypes within the retina. Retinal mosaics can be organized into three broad categories: stochastic/regionalized, regionalized, and ordered. We describe here the retinal mosaics of flies, zebrafish, chickens, mice, and humans, and the gene regulatory networks controlling proper PR specification in each. By drawing parallels in eye development between these divergent species, we identify a set of conserved organizing principles and transcriptional networks that govern PR subtype differentiation. Copyright © 2016 Elsevier Ltd. All rights reserved.
The online social self: an open vocabulary approach to personality.
Kern, Margaret L; Eichstaedt, Johannes C; Schwartz, H Andrew; Dziurzynski, Lukasz; Ungar, Lyle H; Stillwell, David J; Kosinski, Michal; Ramones, Stephanie M; Seligman, Martin E P
2014-04-01
We present a new open language analysis approach that identifies and visually summarizes the dominant naturally occurring words and phrases that most distinguished each Big Five personality trait. Using millions of posts from 69,792 Facebook users, we examined the correlation of personality traits with online word usage. Our analysis method consists of feature extraction, correlational analysis, and visualization. The distinguishing words and phrases were face valid and provide insight into processes that underlie the Big Five traits. Open-ended, data-driven exploration of large datasets, combined with established psychological theory and measures, offers new tools to further understand the human psyche. © The Author(s) 2013.
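The feature-extraction-plus-correlation step scales down to a toy example. This Python sketch (not the authors' pipeline, which also handles phrases and topics) correlates relative word frequencies with a trait score across users and reports the most positively correlated words; all counts and scores are invented.

```python
import numpy as np

def distinguishing_words(doc_term_counts, trait_scores, vocab, top_k=3):
    """Open-vocabulary step in miniature: correlate each word's relative
    usage with a trait score across users and return the words with the
    strongest positive Pearson correlations."""
    X = np.asarray(doc_term_counts, dtype=float)
    y = np.asarray(trait_scores, dtype=float)
    X = X / X.sum(axis=1, keepdims=True)          # relative frequencies
    Xz = (X - X.mean(0)) / (X.std(0) + 1e-12)
    yz = (y - y.mean()) / y.std()
    r = Xz.T @ yz / len(y)                        # Pearson r per word
    order = np.argsort(r)[::-1][:top_k]
    return [(vocab[i], round(float(r[i]), 2)) for i in order]

# Toy example: 6 users x 4 words, with an extraversion score per user.
vocab = ["party", "amazing", "book", "quiet"]
counts = [[9, 5, 1, 0], [7, 6, 2, 1], [8, 4, 0, 1],
          [1, 2, 7, 6], [0, 1, 9, 8], [2, 1, 6, 7]]
extraversion = [4.5, 4.1, 4.7, 2.0, 1.5, 2.2]
print(distinguishing_words(counts, extraversion, vocab))
```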
Light-weight analyzer for odor recognition
Vass, Arpad A; Wise, Marcus B
2014-05-20
The invention provides a lightweight analyzer, e.g., detector, capable of locating clandestine graves. The detector utilizes the very specific and unique chemicals identified in the database of human decompositional odor. This detector, based on specific chemical compounds found relevant to human decomposition, is the next step forward in clandestine grave detection and will take the guesswork out of current methods using canines and ground-penetrating radar, which have historically been unreliable. The detector is self-contained, portable, and built for field use. Both visual and auditory cues are provided to the operator.
Targeted exploration and analysis of large cross-platform human transcriptomic compendia
Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.
2016-01-01
We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801
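The query-driven search idea can be caricatured in a few lines: score genes by their average correlation with the query set, weighting each dataset by how strongly it co-expresses the query genes. This is a stand-in for, not a reproduction of, SEEK's query-level cross-validation algorithm, with simulated expression matrices:

```python
import numpy as np

def coexpression_search(datasets, query_idx, n_genes):
    """SEEK-style idea in miniature: score every gene by its average
    correlation with the query genes, weighting each dataset by how well
    it connects the query genes to one another (a crude stand-in for
    SEEK's query-level cross-validation)."""
    total, weight_sum = np.zeros(n_genes), 0.0
    for X in datasets:                     # X: genes x samples
        C = np.corrcoef(X)
        q = C[np.ix_(query_idx, query_idx)]
        w = max(q[np.triu_indices_from(q, 1)].mean(), 0)  # dataset relevance
        total += w * C[query_idx].mean(axis=0)
        weight_sum += w
    return np.argsort(total / max(weight_sum, 1e-12))[::-1]

rng = np.random.default_rng(11)
n_genes = 100
base = rng.normal(0, 1, (1, 30))
X1 = rng.normal(0, 1, (n_genes, 30)); X1[:10] += base   # genes 0-9 co-regulated
X2 = rng.normal(0, 1, (n_genes, 40))                    # irrelevant dataset
ranked = coexpression_search([X1, X2], query_idx=[0, 1, 2], n_genes=n_genes)
print("top hits:", ranked[:10])
```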
Human factors guidelines for applications of 3D perspectives: a literature review
NASA Astrophysics Data System (ADS)
Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise
2009-05-01
Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research are discussed.
A Computational Model of Spatial Visualization Capacity
ERIC Educational Resources Information Center
Lyon, Don R.; Gunzelmann, Glenn; Gluck, Kevin A.
2008-01-01
Visualizing spatial material is a cornerstone of human problem solving, but human visualization capacity is sharply limited. To investigate the sources of this limit, we developed a new task to measure visualization accuracy for verbally-described spatial paths (similar to street directions), and implemented a computational process model to…
Systematic tracking, visualizing, and interpreting of consumer feedback for drinking water quality.
Dietrich, Andrea M; Phetxumphou, Katherine; Gallagher, Daniel L
2014-12-01
Consumer feedback and complaints provide utilities with useful data about consumer perceptions of aesthetic water quality in the distribution system. This research provides a systematic approach to interpreting consumer water quality complaint data from four water utilities that recorded consumer complaints but did not routinely process the data. The utilities tended to write down a myriad of descriptors that were too numerous or contained a variety of spellings, so that electronic "harvesting" was not possible and much manual labor was required to categorize the complaints into major areas, such as those suggested by the Drinking Water Taste and Odor Wheel or existing check-sheets. When the consumer complaint data were categorized and visualized using spider (or radar) and run-time plots, major taste, odor, and appearance patterns emerged that clarified the issue and could provide guidance to the utility on the nature and extent of the problem. A caveat is that while humans readily identify visual issues with the water, such as color, cloudiness, or rust, describing specific tastes and odors in drinking water is acknowledged to be much more difficult for humans to achieve without training. This was demonstrated with two utility groups and a group of consumers identifying the odors of orange, 2-methylisoborneol, and dimethyl trisulfide. All three groups readily and succinctly identified the familiar orange odor. The two utility groups were much more able to identify the musty odor of 2-methylisoborneol, which was likely familiar to them from their work with raw and finished water. Dimethyl trisulfide, a garlic-onion odor associated with sulfur compounds in drinking water, was the least familiar to all three groups, although the laboratory staff did best. These results indicate that utility personnel should be tolerant of consumers who can assuredly say the water is different, but cannot describe the problem. They also indicate that a taste-and-odor (T&O) program at a utility would benefit from identification of aesthetic issues in water. Copyright © 2014 Elsevier Ltd. All rights reserved.
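A spider (radar) plot of categorized complaint counts is straightforward to produce. A matplotlib sketch with hypothetical monthly counts; the category labels merely echo the odor-wheel groupings named in the text:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical monthly complaint counts per aesthetic category.
categories = ["musty/earthy", "chlorinous", "sulfur", "metallic",
              "color/cloudy", "rusty"]
counts = [14, 22, 5, 9, 31, 12]

angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
values = counts + counts[:1]                 # close the polygon
theta = np.concatenate([angles, angles[:1]])

ax = plt.subplot(polar=True)
ax.plot(theta, values, marker="o")
ax.fill(theta, values, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(categories, fontsize=8)
ax.set_title("Consumer complaints by aesthetic category")
plt.tight_layout()
plt.savefig("complaint_radar.png", dpi=150)
```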
Development of Glutamatergic Proteins in Human Visual Cortex across the Lifespan.
Siu, Caitlin R; Beshara, Simon P; Jones, David G; Murphy, Kathryn M
2017-06-21
Traditionally, human primary visual cortex (V1) has been thought to mature within the first few years of life, based on anatomical studies of synapse formation, and establishment of intracortical and intercortical connections. Human vision, however, develops well beyond the first few years. Previously, we found prolonged development of some GABAergic proteins in human V1 (Pinto et al., 2010). Yet as >80% of synapses in V1 are excitatory, it remains unanswered whether the majority of synapses regulating experience-dependent plasticity and receptive field properties develop late, like their inhibitory counterparts. To address this question, we used Western blotting of postmortem tissue from human V1 (12 female, 18 male) covering a range of ages. Then we quantified a set of postsynaptic glutamatergic proteins (PSD-95, GluA2, GluN1, GluN2A, GluN2B), calculated indices for functional pairs that are developmentally regulated (GluA2:GluN1; GluN2A:GluN2B), and determined interindividual variability. We found early loss of GluN1, prolonged development of PSD-95 and GluA2 into late childhood, protracted development of GluN2A until ∼40 years, and dramatic loss of GluN2A in aging. The GluA2:GluN1 index switched at ∼1 year, but the GluN2A:GluN2B index continued to shift until ∼40 years before changing back to GluN2B in aging. We also identified young childhood as a stage of heightened interindividual variability. The changes show that human V1 develops gradually through a series of five orchestrated stages, making it likely that V1 participates in visual development and plasticity across the lifespan. SIGNIFICANCE STATEMENT: Anatomical structure of human V1 appears to mature early, but vision changes across the lifespan. This discrepancy has fostered two hypotheses: either other aspects of V1 continue changing, or later changes in visual perception depend on extrastriate areas. Previously, we showed that some GABAergic synaptic proteins change across the lifespan, but most synapses in V1 are excitatory, leaving unanswered how they change. So we studied expression of glutamatergic proteins in human V1 to determine their development. Here we report prolonged maturation of glutamatergic proteins, with five stages that map onto life-long changes in human visual perception. Thus, the apparent discrepancy between development of structure and function may be explained by life-long synaptic changes in human V1. Copyright © 2017 the authors.
McBride, Sebastian; Huelse, Martin; Lee, Mark
2013-01-01
Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as ‘active vision’, to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of ‘where’ and ‘what’ information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate ‘active’ visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a ‘priority map’. PMID:23437044
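Several of the listed requirements (convergence of top-down and bottom-up information onto a centralized priority map, a saccade threshold, task relevance as excitation and inhibition, and medium-term inhibition of return) can be captured in a few lines. The following Python sketch is a hypothetical illustration under stated assumptions, not the authors' robotic implementation.

```python
# A minimal sketch (assumed, not the authors' implementation) of a priority
# map combining bottom-up salience with top-down relevance, a threshold that
# triggers a saccade, and inhibition of return at previously fixated sites.
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32
bottom_up = rng.random((H, W))          # e.g. feature-contrast salience
top_down = np.zeros((H, W))             # task relevance (excitation/inhibition)
top_down[10:15, 20:25] = 0.8            # excite a task-relevant region
inhibition_of_return = np.zeros((H, W)) # spatial memory of past fixations

SACCADE_THRESHOLD = 1.2

def next_fixation():
    priority = bottom_up + top_down - inhibition_of_return
    peak = np.unravel_index(np.argmax(priority), priority.shape)
    if priority[peak] < SACCADE_THRESHOLD:
        return None                      # nothing exceeds threshold: hold gaze
    inhibition_of_return[peak] += 1.0    # medium-term inhibition of return
    return peak

for _ in range(3):
    print("saccade to", next_fixation())
```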
Performance characteristics of a visual-search human-model observer with sparse PET image data
NASA Astrophysics Data System (ADS)
Gifford, Howard C.
2012-02-01
As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung, and soft-tissue tumors. Human and model observers read the images in coronal, sagittal, and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.
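For readers unfamiliar with the CNPW observer, the numpy sketch below shows the general shape of a channelized nonprewhitening observer applied at candidate ("hot blob") locations. The difference-of-Gaussian channels and the candidate-location interface are assumptions for illustration, not the study's code.

```python
# A schematic channelized nonprewhitening (CNPW) observer (illustrative only).
# The observer correlates a channelized expected-signal template with the
# channelized image data at each candidate location; the maximum response
# gives the rating and the localization.
import numpy as np

def dog_channels(size, sigmas=(1, 2, 4, 8)):
    """Radially symmetric difference-of-Gaussian channels, flattened rows."""
    y, x = np.indices((size, size)) - size // 2
    r2 = x**2 + y**2
    gauss = [np.exp(-r2 / (2 * s**2)) for s in sigmas]
    chans = [g1 - g2 for g1, g2 in zip(gauss, gauss[1:])]
    return np.stack([c.ravel() / np.linalg.norm(c) for c in chans])

def cnpw_scan(image, signal, locations, patch=16):
    U = dog_channels(patch)                    # channels x pixels
    w = U @ signal.ravel()                     # channelized signal template
    best_score, best_loc = -np.inf, None
    for (i, j) in locations:                   # candidate "hot blob" sites
        g = image[i:i + patch, j:j + patch].ravel()
        score = w @ (U @ g)                    # nonprewhitening: no covariance
        if score > best_score:
            best_score, best_loc = score, (i, j)
    return best_score, best_loc

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
sig = np.zeros((16, 16)); sig[6:10, 6:10] = 1.0    # expected tumor profile
print(cnpw_scan(img, sig, [(10, 10), (30, 30)]))
```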
Rehabilitation regimes based upon psychophysical studies of prosthetic vision
NASA Astrophysics Data System (ADS)
Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.
2009-06-01
Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded through retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that utilizes individual phosphenes as the elementary building blocks to compose a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, distorted spatial distribution of phosphenes, restricted field of view, an eccentrically located phosphene field and limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program which aims to assist the prosthesis recipients to understand what they are seeing, and also to adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train the recipients to conduct visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visual-guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of their recipients.
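The low-resolution, quantized-luminance phosphene rendering that such training regimes simulate can be sketched as follows; the grid size, number of luminance levels, and Gaussian phosphene profile are illustrative assumptions, not parameters of any trial device.

```python
# An illustrative phosphene-vision simulation (assumptions mine): an image is
# reduced to a sparse grid of Gaussian phosphenes with few luminance levels.
import numpy as np

def phosphene_render(image, grid=(10, 10), levels=4, sigma=3.0):
    H, W = image.shape
    gh, gw = grid
    out = np.zeros_like(image, dtype=float)
    ys = np.linspace(0, H, gh, endpoint=False) + H / (2 * gh)
    xs = np.linspace(0, W, gw, endpoint=False) + W / (2 * gw)
    yy, xx = np.indices((H, W))
    for cy in ys:
        for cx in xs:
            # sample local mean brightness, quantize to few luminance levels
            patch = image[int(cy - H/(2*gh)):int(cy + H/(2*gh)),
                          int(cx - W/(2*gw)):int(cx + W/(2*gw))]
            level = np.round(patch.mean() * (levels - 1)) / (levels - 1)
            out += level * np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))
    return np.clip(out, 0, 1)

img = np.random.rand(60, 60)          # stand-in for a camera frame in [0, 1]
sim = phosphene_render(img)
```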
ViSEN: methodology and software for visualization of statistical epistasis networks
Hu, Ting; Chen, Yuanzhu; Kiralis, Jeff W.; Moore, Jason H.
2013-01-01
The non-linear interaction effect among multiple genetic factors, i.e. epistasis, has been recognized as a key component in understanding the underlying genetic basis of complex human diseases and phenotypic traits. Due to statistical and computational complexity, most epistasis studies are limited to interactions of order two. We developed ViSEN to analyze and visualize both two-way and three-way epistatic interactions. ViSEN not only identifies strong interactions among pairs or trios of genetic attributes, but also provides a global interaction map that shows neighborhood and clustering structures. This visualized information can help infer the underlying genetic architecture of complex diseases and generate plausible hypotheses for further biological validation. ViSEN is implemented in Java and freely available at https://sourceforge.net/projects/visen/. PMID:23468157
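Interaction strengths in statistical epistasis networks are typically entropy-based. A minimal sketch of pairwise interaction information, IG(A;B;C) = I(A,B;C) − I(A;C) − I(B;C), is shown below (an illustration of the general measure; see the SourceForge link above for ViSEN's actual Java implementation).

```python
# A hedged sketch of entropy-based interaction information for discrete
# genotypes A, B and phenotype C (illustrative, not ViSEN's code).
import numpy as np
from collections import Counter

def entropy(*cols):
    joint = Counter(zip(*cols))
    p = np.array(list(joint.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y):
    return entropy(x) + entropy(y) - entropy(x, y)

def interaction_info(a, b, c):
    i_ab_c = entropy(a, b) + entropy(c) - entropy(a, b, c)  # I(A,B;C)
    return i_ab_c - mutual_info(a, c) - mutual_info(b, c)

rng = np.random.default_rng(1)
a, b = rng.integers(0, 3, 500), rng.integers(0, 3, 500)
c = (a + b) % 2                       # purely epistatic toy phenotype
print(interaction_info(a, b, c))      # positive: synergy beyond main effects
```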
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
Rethinking Visual Analytics for Streaming Data Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris
In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are necessary components of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.
Liu, Hesheng; Agam, Yigal; Madsen, Joseph R.; Kreiman, Gabriel
2010-01-01
The difficulty of visual recognition stems from the need to achieve high selectivity while maintaining robustness to object transformations within hundreds of milliseconds. Theories of visual recognition differ in whether the neuronal circuits invoke recurrent feedback connections or not. The timing of neurophysiological responses in visual cortex plays a key role in distinguishing between bottom-up and top-down theories. Here we quantified at millisecond resolution the amount of visual information conveyed by intracranial field potentials from 912 electrodes in 11 human subjects. We could decode object category information from human visual cortex in single trials as early as 100 ms post-stimulus. Decoding performance was robust to depth rotation and scale changes. The results suggest that physiological activity in the temporal lobe can account for key properties of visual recognition. The fast decoding in single trials is compatible with feed-forward theories and provides strong constraints for computational models of human vision. PMID:19409272
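Millisecond-resolution decoding of this kind commonly follows a sliding-window pattern; the sketch below uses synthetic data and scikit-learn as a stand-in (the authors' actual decoder and features may differ).

```python
# A schematic sliding-window decoder (assumed analysis sketch, not the
# authors' pipeline): train/test a classifier in each time window to trace
# when category information becomes available.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 300   # trials x electrodes x ms bins
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)               # two object categories
X[y == 1, :, 100:150] += 0.5                   # category signal from ~100 ms

window = 25
for t0 in range(0, n_times - window, 50):
    feats = X[:, :, t0:t0 + window].mean(axis=2)   # mean potential per channel
    acc = cross_val_score(LinearSVC(), feats, y, cv=5).mean()
    print(f"{t0:3d}-{t0 + window:3d} ms: accuracy {acc:.2f}")
```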
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The structure of the human eye and the absorption spectra of its pigments limit our visual perception of light; perception is most responsive to stimuli in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near-infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and that responses display a quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound, 11-cis-retinyl-propylamine Schiff base, demonstrate direct two-photon isomerization of the visual chromophore. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings show that human visual perception of near-infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
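The quadratic power dependence is the operational signature of a two-photon process: doubling the laser power quadruples the response. A toy check of that relationship (synthetic numbers, for illustration only):

```python
# Fit log(response) vs. log(power); a slope near 2 indicates a two-photon
# (quadratic) process. The data here are synthetic, not the paper's.
import numpy as np

power = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # arbitrary units
noise = 1 + 0.05 * np.random.default_rng(0).standard_normal(5)
response = 0.3 * power**2 * noise
slope, intercept = np.polyfit(np.log(power), np.log(response), 1)
print(f"log-log slope = {slope:.2f} (≈2 indicates two-photon activation)")
```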
Modulation of visually evoked movement responses in moving virtual environments.
Reed-Jones, Rebecca J; Vallis, Lori Ann
2009-01-01
Virtual-reality technology is being increasingly used to understand how humans perceive and act in the moving world around them. What is currently not clear is how virtual reality technology is perceived by human participants and what virtual scenes are effective in evoking movement responses to visual stimuli. We investigated the effect of virtual-scene context on human responses to a virtual visual perturbation. We hypothesised that exposure to a natural scene that matched the visual expectancies of the natural world would create a perceptual set towards presence, and thus visual guidance of body movement in a subsequently presented virtual scene. Results supported this hypothesis; responses to a virtual visual perturbation presented in an ambiguous virtual scene were increased when participants first viewed a scene that consisted of natural landmarks which provided 'real-world' visual motion cues. Further research in this area will provide a basis of knowledge for the effective use of this technology in the study of human movement responses.
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
Visual Graphics for Human Rights, Social Justice, Democracy and the Public Good
ERIC Educational Resources Information Center
Nanackchand, Vedant; Berman, Kim
2012-01-01
The value of human rights in a democratic South Africa is constantly threatened and often waived for nefarious reasons. We contend that the use of visual graphics among incoming university visual art students provides a mode of engagement that helps to inculcate awareness of human rights, social responsibility, and the public good in South African…
Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András
2017-07-01
The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors rarely take dog-human differences in visual perception explicitly into consideration when designing their experiments. With an image manipulation program we altered stationary images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software also shows the effect of their lower visual acuity and brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing, or glancing to the left or right side. Half of the pictures were shown after they had been altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less correctly and with a slower response time than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take into consideration the differences between the perceptual abilities of dogs and humans by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.
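A crude approximation of such a dog-vision transform can be written in a few lines; the equal-weight red/green fusion for dichromacy and the Gaussian blur for reduced acuity are my own assumptions (the authors' software presumably uses calibrated parameters).

```python
# An illustrative dog-vision approximation (assumptions mine, not the
# authors' software): collapse red and green toward one channel to mimic
# dichromacy, then blur to mimic lower acuity.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_vision(rgb, blur_sigma=3.0):
    rgb = rgb.astype(float)
    yellow = rgb[..., 0] * 0.5 + rgb[..., 1] * 0.5        # red/green fuse
    out = np.stack([yellow, yellow, rgb[..., 2]], axis=-1) # yellow-blue axis
    return gaussian_filter(out, sigma=(blur_sigma, blur_sigma, 0))

frame = np.random.rand(120, 160, 3)   # stand-in for a stimulus photograph
approx = dog_vision(frame)
```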
Improving and analyzing signage within a healthcare setting.
Rousek, J B; Hallbeck, M S
2011-11-01
Healthcare facilities are increasingly utilizing pictograms rather than text signs to help direct people. The purpose of this study was to analyze a wide variety of standardized healthcare pictograms and the effects of color contrasts and complexity for participants with both normal and impaired vision. Fifty (25 males, 25 females) participants completed a signage recognition questionnaire and identified pictograms while wearing vision simulators to represent specific visual impairment. The study showed that certain color contrasts, complexities and orientations can help or hinder comprehension of signage for people with and without visual impairment. High contrast signage with consistent pictograms involving human figures (not too detailed or too abstract) is most identifiable. Standardization of healthcare signage is recommended to speed up and aid the cognitive thought process in detecting signage and determining meaning. These fundamental signage principles are critical in producing an efficient, universal wayfinding system for healthcare facilities. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Modeling visual problem solving as analogical reasoning.
Lovett, Andrew; Forbus, Kenneth
2017-01-01
We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Polans, James; Cunefare, David; Cole, Eli; Keller, Brenton; Mettu, Priyatham S.; Cousins, Scott W.; Allingham, Michael J.; Izatt, Joseph A.; Farsiu, Sina
2017-01-01
Optical coherence tomography angiography (OCTA) is a promising technique for non-invasive visualization of vessel networks in the human eye. We debut a system capable of acquiring wide field-of-view (>70°) OCT angiograms without mosaicking. Additionally, we report on enhancing the visualization of peripheral microvasculature using wavefront sensorless adaptive optics (WSAO). We employed a fast WSAO algorithm that enabled wavefront correction in <2 seconds by iterating the mirror shape at the speed of OCT B-scans rather than volumes. Also, we contrasted ~7° field-of-view OCTA angiograms acquired in the periphery with and without WSAO correction. On average, WSAO improved the sharpness of microvasculature by 65% in healthy and 38% in diseased eyes. Preliminary observations demonstrated that the location of 7° images could be identified directly from the wide field-of-view angiogram. A pilot study on a normal subject and patients with diabetic retinopathy showed the impact of utilizing WSAO for OCTA when visualizing peripheral vasculature pathologies. PMID:28059209
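Wavefront-sensorless AO algorithms of this kind are typically hill climbers over mirror-mode coefficients, scoring each trial shape with an image-sharpness metric. The sketch below is schematic (the metric, step size, and mode count are assumptions; the paper's per-B-scan iteration is only emulated by the acquire callback).

```python
# A schematic WSAO optimizer (a sketch under stated assumptions, not the
# paper's algorithm): perturb each mirror mode and keep changes that
# increase an image-sharpness metric.
import numpy as np

def sharpness(image):
    return np.mean(image**2)          # a common intensity-based metric

def wsao_optimize(acquire, n_modes=12, step=0.1, iters=3):
    """acquire(coeffs) -> image; coeffs are mirror-mode amplitudes."""
    coeffs = np.zeros(n_modes)
    best = sharpness(acquire(coeffs))
    for _ in range(iters):
        for m in range(n_modes):
            for delta in (+step, -step):
                trial = coeffs.copy(); trial[m] += delta
                s = sharpness(acquire(trial))   # one fast B-scan per trial
                if s > best:
                    best, coeffs = s, trial
                    break
    return coeffs, best

rng = np.random.default_rng(0)
true_aberration = rng.normal(0, 0.2, 12)
toy_acquire = lambda c: np.exp(-np.sum((c - true_aberration)**2)) * np.ones((4, 4))
coeffs, score = wsao_optimize(toy_acquire)
print(f"final sharpness {score:.3f}")
```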
NASA Technical Reports Server (NTRS)
Raghunandan, Sneha; Vyas, Ruchi J.; Vizzeri, Gianmarco; Taibbi, Giovanni; Zanello, Susana B.; Ploutz-Snyder, Robert; Parsons-Wingerter, Patricia A.
2016-01-01
Significant risks for visual impairment associated with increased intracranial pressure (VIIP) are incurred by microgravity spaceflight, especially long-duration missions. Impairments include decreased near visual acuity, posterior globe flattening, choroidal folds, optic disc edema, and cotton wool spots. We hypothesize that microgravity-induced fluid shifts result in pathological changes within the retinal blood vessels that precede development of visual and other ocular impairments. Potential contributions of retinal vascular remodeling to VIIP etiology are therefore being investigated with NASA's innovative VESsel GENeration Analysis (VESGEN) software in two studies: (1) head-down tilt in human subjects before and after 70 days of bed rest, and (2) U.S. crew members before and after ISS missions. VESGEN analysis in previous research supported by the US National Institutes of Health identified surprising new opportunities to regenerate retinal vessels during early-stage, potentially reversible progression of the visually impairing and blinding disease, diabetic retinopathy.
Situation exploration in a persistent surveillance system with multidimensional data
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.
2013-03-01
There is an emerging need for fusing hard and soft sensor data in an efficient surveillance system to provide accurate estimation of situation awareness. These mostly abstract, multi-dimensional and multi-sensor data pose a great challenge to the user in performing analysis of multi-threaded events efficiently and cohesively. To address this concern, an interactive Visual Analytics (VA) application was developed for rapid assessment and evaluation of different hypotheses based on a context-sensitive ontology spawned from taxonomies describing human/human and human/vehicle/object interactions. A methodology is described here for generating relevant ontologies in a Persistent Surveillance System (PSS), and it is demonstrated how they can be utilized in the context of PSS to track and identify group activities pertaining to potential threats. The proposed VA system allows for visual analysis of raw data as well as metadata that have spatiotemporal representation and content-based implications. Additionally, a technique for rapid search of tagged information, contingent on ranking and confidence, is explained for analysis of multi-dimensional data. Lastly, the issue of uncertainty associated with processing and interpretation of heterogeneous data is also addressed.
Modeling human comprehension of data visualizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie
This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
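As a flavor of saliency-based evaluation, a minimal center-surround contrast map in the Itti tradition is sketched below; this is a generic illustration, not the project's Data Visualization Saliency Model.

```python
# A minimal, Itti-style saliency sketch (illustrative only): local
# center-surround contrast of intensity predicts which regions of a chart
# draw attention first.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency(image_gray):
    center = gaussian_filter(image_gray, 1.0)
    surround = gaussian_filter(image_gray, 8.0)
    s = np.abs(center - surround)
    return s / (s.max() + 1e-9)

chart = np.zeros((100, 100)); chart[40:60, 40:60] = 1.0  # a bold element
print(np.unravel_index(np.argmax(saliency(chart)), chart.shape))
```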
Human V4 Activity Patterns Predict Behavioral Performance in Imagery of Object Color.
Bannert, Michael M; Bartels, Andreas
2018-04-11
Color is special among basic visual features in that it can form a defining part of objects that are engrained in our memory. Whereas most neuroimaging research on human color vision has focused on responses related to external stimulation, the present study investigated how sensory-driven color vision is linked to subjective color perception induced by object imagery. We recorded fMRI activity in male and female volunteers during viewing of abstract color stimuli that were red, green, or yellow in half of the runs. In the other half we asked them to produce mental images of colored, meaningful objects (such as tomato, grapes, banana) corresponding to the same three color categories. Although physically presented color could be decoded from all retinotopically mapped visual areas, only hV4 allowed predicting colors of imagined objects when classifiers were trained on responses to physical colors. Importantly, only neural signal in hV4 was predictive of behavioral performance in the color judgment task on a trial-by-trial basis. The commonality between neural representations of sensory-driven and imagined object color and the behavioral link to neural representations in hV4 identifies area hV4 as a perceptual hub linking externally triggered color vision with color in self-generated object imagery. SIGNIFICANCE STATEMENT Humans experience color not only when visually exploring the outside world, but also in the absence of visual input, for example when remembering, dreaming, and during imagery. It is not known where neural codes for sensory-driven and internally generated hue converge. In the current study we evoked matching subjective color percepts, one driven by physically presented color stimuli, the other by internally generated color imagery. This allowed us to identify area hV4 as the only site where neural codes of corresponding subjective color perception converged regardless of its origin. Color codes in hV4 also predicted behavioral performance in an imagery task, suggesting it forms a perceptual hub for color perception. Copyright © 2018 the authors 0270-6474/18/383657-12$15.00/0.
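The key analysis here is cross-condition decoding: train a classifier on responses to physically presented colors, then test it on responses recorded during imagery. A schematic of that logic with synthetic data (an assumed sketch, not the study's fMRI pipeline):

```python
# Cross-condition decoding sketch: a classifier trained on "perception"
# patterns generalizes to "imagery" patterns only if the two share a code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, voxels = 90, 200
y = np.repeat([0, 1, 2], n // 3)               # red / green / yellow
pattern = rng.standard_normal((3, voxels))     # shared color code
X_perception = pattern[y] + rng.standard_normal((n, voxels))
X_imagery    = pattern[y] + 2 * rng.standard_normal((n, voxels))  # noisier

clf = LogisticRegression(max_iter=1000).fit(X_perception, y)
print("cross-decoding accuracy:", clf.score(X_imagery, y))
```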
Panta, Sandeep R; Wang, Runtang; Fries, Jill; Kalyanam, Ravi; Speer, Nicole; Banich, Marie; Kiehl, Kent; King, Margaret; Milham, Michael; Wager, Tor D; Turner, Jessica A; Plis, Sergey M; Calhoun, Vince D
2016-01-01
In this paper we propose a web-based approach for quick visualization of big data from brain magnetic resonance imaging (MRI) scans using a combination of an automated image capture and processing system, nonlinear embedding, and interactive data visualization tools. We draw upon thousands of MRI scans captured via the COllaborative Imaging and Neuroinformatics Suite (COINS). We then interface the output of several analysis pipelines based on structural and functional data to a t-distributed stochastic neighbor embedding (t-SNE) algorithm, which reduces each scan in the input data set to two dimensions while preserving the local structure of the data. Finally, we interactively display the output of this approach via a web page based on the data-driven documents (D3) JavaScript library. Two distinct approaches were used to visualize the data. In the first approach, we computed multiple quality control (QC) values from pre-processed data, which were used as inputs to the t-SNE algorithm. This approach helps in assessing the quality of each data set relative to others. In the second, computed variables of interest (e.g., brain volume or voxel values from segmented gray matter images) were used as inputs to the t-SNE algorithm. This approach helps in identifying interesting patterns in the data sets. We demonstrate these approaches using multiple examples from over 10,000 data sets, including (1) quality control measures calculated from phantom data over time, (2) quality control data from human functional MRI data across various studies, scanners, and sites, and (3) volumetric and density measures from human structural MRI data across various studies, scanners, and sites. Results from (1) and (2) show the potential of our approach to combine t-SNE data reduction with interactive color coding of variables of interest to quickly identify visually distinct clusters of data (i.e., data sets with poor QC, clustering of data by site). Results from (3) demonstrate interesting patterns of gray matter and volume, and evaluate how they map onto variables including scanner, age, and gender. In sum, the proposed approach allows researchers to rapidly identify and extract meaningful information from big data sets. Such tools are becoming increasingly important as datasets grow larger.
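A condensed sketch of the described pipeline (QC features to t-SNE to JSON for a D3 scatterplot); the feature matrix is synthetic and COINS-specific steps are omitted.

```python
# QC features -> 2-D t-SNE embedding -> JSON consumable by a D3 scatterplot.
# The feature matrix is synthetic; COINS-specific processing is omitted.
import json
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
qc = rng.standard_normal((300, 8))            # e.g. SNR, motion, ghosting, ...
xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(qc)

records = [{"id": i, "x": float(x), "y": float(y)} for i, (x, y) in enumerate(xy)]
with open("tsne_points.json", "w") as f:
    json.dump(records, f)                      # loaded client-side by D3
```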
Liu, Tao; Jung, HaeWon; Liu, Jianfei; Droettboom, Michael; Tam, Johnny
2017-10-01
The retinal pigment epithelial (RPE) cells contain intrinsic fluorophores that can be visualized using infrared autofluorescence (IRAF). Although IRAF is routinely utilized in the clinic for visualizing retinal health and disease, it is currently not possible to discern cellular details using IRAF due to limits in resolution. We demonstrate that combining adaptive optics (AO) with IRAF (AO-IRAF) enables higher-resolution imaging of the IRAF signal, revealing the RPE mosaic in the living human eye. Quantitative analysis of visualized RPE cells in 10 healthy subjects across various eccentricities demonstrates the possibility of in vivo density measurements of RPE cells, which ranged from 6505 to 5388 cells/mm² for the areas measured (peaking at the fovea). We also identified cone photoreceptors in relation to underlying RPE cells, and found that each RPE cell supports on average up to 18.74 cone photoreceptors at the fovea, down to an average of 1.03 cone photoreceptors per RPE cell at an eccentricity of 6 mm. Clinical application of AO-IRAF to a patient with retinitis pigmentosa illustrates the potential for AO-IRAF imaging to become a valuable complement to the current landscape of high-resolution imaging modalities.
The vertical occipital fasciculus: a century of controversy resolved by in vivo measurements.
Yeatman, Jason D; Weiner, Kevin S; Pestilli, Franco; Rokem, Ariel; Mezer, Aviv; Wandell, Brian A
2014-12-02
The vertical occipital fasciculus (VOF) is the only major fiber bundle connecting dorsolateral and ventrolateral visual cortex. Only a handful of studies have examined the anatomy of the VOF or its role in cognition in the living human brain. Here, we trace the contentious history of the VOF, beginning with its original discovery in monkey by Wernicke (1881) and in human by Obersteiner (1888), to its disappearance from the literature, and recent reemergence a century later. We introduce an algorithm to identify the VOF in vivo using diffusion-weighted imaging and tractography, and show that the VOF can be found in every hemisphere (n = 74). Quantitative T1 measurements demonstrate that tissue properties, such as myelination, in the VOF differ from neighboring white-matter tracts. The terminations of the VOF are in consistent positions relative to cortical folding patterns in the dorsal and ventral visual streams. Recent findings demonstrate that these same anatomical locations also mark cytoarchitectonic and functional transitions in dorsal and ventral visual cortex. We conclude that the VOF is likely to serve a unique role in the communication of signals between regions on the ventral surface that are important for the perception of visual categories (e.g., words, faces, bodies, etc.) and regions on the dorsal surface involved in the control of eye movements, attention, and motion perception.
Differential processing of binocular and monocular gloss cues in human visual cortex.
Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E
2016-06-01
The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
Corollary discharge contributes to perceived eye location in monkeys.
Joiner, Wilsaan M; Cavanaugh, James; FitzGibbon, Edmond J; Wurtz, Robert H
2013-11-01
Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.
ERIC Educational Resources Information Center
Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.
2012-01-01
Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…
ERIC Educational Resources Information Center
Stevens, J.A.
2005-01-01
Four experiments were completed to characterize the utilization of visual imagery and motor imagery during the mental representation of human action. In Experiment 1, movement time functions for a motor imagery human locomotion task conformed to a speed-accuracy trade-off similar to Fitts' Law, whereas those for a visual imagery object motion task…
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
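Group differences in such coherent-versus-scrambled discrimination are conventionally summarized with signal-detection sensitivity (d'); the helper below is a generic sketch, not the study's analysis code.

```python
# Signal-detection sensitivity (d') from discrimination counts; the
# log-linear correction guards against hit/false-alarm rates of exactly 0 or 1.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```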
Real-time imaging of single neuronal cell apoptosis in patients with glaucoma
Normando, Eduardo M.; Cardoso, M. Jorge; Miodragovic, Serge; Jeylani, Seham; Davis, Benjamin M.; Guo, Li; Ourselin, Sebastien; A’Hern, Roger; Bloom, Philip A.
2017-01-01
See Herms and Schön (doi:10.1093/brain/awx100) for a scientific commentary on this article. Retinal cell apoptosis occurs in many ocular neurodegenerative conditions including glaucoma—the major cause of irreversible blindness worldwide. Using a new imaging technique that we have called DARC (detection of apoptosing retinal cells), which until now has only been demonstrated in animal models, we assessed if annexin 5 labelled with fluorescent dye DY-776 (ANX776) could be used safely in humans to identify retinal cell apoptosis. Eight patients with glaucomatous neurodegeneration and evidence of progressive disease, and eight healthy subjects were randomly assigned to intravenous ANX776 doses of 0.1, 0.2, 0.4 and 0.5 mg in an open-label, phase 1 clinical trial. In addition to assessing the safety, tolerability and pharmacokinetics of ANX776, the study aimed to explore whether DARC could successfully visualize individual retinal cell apoptosis in vivo in humans, with the DARC count defined as the total number of unique ANX776-labelled spots. DARC enabled retinal cell apoptosis to be identified in the human retina using ANX776. Single ANX776-labelled cells were visualized in a dose-dependent pattern (P < 0.001) up to 6 h after injection. The DARC count was significantly higher (2.37-fold, 95% confidence interval: 1.4–4.03, P = 0.003) in glaucoma patients compared to healthy controls, and was significantly (P = 0.045) greater in patients who later showed increasing rates of disease progression, based on either optic disc, retinal nerve fibre layer or visual field parameters. Additionally, the DARC count significantly correlated with decreased central corneal thickness (Spearman’s R = −0.68, P = 0.006) and increased cup-disc ratios (Spearman’s R = 0.47, P = 0.038) in glaucoma patients and with increased age (Spearman’s R = 0.77, P = 0.001) in healthy controls. Finally, ANX776 was found to be safe and well-tolerated with no serious adverse events, and a short half-life (10–36 min). This proof-of-concept study demonstrates that retinal cell apoptosis can be identified in the human retina with increased levels of activity in glaucomatous neurodegenerative disease. To our knowledge, this is the first time individual neuronal apoptosis has been visualized in vivo in humans and is the first demonstration of detection of individual apoptotic cells in a neurodegenerative disease. Furthermore, our results suggest the level of apoptosis (‘DARC count’) is predictive of disease activity, indicating the potential of DARC as a surrogate marker. Although further trials are clearly needed, this study validates experimental findings supporting the use of DARC as a method of detection and monitoring of patients with glaucomatous neurodegeneration, where retinal ganglion cell apoptosis is an established process and where there is a real need for tools to non-invasively assess treatment efficacy. PMID:28449038
Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization
Marai, G. Elisabeta
2018-01-01
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process with the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550
Good expert knowledge, small scope.
Mayer, Horst
2014-01-01
During many years of occupational stress research, mostly within the German governmental program for the "Humanization of Work Life", remarkable deficits concerning visual work were seen, the most striking being the lack of cooperation between the different experts. For the work reported here, hard arguments and ideas for solutions had to be found. A pilot study in 21 enterprises was realized (1602 employees with different visual work tasks). A test set of screening parameters (visual acuity, refraction, phoria, binocular cooperation and efficiency, accommodation range, and color vision) was measured. The glasses and/or contact lenses worn were registered and the visual tasks analyzed. In work at visual display units (VDU), eye movements were recorded and standardized questionnaires were given (health, stress, visual work situation). Because of the heterogeneity of the sample, only simple statistics were applied: in groups of different visual work, the complaints, symptoms, hassles, and uplifts were clustered (SAS software) and correlated with the results of the visual tests. Later a special project in 8 companies (676 employees) was carried out. The results were published in [14]. Discomfort and asthenopic symptoms could be seen as an interaction of the combination of tasks and working conditions with the clusters of individual functionalisms, frequently originating in postural compromises. Three main causes of stress could be identified: 1. demands inadequate with regard to intensity, resolution, amount, and/or time structure; 2. prevention of elementary perceptive needs; 3. entire use of partial capacities of the visual organ. Symptoms were also correlated with heteronomy. Other findings: the influence of the adaptation/accommodation ratio; the distracting role of attractors, especially in multitasking jobs; and the influence of high luminance differences. Dry eyes were very common; they could be attributed to a high screen position, low light, monotonous tasks, and office climate. For some parameters a diurnal rhythm could be identified. No special programs for ageing employees (the right glasses, retinal problems, signs of destabilization of vision) were found anywhere. In all enterprises, the ergophthalmological and visual-ergonomic knowledge of the occupational physicians was poor, visual ergonomists were not available, and cooperation with ophthalmologists and optometrists was very poor; the former, moreover, had little knowledge of modern work.
Denion, Eric; Hitier, Martin; Levieil, Eric; Mouriaux, Frédéric
2015-01-01
While convergent, the human orbit differs from that of non-human apes in that its lateral orbital margin is significantly more rearward. This rearward position does not obstruct the additional visual field gained through eye motion. This additional visual field is therefore considered to be wider in humans than in non-human apes. A mathematical model was designed to quantify this difference. The mathematical model is based on published computed tomography data in the human neuro-ocular plane (NOP) and on additional anatomical data from 100 human skulls and 120 non-human ape skulls (30 gibbons; 30 chimpanzees / bonobos; 30 orangutans; 30 gorillas). It is used to calculate temporal visual field eccentricity values in the NOP first in the primary position of gaze then for any eyeball rotation value in abduction up to 45° and any lateral orbital margin position between 85° and 115° relative to the sagittal plane. By varying the lateral orbital margin position, the human orbit can be made “non-human ape-like”. In the Pan-like orbit, the orbital margin position (98.7°) was closest to the human orbit (107.1°). This modest 8.4° difference resulted in a large 21.1° difference in maximum lateral visual field eccentricity with eyeball abduction (Pan-like: 115°; human: 136.1°). PMID:26190625
ERIC Educational Resources Information Center
Genovesi, Jacqueline Sue
2011-01-01
The earth is in an environmental crisis that can only be addressed by changing human conservation attitudes. People must have the scientific knowledge to make informed decisions. Research identifying new promising practices, for the use of live animals that incorporate new theories of learning and factors proven to impact learning, is critical. …
Visual analytics of geo-social interaction patterns for epidemic control.
Luo, Wei
2016-08-10
Human interaction and population mobility determine the spatio-temporal course of the spread of an airborne disease. This research views such spreads as geo-social interaction problems, because population mobility connects different groups of people over the geographical locations via which the viruses transmit. Previous research argued that geo-social interaction patterns identified from population movement data hold great potential for designing effective pandemic mitigation. However, little work has been done to examine the effectiveness of control strategies designed around geo-social interaction patterns. To address this gap, this research proposes a new framework for effective disease control: identify geo-social interaction patterns, design control measures accordingly, and evaluate the efficacy of the different measures. This framework is used to structure the design of a new visual analytic tool that consists of three components: a reorderable matrix for geo-social mixing patterns, agent-based epidemic models, and combined visualization methods. With real-world human interaction data from a French primary school as a proof of concept, this research compares the efficacy of vaccination strategies between spatial-social interaction patterns and the whole area. The simulation results show that locally targeted vaccination has the potential to keep infection to a small number of cases and prevent spread to other regions. With some small probability the local control strategies will fail; in these cases other control strategies will be needed. This research further explores the impact of varying spatial-social scales on the success of local vaccination strategies. The results show that a proper spatial-social scale can help achieve the best control efficacy with a limited number of vaccines. The case study shows how GS-EpiViz supports the design and testing of advanced control scenarios for airborne diseases (e.g., influenza). The geo-social patterns identified through exploring human interaction data can help target critical individuals, locations, and clusters of locations for disease control purposes. Varying spatial-social scales can help geographically and socially prioritize limited resources (e.g., vaccines).
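The vaccination comparison can be prototyped with a toy agent-based model on a contact network. In the sketch below (all assumptions mine), high-degree nodes stand in for the geo-socially targeted individuals, and infection follows a simple SI process with no recovery.

```python
# A toy agent-based comparison (illustrative assumptions throughout) of
# targeted versus random vaccination on a contact network, in the spirit of
# the geo-social strategy described above. SI dynamics: no recovery.
import random
import networkx as nx

def simulate(G, vaccinated, p_infect=0.05, steps=60, seed=1):
    rng = random.Random(seed)
    state = {n: "S" for n in G}
    for n in vaccinated: state[n] = "V"
    patient_zero = next(n for n in G if state[n] == "S")
    state[patient_zero] = "I"
    for _ in range(steps):
        new = [v for u in G if state[u] == "I"
                 for v in G[u] if state[v] == "S" and rng.random() < p_infect]
        for v in new: state[v] = "I"
    return sum(s == "I" for s in state.values())

G = nx.barabasi_albert_graph(300, 3, seed=0)       # stand-in contact network
budget = 30
hubs = sorted(G, key=G.degree, reverse=True)[:budget]  # "targeted" strategy
rand = random.Random(0).sample(list(G), budget)        # baseline strategy
print("targeted:", simulate(G, hubs), " random:", simulate(G, rand))
```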
Keating, Jane J.; Okusanya, Olugbenga T.; De Jesus, Elizabeth; Judy, Ryan; Jiang, Jack; Deshpande, Charuhas; Nie, Shuming; Low, Philip; Singhal, Sunil
2017-01-01
Purpose During lung surgery, identification of surgical margins is challenging. We hypothesized that molecular imaging with a fluorescent probe targeting pulmonary adenocarcinomas could enhance residual tumor during resection. Procedures Mice with flank tumors received a contrast agent targeting folate receptor alpha. The optimal dose and time of injection were established. Margin detection was compared using traditional methods versus molecular imaging. A pilot study was then performed in 3 humans with lung adenocarcinoma. Results The peak tumor-to-background ratio (TBR) of murine tumors was 3.9. Fluorescence peaked at 2 hours and was not improved beyond 0.1 mg/kg. Traditional inspection identified 30% of mice with positive margins. Molecular imaging identified an additional 50% of residual tumor deposits (P<0.05). The fluorescent probe visually enhanced all human tumors with a mean TBR of 3.5. Conclusions Molecular imaging is an important adjunct to traditional inspection for identifying surgical margins after tumor resection. PMID:26228697
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations in human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are present in nonhuman primates, attentional capture by newborn faces has not been tested in these species. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys, using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstration establishes the validity of the dot-probe task for visual attention studies in monkeys and proposes a novel approach to bridging the gap between human and nonhuman primate social cognition research. The findings suggest that attentional capture by newborn faces is not shared by macaques, although it is unclear whether nursing experience influences their perception and recognition of infantile stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Saccadic Corollary Discharge Underlies Stable Visual Perception
Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.
2016-01-01
Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the old world monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647
Estimation of bio-signal based on human motion for integrated visualization of daily-life.
Umetani, Tomohiro; Matsukawa, Tsuyoshi; Yokoyama, Kiyoko
2007-01-01
This paper describes a method for the estimation of bio-signals based on human motion in daily life for an integrated visualization system. Recent advances in computing and measurement technology have facilitated the integrated visualization of bio-signals and human motion data. For such visualization applications, it is desirable to have a method for understanding muscle activity from human motion data and for evaluating how physiological parameters change with human motion. We assume that human motion is generated by muscle activity, which is reflected in bio-signals such as the electromyogram. This paper introduces a method for estimating bio-signals from motion data using neural networks; the same procedure can be used to estimate other physiological parameters. The experimental results show the feasibility of the proposed method.
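To make the neural-network estimation step concrete, here is a minimal sketch under stated assumptions: synthetic motion features stand in for measured joint kinematics, an off-the-shelf scikit-learn MLP stands in for the paper's network, and the EMG envelope is simulated. None of these choices come from the paper itself.

```python
# Minimal sketch of the idea described above: learn a mapping from human
# motion features (e.g., joint angles and angular velocities) to a bio-signal
# envelope (e.g., rectified, smoothed EMG). All names and shapes here are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 2000 time samples, 6 motion features per sample.
motion = rng.normal(size=(2000, 6))
# Pretend the EMG envelope is a smooth nonlinear function of the motion.
emg_envelope = np.tanh(motion @ rng.normal(size=6)) + 0.05 * rng.normal(size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    motion, emg_envelope, test_size=0.25, random_state=0)

# A small feed-forward network, as in the neural-network estimator described.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", model.score(X_test, y_test))
```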
HEDD: Human Enhancer Disease Database
Wang, Zhen; Zhang, Quanwei; Zhang, Wen; Lin, Jhih-Rong; Cai, Ying; Mitra, Joydeep
2018-01-01
Enhancers, as specialized genomic cis-regulatory elements, activate transcription of their target genes and play an important role in the pathogenesis of many human complex diseases. Despite the recent systematic identification of enhancers in the human genome, there is an urgent need for comprehensive annotation databases of human enhancers with a focus on their disease connections. In response, we built the Human Enhancer Disease Database (HEDD) to facilitate studies of enhancers and their potential roles in human complex diseases. HEDD currently provides comprehensive genomic information for ∼2.8 million human enhancers identified by ENCODE, FANTOM5 and Roadmap, with disease-association scores based on enhancer–gene and gene–disease connections. It also provides Web-based analytical tools to visualize enhancer networks and score enhancers given a set of selected genes in a specific gene network. HEDD is freely accessible at http://zdzlab.einstein.yu.edu/1/hedd.php. PMID:29077884
Visual Environments for CFD Research
NASA Technical Reports Server (NTRS)
Watson, Val; George, Michael W. (Technical Monitor)
1994-01-01
This viewgraph presentation gives an overview of visual environments for computational fluid dynamics (CFD) research. It details critical needs for the future computing environment and the features required to attain it, discusses prospects for change in the human-computer interface and the impact of the visualization revolution upon it, and considers human processing capabilities, the limits of the personal environment, and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of alternative approaches to and levels of interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.
The Effects of Context and Attention on Spiking Activity in Human Early Visual Cortex.
Self, Matthew W; Peters, Judith C; Possel, Jessy K; Reithler, Joel; Goebel, Rainer; Ris, Peterjan; Jeurissen, Danique; Reddy, Leila; Claus, Steven; Baayen, Johannes C; Roelfsema, Pieter R
2016-03-01
Here we report the first quantitative analysis of spiking activity in human early visual cortex. We recorded multi-unit activity from two electrodes in area V2/V3 of a human patient implanted with depth electrodes as part of her treatment for epilepsy. We observed well-localized multi-unit receptive fields with tunings for contrast, orientation, spatial frequency, and size, similar to those reported in the macaque. We also observed pronounced gamma oscillations in the local-field potential that could be used to estimate the underlying spiking response properties. Spiking responses were modulated by visual context and attention. We observed orientation-tuned surround suppression: responses were suppressed by image regions with a uniform orientation and enhanced by orientation contrast. Additionally, responses were enhanced on regions that perceptually segregated from the background, indicating that neurons in the human visual cortex are sensitive to figure-ground structure. Spiking responses were also modulated by object-based attention. When the patient mentally traced a curve through the neurons' receptive fields, the accompanying shift of attention enhanced neuronal activity. These results demonstrate that the tuning properties of cells in the human early visual cortex are similar to those in the macaque and that responses can be modulated by both contextual factors and behavioral relevance. Our results, therefore, imply that the macaque visual system is an excellent model for the human visual cortex.
Identification of a pathway for intelligible speech in the left temporal lobe
Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.
2017-01-01
It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443
Mladinich, C.
2010-01-01
Human disturbance is a leading ecosystem stressor. Human-induced modifications include transportation networks, areal disturbances due to resource extraction, and recreation activities. High-resolution imagery and object-oriented classification, rather than pixel-based techniques, have successfully identified roads, buildings, and other anthropogenic features. Three commercial, automated feature-extraction software packages (Visual Learning Systems' Feature Analyst, ENVI Feature Extraction, and Definiens Developer) were evaluated by comparing their ability to effectively detect the disturbed surface patterns from motorized vehicle traffic. Each package achieved overall accuracies in the 70% range, demonstrating the potential to map the surface patterns. The Definiens classification was more consistent and statistically valid. Copyright © 2010 by Bellwether Publishing, Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
Human Factors Assessment of Vibration Effects on Visual Performance During Launch
NASA Technical Reports Server (NTRS)
Holden, Kritina
2009-01-01
The Human Factors Assessment of Vibration Effects on Visual Performance During Launch (Visual Performance) investigation will determine visual performance limits during operational vibration and g-loads on the Space Shuttle, specifically through the determination of minimum readable font size during ascent using planned Orion display formats. Research Summary: The aim of the Human Factors Assessment of Vibration Effects on Visual Performance during Launch (Visual Performance) investigation is to provide supplementary data to that collected by the Thrust Oscillation Seat Detailed Technical Objective (DTO) 695 (Crew Seat DTO), which will measure seat acceleration and vibration from one flight deck and two middeck seats during ascent. While the Crew Seat DTO data alone are important in terms of providing a measure of vibration and g-loading, human performance data are required to fully interpret the operational consequences of the vibration values collected during Space Shuttle ascent. During launch, crewmembers will be requested to view placards with varying font sizes and indicate the minimum readable size. In combination with the Crew Seat DTO, the Visual Performance investigation will: (1) provide flight-validated evidence that will be used to establish vibration limits for visual performance during combined vibration and linear g-loading; (2) provide flight data as inputs to ongoing ground-based simulations, which will further validate crew visual performance under vibration loading in a controlled environment; and (3) provide vibration and performance metrics to help validate procedures for ground tests and analyses of seats, suits, displays and controls, and human-in-the-loop performance.
Parahippocampal and retrosplenial contributions to human spatial navigation
Epstein, Russell A.
2010-01-01
Spatial navigation is a core cognitive ability in humans and animals. Neuroimaging studies have identified two functionally-defined brain regions that activate during navigational tasks and also during passive viewing of navigationally-relevant stimuli such as environmental scenes: the parahippocampal place area (PPA) and the retrosplenial complex (RSC). Recent findings indicate that the PPA and RSC play distinct and complementary roles in spatial navigation, with the PPA more concerned with representation of the local visual scene and RSC more concerned with situating the scene within the broader spatial environment. These findings are a first step towards understanding the separate components of the cortical network that mediates spatial navigation in humans. PMID:18760955
Asymmetric latent semantic indexing for gene expression experiments visualization.
González, Javier; Muñoz, Alberto; Martos, Gabriel
2016-08-01
We propose a new method to visualize gene expression experiments, inspired by the latent semantic indexing technique originally proposed in the context of textual analysis. Using the word–gene, document–experiment correspondence, we define an asymmetric similarity measure of association for genes that accounts for potential hierarchies in the data, the key to obtaining meaningful gene mappings. We use the polar decomposition to obtain the sources of asymmetry of the similarity matrix, which are later combined with prior knowledge. Classes of genes are identified by means of a mixture model applied in the latent space of genes. We describe the steps of the procedure and show its utility on the Human Cancer dataset.
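The polar-decomposition step can be illustrated directly. A minimal sketch, assuming a small synthetic gene-by-gene similarity matrix S (the real method operates on the asymmetric association measure defined above): scipy.linalg.polar factors S into an orthogonal and a symmetric positive-semidefinite part, shown alongside the elementary symmetric/skew split for comparison.

```python
# Sketch of extracting the sources of asymmetry from an asymmetric
# similarity matrix via the polar decomposition, as outlined above.
# The matrix S below is synthetic; sizes and values are illustrative only.
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(1)
S = rng.random((5, 5))          # asymmetric gene-gene similarity matrix

# Polar decomposition: S = U @ P, with U orthogonal and P symmetric PSD.
U, P = polar(S, side="right")
print("orthogonal factor U:\n", np.round(U, 3))
print("symmetric factor P:\n", np.round(P, 3))

# The simplest symmetric/asymmetric split, for comparison:
sym = (S + S.T) / 2             # shared (symmetric) similarity
skew = (S - S.T) / 2            # pure asymmetry (hierarchy-like structure)
assert np.allclose(S, sym + skew)
assert np.allclose(S, U @ P)    # polar factors reproduce S
```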
Overview of Human-Centric Space Situational Awareness Science and Technology
2012-09-01
AGI), the developers of Satellite Tool Kit (STK), has provided demonstrations of innovative SSA visualization concepts that take advantage of the ... needs inherent with SSA. RH has conducted CTAs and developed work-centered human-computer interfaces, visualizations, and collaboration technologies ... all end users. RH's Battlespace Visualization Branch researches methods to exploit the visual channel primarily to improve decision making and
ERIC Educational Resources Information Center
Wilkinson, Krista M.; Light, Janice
2011-01-01
Purpose: Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs.…
Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.
2014-01-01
Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36), however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110
Sood, Mariam R; Sereno, Martin I
2016-08-01
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.
Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint
2017-09-13
GABA is the primary inhibitory neurotransmitter in the human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in the human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.
Tracking the allocation of attention using human pupillary oscillations
Naber, Marnix; Alvarez, George A.; Nakayama, Ken
2013-01-01
The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself as well as identifying the pathways and mechanisms that support this unexpected phenomenon. PMID:24368904
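A hedged sketch of the core PFT measurement, as read from the abstract: tag the stimulus with a slow luminance flicker and read out the pupil-trace amplitude at the tag frequency with an FFT. The frequencies, sampling rate, and synthetic pupil trace below are illustrative assumptions.

```python
# Sketch of pupil frequency tagging (PFT) readout: estimate the amplitude of
# pupil oscillations at the known flicker ("tag") frequency. The pupil trace
# here is synthetic; in practice it would come from an eye tracker.
import numpy as np

fs = 60.0                       # eye-tracker sampling rate (Hz), assumed
tag_freq = 1.2                  # luminance flicker frequency (Hz), assumed
t = np.arange(0, 30, 1 / fs)    # 30 s trial

# Synthetic pupil trace: oscillation at the tag frequency plus noise.
# Attention to the flicker stimulus would increase this amplitude.
pupil = 0.4 * np.sin(2 * np.pi * tag_freq * t) + 0.2 * np.random.randn(t.size)

spectrum = np.fft.rfft(pupil - pupil.mean())
freqs = np.fft.rfftfreq(pupil.size, d=1 / fs)
amps = 2 * np.abs(spectrum) / pupil.size   # single-sided amplitude spectrum

tag_bin = np.argmin(np.abs(freqs - tag_freq))
print(f"pupil amplitude at {freqs[tag_bin]:.2f} Hz: {amps[tag_bin]:.3f}")
```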
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image by a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation on targets and creates robust and effective depictions of targets across changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are used to train the model directly from eye-movement data. The results show that the model outperforms a standard visual-attention baseline. Moreover, they indicate the plausibility of using eye-tracking data to identify targets.
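The 2D-Viterbi step builds on the classical one-dimensional Viterbi recursion, sketched below for a small HMM; the 2D extension in the paper runs such recursions over image rows and columns. Everything here (state count, transition and emission values) is a toy assumption to show the primitive, not the paper's implementation.

```python
# Classical Viterbi decoding for a small HMM: find the most likely hidden
# state sequence given observations. Toy transition/emission values.
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """obs: observation indices; log_pi: (S,) initial log-probs;
    log_A: (S, S) transition log-probs; log_B: (S, O) emission log-probs."""
    S = log_pi.size
    T = len(obs)
    delta = np.empty((T, S))           # best log-prob of any path ending in s
    psi = np.empty((T, S), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A       # (from, to)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Two hidden states ("background", "salient"), three observation symbols.
log_pi = np.log([0.7, 0.3])
log_A = np.log([[0.8, 0.2], [0.3, 0.7]])
log_B = np.log([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2, 0], log_pi, log_A, log_B))
```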
Evolutionary relevance facilitates visual information processing.
Jackson, Russell E; Calvillo, Dusti P
2013-11-03
Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.
Relating Standardized Visual Perception Measures to Simulator Visual System Performance
NASA Technical Reports Server (NTRS)
Kaiser, Mary K.; Sweet, Barbara T.
2013-01-01
Human vision is quantified through the use of standardized clinical vision measurements. These measurements typically include visual acuity (near and far), contrast sensitivity, color vision, stereopsis (a.k.a. stereo acuity), and visual field periphery. Simulator visual system performance is specified in terms such as brightness, contrast, color depth, color gamut, gamma, resolution, and field-of-view. How do these simulator performance characteristics relate to the perceptual experience of the pilot in the simulator? In this paper, visual acuity and contrast sensitivity will be related to simulator visual system resolution, contrast, and dynamic range; similarly, color vision will be related to color depth/color gamut. Finally, we will consider how some characteristics of human vision not typically included in current clinical assessments could be used to better inform simulator requirements (e.g., relating dynamic characteristics of human vision to update rate and other temporal display characteristics).
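As a worked example of relating acuity to display resolution (these numbers are mine, not the paper's): 20/20 visual acuity corresponds to resolving detail of about 1 arcminute, i.e., roughly 30 cycles per degree, so a display needs on the order of 60 pixels per degree before pixel structure stops limiting a 20/20 observer. The sketch below computes pixels per degree from assumed display geometry.

```python
# Worked example: does a simulator display out-resolve a 20/20 observer?
# Display parameters below are illustrative assumptions.
import math

horizontal_pixels = 1920
screen_width_m = 1.2
viewing_distance_m = 1.5

# Horizontal field of view subtended by the screen, in degrees.
fov_deg = 2 * math.degrees(math.atan((screen_width_m / 2) / viewing_distance_m))
ppd = horizontal_pixels / fov_deg   # average pixels per degree

# 20/20 acuity ~ 30 cycles/deg; Nyquist sampling requires ~60 pixels/deg.
required_ppd = 60.0
print(f"FOV: {fov_deg:.1f} deg, {ppd:.1f} px/deg "
      f"({'meets' if ppd >= required_ppd else 'below'} the ~{required_ppd:.0f} px/deg target)")
```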
Art, Illusion and the Visual System.
ERIC Educational Resources Information Center
Livingstone, Margaret S.
1988-01-01
Describes the three part system of human vision. Explores the anatomical arrangement of the vision system from the eyes to the brain. Traces the path of various visual signals to their interpretations by the brain. Discusses human visual perception and its implications in art and design. (CW)
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
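To make the abstract's central quantity concrete, here is a hedged sketch of measuring how well a neural population separates two images, using a simple normalized distance between firing-rate vectors (synthetic data; the paper's exact discriminability index may differ). Search efficiency would be expected to increase with this distance.

```python
# Sketch: discriminability of two images from a neural population, computed
# as a normalized distance between mean response vectors. Synthetic data;
# larger distances should predict easier (faster) oddball search.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 100, 50

# Mean firing-rate patterns for two images with partly overlapping "footprints".
mean_a = rng.gamma(shape=2.0, scale=5.0, size=n_neurons)
mean_b = 0.6 * mean_a + 0.4 * rng.gamma(shape=2.0, scale=5.0, size=n_neurons)

# Trial-by-trial responses with Poisson-like variability.
resp_a = rng.poisson(mean_a, size=(n_trials, n_neurons))
resp_b = rng.poisson(mean_b, size=(n_trials, n_neurons))

diff = resp_a.mean(axis=0) - resp_b.mean(axis=0)
pooled_sd = np.sqrt((resp_a.var(axis=0) + resp_b.var(axis=0)) / 2)
d_prime = np.linalg.norm(diff / (pooled_sd + 1e-9)) / np.sqrt(n_neurons)
print(f"population discriminability (d'-like index): {d_prime:.2f}")
```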
Attention Priority Map of Face Images in Human Early Visual Cortex.
Mo, Ce; He, Dongjun; Fang, Fang
2018-01-03
Attention priority maps are topographic representations that are used for attention selection and guidance of task-related behavior during visual processing. Previous studies have identified attention priority maps of simple artificial stimuli in multiple cortical and subcortical areas, but investigating neural correlates of priority maps of natural stimuli is complicated by the complexity of their spatial structure and the difficulty of behaviorally characterizing their priority map. To overcome these challenges, we reconstructed the topographic representations of upright/inverted face images from fMRI BOLD signals in human early visual areas primary visual cortex (V1) and the extrastriate cortex (V2 and V3) based on a voxelwise population receptive field model. We characterized the priority map behaviorally as the first saccadic eye movement pattern when subjects performed a face-matching task relative to the condition in which subjects performed a phase-scrambled face-matching task. We found that the differential first saccadic eye movement pattern between upright/inverted and scrambled faces could be predicted from the reconstructed topographic representations in V1-V3 in humans of either sex. The coupling between the reconstructed representation and the eye movement pattern increased from V1 to V2/3 for the upright faces, whereas no such effect was found for the inverted faces. Moreover, face inversion modulated the coupling in V2/3, but not in V1. Our findings provide new evidence for priority maps of natural stimuli in early visual areas and extend traditional attention priority map theories by revealing another critical factor that affects priority maps in extrastriate cortex in addition to physical salience and task goal relevance: image configuration. SIGNIFICANCE STATEMENT Prominent theories of attention posit that attention sampling of visual information is mediated by a series of interacting topographic representations of visual space known as attention priority maps. Until now, neural evidence of attention priority maps has been limited to studies involving simple artificial stimuli and much remains unknown about the neural correlates of priority maps of natural stimuli. Here, we show that attention priority maps of face stimuli could be found in primary visual cortex (V1) and the extrastriate cortex (V2 and V3). Moreover, representations in extrastriate visual areas are strongly modulated by image configuration. These findings extend our understanding of attention priority maps significantly by showing that they are modulated, not only by physical salience and task-goal relevance, but also by the configuration of stimuli images. Copyright © 2018 the authors 0270-6474/18/380149-09$15.00/0.
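The abstract's reconstruction step can be illustrated in a few lines: given each voxel's Gaussian population receptive field (center, size) and its response amplitude, a topographic representation is rebuilt by summing response-weighted pRFs over visual space. The pRF parameters and responses below are synthetic assumptions, not the study's data.

```python
# Sketch: reconstruct a topographic stimulus representation from voxelwise
# population receptive fields (pRFs). Each voxel contributes its Gaussian
# pRF weighted by its measured response. All values synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 300

# Voxel pRF centers (deg of visual angle), sizes, and BOLD response betas.
x0 = rng.uniform(-8, 8, n_voxels)
y0 = rng.uniform(-8, 8, n_voxels)
sigma = rng.uniform(0.5, 2.0, n_voxels)
beta = np.exp(-((x0 - 2) ** 2 + y0 ** 2) / 8) + 0.1 * rng.random(n_voxels)

# Evaluate the weighted sum of Gaussians on a grid covering visual space.
grid = np.linspace(-8, 8, 81)
gx, gy = np.meshgrid(grid, grid)
recon = np.zeros_like(gx)
for i in range(n_voxels):
    recon += beta[i] * np.exp(
        -((gx - x0[i]) ** 2 + (gy - y0[i]) ** 2) / (2 * sigma[i] ** 2))

peak = np.unravel_index(recon.argmax(), recon.shape)
print(f"reconstruction peak near x={grid[peak[1]]:.1f}, y={grid[peak[0]]:.1f} deg")
```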
Animated analysis of geoscientific datasets: An interactive graphical application
NASA Astrophysics Data System (ADS)
Morse, Peter; Reading, Anya; Lueg, Christopher
2017-12-01
Geoscientists are required to analyze and draw conclusions from increasingly large volumes of data. There is a need to recognise and characterise features and changing patterns of Earth observables within such large datasets. It is also necessary to identify significant subsets of the data for more detailed analysis. We present an innovative, interactive software tool and workflow to visualise, characterise, sample and tag large geoscientific datasets from both local and cloud-based repositories. It uses an animated interface and human-computer interaction to utilise the capacity of human expert observers to identify features via enhanced visual analytics. 'Tagger' enables users to analyze datasets that are too large in volume to be drawn legibly on a reasonable number of single static plots. Users interact with the moving graphical display, tagging data ranges of interest for subsequent attention. The tool provides a rapid pre-pass process using fast GPU-based OpenGL graphics and data-handling and is coded in the Quartz Composer visual programming language (VPL) on Mac OS X. It makes use of interoperable data formats, and cloud-based (or local) data storage and compute. In a case study, Tagger was used to characterise a decade (2000-2009) of data recorded by the Cape Sorell Waverider Buoy, located approximately 10 km off the west coast of Tasmania, Australia. These data serve as a proxy for the understanding of Southern Ocean storminess, which has both local and global implications. This example shows use of the tool to identify and characterise 4 different types of storm and non-storm events during this time. Events characterised in this way are compared with conventional analysis, noting advantages and limitations of data analysis using animation and human interaction. Tagger provides a new ability to make use of humans as feature detectors in computer-based analysis of large-volume geoscience and other data.
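A minimal sketch of the interaction pattern Tagger implements, using matplotlib's SpanSelector to tag ranges of a time series for later analysis. This is an illustrative stand-in, not the Quartz Composer implementation, and the synthetic buoy proxy is an assumption.

```python
# Sketch of "tagging" ranges of a large time series by dragging over a
# zoomable plot, in the spirit of the Tagger workflow described above.
# Uses matplotlib as an illustrative stand-in for Quartz Composer.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import SpanSelector

rng = np.random.default_rng(4)
t = np.arange(0, 3600.0, 0.5)                              # 1 h of 2 Hz samples
wave_height = 2 + np.cumsum(rng.normal(0, 0.01, t.size))   # synthetic buoy proxy

tags = []  # (t_start, t_end) ranges marked by the human observer

def on_select(t_min, t_max):
    tags.append((t_min, t_max))
    print(f"tagged event: {t_min:.1f}-{t_max:.1f} s ({len(tags)} total)")

fig, ax = plt.subplots()
ax.plot(t, wave_height, lw=0.5)
ax.set(xlabel="time (s)", ylabel="wave height proxy")
selector = SpanSelector(ax, on_select, "horizontal")  # drag to tag a range
plt.show()
```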
Linking pain and the body: neural correlates of visually induced analgesia.
Longo, Matthew R; Iannetti, Gian Domenico; Mancini, Flavia; Driver, Jon; Haggard, Patrick
2012-02-22
The visual context of seeing the body can reduce the experience of acute pain, producing a multisensory analgesia. Here we investigated the neural correlates of this "visually induced analgesia" using fMRI. We induced acute pain with an infrared laser while human participants looked either at their stimulated right hand or at another object. Behavioral results confirmed the expected analgesic effect of seeing the body, while fMRI results revealed an associated reduction of laser-induced activity in ipsilateral primary somatosensory cortex (SI) and contralateral operculoinsular cortex during the visual context of seeing the body. We further identified two known cortical networks activated by sensory stimulation: (1) a set of brain areas consistently activated by painful stimuli (the so-called "pain matrix"), and (2) an extensive set of posterior brain areas activated by the visual perception of the body ("visual body network"). Connectivity analyses via psychophysiological interactions revealed that the visual context of seeing the body increased effective connectivity (i.e., functional coupling) between posterior parietal nodes of the visual body network and the purported pain matrix. Increased connectivity with these posterior parietal nodes was seen for several pain-related regions, including somatosensory area SII, anterior and posterior insula, and anterior cingulate cortex. These findings suggest that visually induced analgesia does not involve an overall reduction of the cortical response elicited by laser stimulation, but is consequent to the interplay between the brain's pain network and a posterior network for body perception, resulting in modulation of the experience of pain.
Degraded attentional modulation of cortical neural populations in strabismic amblyopia
Hou, Chuan; Kim, Yee-Joon; Lai, Xin Jie; Verghese, Preeti
2016-01-01
Behavioral studies have reported reduced spatial attention in amblyopia, a developmental disorder of spatial vision. However, the neural populations in the visual cortex linked with these behavioral spatial attention deficits have not been identified. Here, we use functional MRI–informed electroencephalography source imaging to measure the effect of attention on neural population activity in the visual cortex of human adult strabismic amblyopes who were stereoblind. We show that compared with controls, the modulatory effects of selective visual attention on the input from the amblyopic eye are substantially reduced in the primary visual cortex (V1) as well as in extrastriate visual areas hV4 and hMT+. Degraded attentional modulation is also found in the normal-acuity fellow eye in areas hV4 and hMT+ but not in V1. These results provide electrophysiological evidence that abnormal binocular input during a developmental critical period may impact cortical connections between the visual cortex and higher level cortices beyond the known amblyopic losses in V1 and V2, suggesting that a deficit of attentional modulation in the visual cortex is an important component of the functional impairment in amblyopia. Furthermore, we find that degraded attentional modulation in V1 is correlated with the magnitude of interocular suppression and the depth of amblyopia. These results support the view that the visual suppression often seen in strabismic amblyopia might be a form of attentional neglect of the visual input to the amblyopic eye. PMID:26885628
Analysis of Craniocardiac Malformations in Xenopus using Optical Coherence Tomography
Deniz, Engin; Jonas, Stephan; Hooper, Michael; Griffin, John N.; Choma, Michael A.; Khokha, Mustafa K.
2017-01-01
Birth defects affect 3% of children in the United States. Among the birth defects, congenital heart disease and craniofacial malformations are major causes of mortality and morbidity. Unfortunately, the genetic mechanisms underlying craniocardiac malformations remain largely uncharacterized. To address this, human genomic studies are identifying sequence variations in patients, resulting in numerous candidate genes. However, the molecular mechanisms of pathogenesis for most candidate genes are unknown. Therefore, there is a need for functional analyses in rapid and efficient animal models of human disease. Here, we coupled the frog Xenopus tropicalis with Optical Coherence Tomography (OCT) to create a fast and efficient system for testing craniocardiac candidate genes. OCT can image cross-sections of microscopic structures in vivo at resolutions approaching histology. Here, we identify optimal OCT imaging planes to visualize and quantitate Xenopus heart and facial structures establishing normative data. Next we evaluate known human congenital heart diseases: cardiomyopathy and heterotaxy. Finally, we examine craniofacial defects by a known human teratogen, cyclopamine. We recapitulate human phenotypes readily and quantify the functional and structural defects. Using this approach, we can quickly test human craniocardiac candidate genes for phenocopy as a critical first step towards understanding disease mechanisms of the candidate genes. PMID:28195132
Spatial organization of neurons in the frontal pole sets humans apart from great apes.
Semendeferi, Katerina; Teffer, Kate; Buxhoeveden, Dan P; Park, Min S; Bludau, Sebastian; Amunts, Katrin; Travis, Katie; Buckwalter, Joseph
2011-07-01
Few morphological differences have been identified so far that distinguish the human brain from the brains of our closest relatives, the apes. Comparative analyses of the spatial organization of cortical neurons, including minicolumns, can aid our understanding of the functionally relevant aspects of microcircuitry. We measured horizontal spacing distance and gray-level ratio in layer III of 4 regions of human and ape cortex in all 6 living hominoid species: frontal pole (Brodmann area [BA] 10), and primary motor (BA 4), primary somatosensory (BA 3), and primary visual cortex (BA 17). Our results identified significant differences between humans and apes in the frontal pole (BA 10). Within the human brain, there were also significant differences between the frontal pole and 2 of the 3 regions studied (BA 3 and BA 17). Differences between BA 10 and BA 4 were present but did not reach significance. These findings in combination with earlier findings on BA 44 and BA 45 suggest that human brain evolution was likely characterized by an increase in the number and width of minicolumns and the space available for interconnectivity between neurons in the frontal lobe, especially the prefrontal cortex.
Human Factors Engineering Program Review Model
2004-02-01
Institute, 1993). ANSI HFS-100-1988: American National Standard for Human Factors Engineering of Visual Display Terminal Workstations. Santa Monica, California
Audiovisual Temporal Processing and Synchrony Perception in the Rat.
Schormans, Ashley L; Scott, Kaela E; Vo, Albert M Q; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L
2016-01-01
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contributes to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats ( n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats ( n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference (JND)) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20-40 ms. Ultimately, given that our behavioral and electrophysiological results were consistent with studies conducted on human participants and previous recordings made in multisensory brain regions of different species, we suggest that the rat represents an effective model for studying audiovisual temporal synchrony at both the neuronal and perceptual level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.
Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
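A hedged sketch of the kind of pipeline the claim describes: process two batches of documents arriving at different times, then identify cross-batch associations via TF-IDF cosine similarity. The vectorizer, corpus, and threshold are assumptions for illustration, not the patented method.

```python
# Sketch: associate documents from an initial batch with documents from a
# later batch using TF-IDF cosine similarity, then report the associations
# that a visualization step would depict. Corpus and threshold illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

initial_docs = ["wave buoy storm data", "ocean storm tracking report",
                "annual budget summary"]
additional_docs = ["storm surge measurements from buoy network",
                   "budget planning for next year"]

# Fit the vocabulary on the initial batch, then transform both batches,
# mirroring "processing" each batch into a comparable representation.
vec = TfidfVectorizer()
initial_X = vec.fit_transform(initial_docs)
additional_X = vec.transform(additional_docs)

sims = cosine_similarity(additional_X, initial_X)
for j, row in enumerate(sims):
    for i, s in enumerate(row):
        if s > 0.2:  # association threshold (assumed)
            print(f"additional[{j}] <-> initial[{i}]  similarity={s:.2f}")
```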
Digital Images and Human Vision
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)
1997-01-01
Processing of digital images destined for visual consumption raises many interesting questions regarding human visual sensitivity. This talk will survey some of these questions, including some that have been answered and some that have not. There will be an emphasis upon visual masking, and a distinction will be drawn between masking due to contrast gain control processes, and due to processes such as hypothesis testing, pattern recognition, and visual search.
Toxoplasma Modulates Signature Pathways of Human Epilepsy, Neurodegeneration & Cancer.
Ngô, Huân M; Zhou, Ying; Lorenzi, Hernan; Wang, Kai; Kim, Taek-Kyun; Zhou, Yong; El Bissati, Kamal; Mui, Ernest; Fraczek, Laura; Rajagopala, Seesandra V; Roberts, Craig W; Henriquez, Fiona L; Montpetit, Alexandre; Blackwell, Jenefer M; Jamieson, Sarra E; Wheeler, Kelsey; Begeman, Ian J; Naranjo-Galvis, Carlos; Alliey-Rodriguez, Ney; Davis, Roderick G; Soroceanu, Liliana; Cobbs, Charles; Steindler, Dennis A; Boyer, Kenneth; Noble, A Gwendolyn; Swisher, Charles N; Heydemann, Peter T; Rabiah, Peter; Withers, Shawn; Soteropoulos, Patricia; Hood, Leroy; McLeod, Rima
2017-09-13
One third of humans are infected lifelong with the brain-dwelling, protozoan parasite, Toxoplasma gondii. Approximately fifteen million of these have congenital toxoplasmosis. Although neurobehavioral disease is associated with seropositivity, causality is unproven. To better understand what this parasite does to human brains, we performed a comprehensive systems analysis of the infected brain: We identified susceptibility genes for congenital toxoplasmosis in our cohort of infected humans and found these genes are expressed in human brain. Transcriptomic and quantitative proteomic analyses of infected human, primary, neuronal stem and monocytic cells revealed effects on neurodevelopment and plasticity in neural, immune, and endocrine networks. These findings were supported by identification of protein and miRNA biomarkers in sera of ill children reflecting brain damage and T. gondii infection. These data were deconvoluted using three systems biology approaches: "Orbital-deconvolution" elucidated upstream, regulatory pathways interconnecting human susceptibility genes, biomarkers, proteomes, and transcriptomes. "Cluster-deconvolution" revealed visual protein-protein interaction clusters involved in processes affecting brain functions and circuitry, including lipid metabolism, leukocyte migration and olfaction. Finally, "disease-deconvolution" identified associations between the parasite-brain interactions and epilepsy, movement disorders, Alzheimer's disease, and cancer. This "reconstruction-deconvolution" logic provides templates of progenitor cells' potentiating effects, and components affecting human brain parasitism and diseases.
Resolving the organization of the third tier visual cortex in primates: a hypothesis-based approach.
Angelucci, Alessandra; Rosa, Marcello G P
2015-01-01
As highlighted by several contributions to this special issue, there is still ongoing debate about the number, exact location, and boundaries of the visual areas located in cortex immediately rostral to the second visual area (V2), i.e., the "third tier" visual cortex, in primates. In this review, we provide a historical overview of the main ideas that have led to four models of third tier cortex organization, which are at the center of today's debate. We formulate specific predictions of these models, and compare these predictions with experimental evidence obtained primarily in New World primates. From this analysis, we conclude that only one of these models (the "multiple-areas" model) can accommodate the breadth of available experimental evidence. According to this model, most of the third tier cortex in New World primates is occupied by two distinct areas, both representing the full contralateral visual quadrant: the dorsomedial area (DM), restricted to the dorsal half of the third visual complex, and the ventrolateral posterior area (VLP), occupying its ventral half and a substantial fraction of its dorsal half. DM belongs to the dorsal stream of visual processing, and overlaps with macaque parietooccipital (PO) area (or V6), whereas VLP belongs to the ventral stream and overlaps considerably with area V3 proposed by others. In contrast, there is substantial evidence that is inconsistent with the concept of a single elongated area V3 lining much of V2. We also review the experimental evidence from macaque monkey and humans, and propose that, once the data are interpreted within an evolutionary-developmental context, these species share a homologous (but not necessarily identical) organization of the third tier cortex as that observed in New World monkeys. Finally, we identify outstanding issues, and propose experiments to resolve them, highlighting in particular the need for more extensive, hypothesis-driven investigations in macaque and humans.
A comparative psychophysical approach to visual perception in primates.
Matsuno, Toyomi; Fujita, Kazuo
2009-04-01
Studies on the visual processing of primates, which have well developed visual systems, provide essential information about the perceptual bases of their higher-order cognitive abilities. Although the mechanisms underlying visual processing are largely shared between human and nonhuman primates, differences have also been reported. In this article, we review psychophysical investigations comparing the basic visual processing that operates in human and nonhuman species, and discuss the future contributions potentially deriving from such comparative psychophysical approaches to primate minds.
Man-machine interface requirements - advanced technology
NASA Technical Reports Server (NTRS)
Remington, R. W.; Wiener, E. L.
1984-01-01
Research issues and areas are identified where increased understanding of the human operator and the interaction between the operator and the avionics could lead to improvements in the performance of current and proposed helicopters. Both current and advanced helicopter systems and avionics are considered. Areas critical to man-machine interface requirements include: (1) artificial intelligence; (2) visual displays; (3) voice technology; (4) cockpit integration; and (5) pilot work loads and performance.
Causes and Prevention of Laparoscopic Bile Duct Injuries
Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.
2003-01-01
Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139
Neuronal and oscillatory activity during reward processing in the human ventral striatum.
Lega, Bradley C; Kahana, Michael J; Jaggi, Jurg; Baltuch, Gordon H; Zaghloul, Kareem
2011-11-16
Accumulated evidence from animal studies implicates the ventral striatum in the processing of reward information. Recently, deep brain stimulation (DBS) surgery has enabled researchers to analyze neurophysiological recordings from humans engaged in reward tasks. We present data recorded from the human ventral striatum during deep brain stimulation surgery as a participant played a video game coupled to the receipt of visual reward images. To our knowledge, we identify the first instances of reward-sensitive single unit activity in the human ventral striatum. Local field potential data suggest that alpha oscillations are sensitive to positive feedback, whereas beta oscillations exhibit significantly higher power during unrewarded trials. We report evidence of alpha-gamma cross-frequency coupling that differentiates between positive and negative feedback.
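As a rough illustration of the kind of analysis summarized above, the sketch below compares band-limited LFP power between rewarded and unrewarded trials. The sampling rate, band edges, and stand-in data are assumptions for illustration, not details taken from the study.

```python
# Sketch: compare band-limited LFP power between rewarded and unrewarded
# trials, in the spirit of the alpha/beta findings described above.
# All parameters and the random stand-in data are illustrative.
import numpy as np
from scipy.signal import welch

FS = 1000  # sampling rate (Hz), assumed

def band_power(trial, fs, lo, hi):
    """Mean power spectral density of one LFP trial within [lo, hi] Hz."""
    freqs, psd = welch(trial, fs=fs, nperseg=min(len(trial), fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

rng = np.random.default_rng(0)
rewarded = rng.standard_normal((40, 2000))    # 40 trials x 2 s, stand-in data
unrewarded = rng.standard_normal((40, 2000))

for name, (lo, hi) in {"alpha": (8, 12), "beta": (15, 30)}.items():
    p_rew = np.mean([band_power(t, FS, lo, hi) for t in rewarded])
    p_unr = np.mean([band_power(t, FS, lo, hi) for t in unrewarded])
    print(f"{name}: rewarded={p_rew:.4f}, unrewarded={p_unr:.4f}")
```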
Schwaibold, M; Schöller, B; Penzel, T; Bolz, A
2001-05-01
We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It comprises a number of interacting components that imitate the stepwise approach of the human expert, together with artificial intelligence components. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-making process and the flexibility to expand the system to cover new patterns and criteria.
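The record above describes a two-stage architecture: pattern detectors feeding a rule interpreter. A minimal sketch of that general pattern follows; the feature definitions, thresholds, and rules are illustrative placeholders, since the actual ARTISANA networks and neuro-fuzzy system are not specified here.

```python
# Minimal sketch of a two-stage "detect patterns, then apply rules" sleep
# stager, loosely mirroring the architecture described above. The real
# ARTISANA uses trained neural networks and a neuro-fuzzy rule system;
# the thresholds and rules below are illustrative placeholders only.
import numpy as np

def epoch_features(eeg_1s, fs=100):
    """Crude 1-s features: delta-band power fraction and amplitude range."""
    spectrum = np.abs(np.fft.rfft(eeg_1s)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_1s), d=1.0 / fs)
    delta = spectrum[(freqs >= 0.5) & (freqs <= 4)].sum() / (spectrum.sum() + 1e-12)
    return {"delta_fraction": delta, "amplitude": np.ptp(eeg_1s)}

def detect_patterns(features):
    """Stage 1: stand-ins for the neural-network pattern detectors."""
    return {
        "delta_activity": features["delta_fraction"] > 0.5,
        "high_amplitude": features["amplitude"] > 75.0,  # microvolts, assumed
    }

def score_epoch(patterns):
    """Stage 2: toy rule interpretation (placeholder for the neuro-fuzzy
    system; real R&K scoring uses 30-s epochs and far richer context)."""
    if patterns["delta_activity"] and patterns["high_amplitude"]:
        return "slow-wave sleep"
    return "other"

rng = np.random.default_rng(1)
eeg = rng.normal(scale=40.0, size=100)  # one fake 1-s epoch at 100 Hz
print(score_epoch(detect_patterns(epoch_features(eeg))))
```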
Visualization of aging-associated chromatin alterations with an engineered TALE system
Ren, Ruotong; Deng, Liping; Xue, Yanhong; Suzuki, Keiichiro; Zhang, Weiqi; Yu, Yang; Wu, Jun; Sun, Liang; Gong, Xiaojun; Luan, Huiqin; Yang, Fan; Ju, Zhenyu; Ren, Xiaoqing; Wang, Si; Tang, Hong; Geng, Lingling; Zhang, Weizhou; Li, Jian; Qiao, Jie; Xu, Tao; Qu, Jing; Liu, Guang-Hui
2017-01-01
Visualization of specific genomic loci in live cells is a prerequisite for the investigation of dynamic changes in chromatin architecture during diverse biological processes, such as cellular aging. However, current precision genomic imaging methods are hampered by the lack of fluorescent probes with high specificity and signal-to-noise contrast. We find that conventional transcription activator-like effectors (TALEs) tend to form protein aggregates, thereby compromising their performance in imaging applications. Through screening, we found that fusing thioredoxin with TALEs prevented aggregate formation, unlocking the full power of TALE-based genomic imaging. Using thioredoxin-fused TALEs (TTALEs), we achieved high-quality imaging at various genomic loci and observed aging-associated (epi)genomic alterations at telomeres and centromeres in human and mouse premature aging models. Importantly, we identified attrition of ribosomal DNA repeats as a molecular marker for human aging. Our study establishes a simple and robust imaging method for precisely monitoring chromatin dynamics in vitro and in vivo. PMID:28139645
Reategui, Camille; Costa, Bruna Karen de Sousa; da Fonseca, Caio Queiroz; da Silva, Luana; Morya, Edgard
2017-01-01
Autism spectrum disorder (ASD) is a neuropsychiatric disorder characterized by impairment in social reciprocity, interaction/language, and behavior, with stereotypies and signs of sensory function deficits. Electroencephalography (EEG) is a well-established and noninvasive tool for neurophysiological characterization and monitoring of the brain's electrical activity, able to identify abnormalities related to frequency range, connectivity, and lateralization of brain functions. This research aims to demonstrate quantitative differences in the frequency spectrum pattern between EEG signals of children with and without ASD during visualization of human faces in three different expressions: neutral, happy, and angry. Quantitative clinical evaluations, neuropsychological evaluation, and EEG of children with and without ASD were analyzed, matched by age and gender. The results showed stronger activation at higher frequencies (above 30 Hz) in frontal, central, parietal, and occipital regions in the ASD group. This pattern of activation may correlate with developmental characteristics in children with ASD. PMID:29018811
Simultaneous chromatic and luminance human electroretinogram responses.
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-07-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing.
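The compound-stimulus logic above (chromatic modulation at f, luminance at 2f, read out as the first and second ERG harmonics) can be illustrated with a short harmonic-analysis sketch; the frequencies and the synthetic signal are assumptions, not the study's recordings.

```python
# Sketch: estimate the first- and second-harmonic amplitudes of an averaged
# ERG record, mirroring the compound-stimulus logic above (chromatic
# modulation at f, luminance modulation at 2f). The toy signal mixes a
# "chromatic" component at f with a "luminance" component at 2f plus noise.
import numpy as np

def harmonic_amplitude(signal, fs, freq):
    """Single-sided amplitude of the Fourier component nearest `freq` (Hz)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n * 2.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - freq))])

fs, f_chroma = 1000.0, 4.0          # sampling rate and chromatic frequency
t = np.arange(0, 2.0, 1.0 / fs)     # 2 s of data -> integer stimulus cycles
erg = (1.0 * np.sin(2 * np.pi * f_chroma * t)
       + 0.5 * np.sin(2 * np.pi * 2 * f_chroma * t)
       + 0.1 * np.random.default_rng(2).standard_normal(t.size))

print("chromatic (1st harmonic):", round(harmonic_amplitude(erg, fs, f_chroma), 3))
print("luminance (2nd harmonic):", round(harmonic_amplitude(erg, fs, 2 * f_chroma), 3))
```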
Stocco, Andrea; Prat, Chantel S; Losey, Darby M; Cronin, Jeneva A; Wu, Joseph; Abernethy, Justin A; Rao, Rajesh P N
2015-01-01
We present, to our knowledge, the first demonstration that a non-invasive brain-to-brain interface (BBI) can be used to allow one human to guess what is on the mind of another human through an interactive question-and-answering paradigm similar to the "20 Questions" game. As in previous non-invasive BBI studies in humans, our interface uses electroencephalography (EEG) to detect specific patterns of brain activity from one participant (the "respondent"), and transcranial magnetic stimulation (TMS) to deliver functionally-relevant information to the brain of a second participant (the "inquirer"). Our results extend previous BBI research by (1) using stimulation of the visual cortex to convey visual stimuli that are privately experienced and consciously perceived by the inquirer; (2) exploiting real-time rather than off-line communication of information from one brain to another; and (3) employing an interactive task, in which the inquirer and respondent must exchange information bi-directionally to collaboratively solve the task. The results demonstrate that using the BBI, ten participants (five inquirer-respondent pairs) can successfully identify a "mystery item" using a true/false question-answering protocol similar to the "20 Questions" game, with high levels of accuracy that are significantly greater than a control condition in which participants were connected through a sham BBI.
Paintings, photographs, and computer graphics are calculated appearances
NASA Astrophysics Data System (ADS)
McCann, John
2012-03-01
Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.
Characterization of Visual Scanning Patterns in Air Traffic Control
McClung, Sarah N.; Kang, Ziho
2016-01-01
Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process. PMID:27239190
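A minimal sketch of one way to filter a raw fixation sequence into a simplified scanpath follows; the mapping-to-nearest-aircraft rule and the minimum-dwell filter are illustrative stand-ins for the filtering concepts the authors developed, not their actual procedure.

```python
# Sketch: reduce a raw fixation sequence to a simplified scanpath by mapping
# fixations to the nearest aircraft and merging consecutive repeats. The
# "min_dwell" threshold stands in for the paper's filtering-intensity idea;
# the study's exact concepts and parameters may differ.
from itertools import groupby

def simplify_scanpath(fixations, aircraft, min_dwell=0.15):
    """fixations: list of (x, y, duration_s); aircraft: dict name -> (x, y)."""
    labeled = []
    for x, y, dur in fixations:
        if dur < min_dwell:            # drop very short fixations
            continue
        name = min(aircraft, key=lambda a: (aircraft[a][0] - x) ** 2
                                         + (aircraft[a][1] - y) ** 2)
        labeled.append(name)
    # merge consecutive fixations on the same aircraft into one visit
    return [name for name, _ in groupby(labeled)]

aircraft = {"AAL12": (100, 200), "UAL7": (400, 150), "DAL3": (250, 420)}
fixations = [(105, 195, 0.3), (110, 210, 0.2), (395, 160, 0.05),
             (402, 148, 0.4), (255, 415, 0.25), (98, 205, 0.3)]
print(simplify_scanpath(fixations, aircraft))  # ['AAL12', 'UAL7', 'DAL3', 'AAL12']
```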
Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location
Kanwisher, Nancy
2012-01-01
The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434
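The cross-position decoding logic described above can be sketched as follows: train a pattern classifier in one fixation condition and test whether it transfers when the retinal position is matched or mismatched. The synthetic voxel patterns and the classifier choice are assumptions for illustration, not the study's pipeline.

```python
# Sketch of cross-position decoding: under a retinotopic code, a classifier
# trained on position-specific patterns transfers only when the *retinal*
# position is matched. Data are synthetic stand-ins for voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_voxels, n_trials = 50, 80
signal = rng.standard_normal((2, n_voxels))      # one pattern per retinal position

def make_patterns(retinal_labels):
    return signal[retinal_labels] + 0.8 * rng.standard_normal((n_trials, n_voxels))

y = rng.integers(0, 2, n_trials)                 # retinal position labels
clf = LinearSVC(C=1.0, max_iter=10000).fit(make_patterns(y), y)

y_test = rng.integers(0, 2, n_trials)
same_retinal = clf.score(make_patterns(y_test), y_test)
# After a gaze shift, the same *screen* position maps to the opposite
# *retinal* position; a retinotopic code then predicts poor transfer.
diff_retinal = clf.score(make_patterns(1 - y_test), y_test)
print(f"matched retinal: {same_retinal:.2f}, mismatched: {diff_retinal:.2f}")
```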
Gender differences in identifying emotions from auditory and visual stimuli.
Waaramaa, Teija
2017-12-01
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples, and prolonged vowels were investigated. The study also examined whether auditory stimuli alone can convey the emotional content of speech without visual stimuli, and whether visual stimuli alone can do so without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by a shared native language between speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. Both genders recognized emotions better from visual than from auditory stimuli. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.
Synthesizing 3D Surfaces from Parameterized Strip Charts
NASA Technical Reports Server (NTRS)
Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri
2004-01-01
We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
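A minimal sketch of the core construction, stacking parameterized strip charts into a 3D surface, follows; the data and the enumerated third dimension are illustrative stand-ins, not the NASA datasets described above.

```python
# Sketch: stack parameterized strip charts (one time series per unit in an
# enumerated set, e.g. one per computer) into a 3D surface. Data and unit
# names are synthetic placeholders.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 24, 200)                      # hours
units = [f"computer-{i}" for i in range(8)]      # the enumerated 3rd dimension
# one strip chart per unit: a parameter sampled over time (synthetic here)
charts = np.array([50 + 10 * np.sin(t / 2 + i) + i for i in range(len(units))])

T, U = np.meshgrid(t, np.arange(len(units)))
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(T, U, charts, cmap="viridis")
ax.set_xlabel("time (h)")
ax.set_ylabel("unit index")
ax.set_zlabel("parameter value")
plt.show()
```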
Denny, Lynette; Kuhn, Louise; Hu, Chih-Chi; Tsai, Wei-Yann; Wright, Thomas C
2010-10-20
Screen-and-treat approaches to cervical cancer prevention are an attractive option for low-resource settings, but data on their long-term efficacy are lacking. We evaluated the efficacy of two screen-and-treat approaches through 36 months of follow-up in a randomized trial. A total of 6637 unscreened South African women aged 35-65 years who were tested for the presence of high-risk human papillomavirus (HPV) DNA in cervical samples underwent visual inspection of the cervix using acetic acid staining and HIV serotesting. Of these, 6555 were randomly assigned to three study arms: 1) HPV-and-treat, in which all women with a positive HPV DNA test result underwent cryotherapy; 2) visual inspection-and-treat, in which all women with a positive visual inspection test result underwent cryotherapy; or 3) control, in which further evaluation or treatment was delayed for 6 months. All women underwent colposcopy with biopsy at 6 months. All women who were HPV DNA- or visual inspection-positive at enrollment, and a subset of all other women had extended follow-up to 36 months (n = 3639) with yearly colposcopy. The endpoint, cervical intraepithelial neoplasia grade 2 or worse (CIN2+), was analyzed using actuarial life-table methods. All statistical tests were two-sided. After 36 months, there was a sustained statistically significant decrease in the cumulative detection of CIN2+ in the HPV-and-treat arm compared with the control arm (1.5% vs 5.6%, difference = 4.1%, 95% confidence interval [CI] = 2.8% to 5.3%, P < .001). The difference in the cumulative detection of CIN2+ in the visual inspection-and-treat arm compared with the control was less (3.8% vs 5.6%, difference = 1.8%, 95% CI = 0.4% to 3.2%, P = .002). Incident cases of CIN2+ (identified more than 12 months after enrollment) were less common in the HPV-and-treat arm (0.3%, 95% CI = 0.05% to 1.02%) than in the control (1.0%, 95% CI = 0.5% to 1.7%) or visual inspection-and-treat (1.3%, 95% CI = 0.8% to 2.1%) arms. In this trial, a screen-and-treat approach using HPV DNA testing identified and treated prevalent cases of CIN2+ and appeared to reduce the number of incident cases of CIN2+ that developed more than 12 months after cryotherapy.
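As a rough cross-check of the reported risk difference, the sketch below computes a simple Wald confidence interval for two proportions. The trial itself used actuarial life-table methods, and the per-arm sample sizes below are assumed round numbers, so this is only an approximation.

```python
# Rough cross-check of the reported risk difference via a Wald confidence
# interval for two proportions. The group sizes are illustrative round
# numbers, not the exact arm sizes, and the trial's life-table analysis
# will not match this simple calculation exactly.
import math

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

# HPV-and-treat (1.5%) vs control (5.6%), assuming ~1200 women per arm
diff, (lo, hi) = risk_difference_ci(0.015, 1200, 0.056, 1200)
print(f"difference = {diff:.1%}, 95% CI ({lo:.1%}, {hi:.1%})")
```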
Modification of visual function by early visual experience.
Blakemore, C
1976-07-01
Physiological experiments, involving recording from the visual cortex in young kittens and monkeys, have given new insight into human developmental disorders. In the visual cortex of normal cats and monkeys most neurones are selectively sensitive to the orientation of moving edges and they receive very similar signals from both eyes. Even in very young kittens without visual experience, most neurones are binocularly driven and a small proportion of them are genuinely orientation selective. There is no passive maturation of the system in the absence of visual experience, but even very brief exposure to patterned images produces rapid emergence of the adult organization. These results are compared to observations on humans who have "recovered" from early blindness. Covering one eye in a kitten or a monkey, during a sensitive period early in life, produces a virtually complete loss of input from that eye in the cortex. These results can be correlated with the production of "stimulus deprivation amblyopia" in infants who have had one eye patched. Induction of a strabismus causes a loss of binocularity in the visual cortex, and in humans it leads to a loss of stereoscopic vision and binocular fusion. Exposing kittens to lines of one orientation modifies the preferred orientations of cortical cells and there is an analogous "meridional amblyopia" in astigmatic humans. The existence of a sensitive period in human vision is discussed, as well as the possibility of designing remedial and preventive treatments for human developmental disorders.
Operator vision aids for space teleoperation assembly and servicing
NASA Technical Reports Server (NTRS)
Brooks, Thurston L.; Ince, Ilhan; Lee, Greg
1992-01-01
This paper investigates concepts for visual operator aids required for effective telerobotic control. Operator visual aids, as defined here, mean any operational enhancement that improves man-machine control through the visual system. These concepts were derived as part of a study of vision issues for space teleoperation. Extensive literature on teleoperation, robotics, and human factors was surveyed to definitively specify appropriate requirements. This paper presents these visual aids in three general categories of camera/lighting functions, display enhancements, and operator cues. In the area of camera/lighting functions concepts are discussed for: (1) automatic end effector or task tracking; (2) novel camera designs; (3) computer-generated virtual camera views; (4) computer assisted camera/lighting placement; and (5) voice control. In the technology area of display aids, concepts are presented for: (1) zone displays, such as imminent collision or indexing limits; (2) predictive displays for temporal and spatial location; (3) stimulus-response reconciliation displays; (4) graphical display of depth cues such as 2-D symbolic depth, virtual views, and perspective depth; and (5) view enhancements through image processing and symbolic representations. Finally, operator visual cues (e.g., targets) that help identify size, distance, shape, orientation and location are discussed.
Zhang, Chao; Gao, Yang; Liu, Jiaojiao; Xue, Zhe; Lu, Yan; Deng, Lian; Tian, Lei; Feng, Qidi
2018-01-01
There are a growing number of studies focusing on delineating genetic variations that are associated with complex human traits and diseases due to recent advances in next-generation sequencing technologies. However, identifying and prioritizing disease-associated causal variants relies on understanding the distribution of genetic variations within and among populations. The PGG.Population database documents 7122 genomes representing 356 global populations from 107 countries and provides essential information for researchers to understand human genomic diversity and genetic ancestry. These data and information can facilitate the design of research studies and the interpretation of results of both evolutionary and medical studies involving human populations. The database is carefully maintained and constantly updated when new data are available. We included miscellaneous functions and a user-friendly graphical interface for visualization of genomic diversity, population relationships (genetic affinity), ancestral makeup, footprints of natural selection, and population history. Moreover, PGG.Population provides a useful feature for users to analyze data and visualize results in a dynamic style via online illustration. The long-term ambition of PGG.Population, together with the joint efforts of other researchers who contribute their data to our database, is to create a comprehensive repository of the geographic and ethnic variation of the human genome, as well as a platform to inform future practitioners of medicine and clinical investigators. PGG.Population is available at https://www.pggpopulation.org. PMID:29112749
Capability for Integrated Systems Risk-Reduction Analysis
NASA Technical Reports Server (NTRS)
Mindock, J.; Lumpkins, S.; Shelhamer, M.
2016-01-01
NASA's Human Research Program (HRP) is working to increase the likelihoods of human health and performance success during long-duration missions, and subsequent crew long-term health. To achieve these goals, there is a need to develop an integrated understanding of how the complex human physiological-socio-technical mission system behaves in spaceflight. This understanding will allow HRP to provide cross-disciplinary spaceflight countermeasures while minimizing resources such as mass, power, and volume. This understanding will also allow development of tools to assess the state of and enhance the resilience of individual crewmembers, teams, and the integrated mission system. We will discuss a set of risk-reduction questions that has been identified to guide the systems approach necessary to meet these needs. In addition, a framework of factors influencing human health and performance in space, called the Contributing Factor Map (CFM), is being applied as the backbone for incorporating information addressing these questions from sources throughout HRP. Using the common language of the CFM, information from sources such as the Human System Risk Board summaries, Integrated Research Plan, and HRP-funded publications has been combined and visualized in ways that allow insight into cross-disciplinary interconnections in a systematic, standardized fashion. We will show examples of these visualizations. We will also discuss applications of the resulting analysis capability that can inform science portfolio decisions, such as areas in which cross-disciplinary solicitations or countermeasure development will potentially be fruitful.
A New Conceptualization of Human Visual Sensory-Memory
Öğmen, Haluk; Herzog, Michael H.
2016-01-01
Memory is an essential component of cognition and disorders of memory have significant individual and societal costs. The Atkinson–Shiffrin “modal model” forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory, Short-Term Memory (STM; also called working memory, WM), and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of the iconic memory remained largely unchanged: A high capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, the iconic memory cannot hold any useful information under normal viewing conditions when objects or the subject are in motion. Hence, half-century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference-frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory. PMID:27375519
NASA Astrophysics Data System (ADS)
Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing
2016-06-01
Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we first construct an adaptive 3D visual saliency detection model to derive saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector to represent a stereoscopic image in terms of visual comfort. In the second stage, a high dimensional feature vector is fused into a single visual comfort score by performing random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
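The two-stage framework described above can be sketched as saliency-weighted disparity statistics followed by a random-forest predictor. The specific features, saliency maps, and training data below are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of the two-stage idea: saliency-weighted disparity statistics as
# features, then a random forest mapping features to a visual comfort score.
# The feature set and synthetic data are stand-ins; the paper's saliency
# model and exact statistics are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def comfort_features(disparity, saliency):
    """Saliency-weighted statistics of a disparity map (both HxW arrays)."""
    w = saliency / (saliency.sum() + 1e-12)
    mean = (w * disparity).sum()
    var = (w * (disparity - mean) ** 2).sum()
    return np.array([mean, np.sqrt(var), disparity.max(), disparity.min()])

rng = np.random.default_rng(4)
X = np.stack([comfort_features(rng.normal(0, 1 + i % 3, (64, 64)),
                               rng.random((64, 64)))
              for i in range(120)])
y = rng.uniform(1, 5, 120)   # stand-in mean opinion scores (1 = uncomfortable)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("predicted comfort:", model.predict(X[:3]).round(2))
```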
BDNF Variants May Modulate Long-Term Visual Memory Performance in a Healthy Cohort
Avgan, Nesli; Sutherland, Heidi G.; Spriggens, Lauren K.; Yu, Chieh; Ibrahim, Omar; Bellis, Claire; Haupt, Larisa M.; Shum, David H. K.; Griffiths, Lyn R.
2017-01-01
Brain-derived neurotrophic factor (BDNF) is involved in numerous cognitive functions including learning and memory. BDNF plays an important role in synaptic plasticity in humans and rats with BDNF shown to be essential for the formation of long-term memories. We previously identified a significant association between the BDNF Val66Met polymorphism (rs6265) and long-term visual memory (p-value = 0.003) in a small cohort (n = 181) comprised of healthy individuals who had been phenotyped for various aspects of memory function. In this study, we have extended the cohort to 597 individuals and examined multiple genetic variants across both the BDNF and BDNF-AS genes for association with visual memory performance as assessed by the Wechsler Memory Scale—Fourth Edition subtests Visual Reproduction I and II (VR I and II). VR I assesses immediate visual memory, whereas VR II assesses long-term visual memory. Genetic association analyses were performed for 34 single nucleotide polymorphisms genotyped on Illumina OmniExpress BeadChip arrays with the immediate and long-term visual memory phenotypes. While none of the BDNF and BDNF-AS variants were shown to be significant for immediate visual memory, we found 10 variants (including the Val66Met polymorphism (p-value = 0.006)) that were nominally associated, and three variants (two variants in BDNF and one variant in the BDNF-AS locus) that were significantly associated with long-term visual memory. Our data therefore suggest a potential role for BDNF, and its anti-sense transcript BDNF-AS, in long-term visual memory performance. PMID:28304362
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
Homman-Ludiye, Jihane; Bourne, James A.
2014-01-01
The integration of the visual stimulus takes place at the level of the neocortex, which is organized into anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision, with approximately 50% of their neocortical surface dedicated to visual processing, and they possess many more visual areas than any other mammal, making them the model of choice for studying visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex and specifying area identity, as well as to elucidate the events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, species including the mouse have driven the identification of cellular markers with area-specific expression profiles, the development of new tools to label connections, and technological advances in imaging techniques that enable monitoring of cortical activity in a behaving animal. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, gained mainly in the mouse thanks to the availability of genetically modified lines, as well as the limitations of the mouse model and the need for alternative species. PMID:25071460
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
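One common way to compare a model's saliency map against human eye-tracking data is a pixelwise Pearson correlation (the CC metric); the sketch below illustrates this on synthetic maps. The DVS paper's exact evaluation protocol may differ.

```python
# Sketch: score a model saliency map against a human fixation density map
# with Pearson's linear correlation coefficient (CC), one common saliency
# evaluation metric. Maps below are synthetic stand-ins.
import numpy as np

def cc(saliency, fixation_map):
    """Pearson correlation between a saliency map and a fixation density map."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    f = (fixation_map - fixation_map.mean()) / (fixation_map.std() + 1e-12)
    return (s * f).mean()

rng = np.random.default_rng(5)
truth = rng.random((48, 64))            # stand-in fixation density map
good_model = truth + 0.3 * rng.random((48, 64))
bad_model = rng.random((48, 64))
print("CC good:", round(cc(good_model, truth), 3))
print("CC bad :", round(cc(bad_model, truth), 3))
```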
Parrish, Audrey E; Agrillo, Christian; Perdue, Bonnie M; Beran, Michael J
2016-02-01
One approach to gaining a better understanding of how we perceive the world is to assess the errors that human and nonhuman animals make in perceptual processing. Developmental and comparative perspectives can contribute to identifying the mechanisms that underlie systematic perceptual errors often referred to as perceptual illusions. In the visual domain, some illusions appear to remain constant across the lifespan, whereas others change with age. From a comparative perspective, many of the illusions observed in humans appear to be shared with nonhuman primates. Numerosity illusions are a subset of visual illusions and occur when the spatial arrangement of stimuli within a set influences the perception of quantity. Previous research has found one such illusion that readily occurs in human adults, the Solitaire illusion. This illusion appears to be less robust in two monkey species, rhesus macaques and capuchin monkeys. We attempted to clarify the ontogeny of this illusion from a developmental and comparative perspective by testing human children and task-naïve capuchin monkeys in a computerized quantity judgment task. The overall performance of the monkeys suggested that they perceived the numerosity illusion, although there were large differences among individuals. Younger children performed similarly to the monkeys, whereas older children more consistently perceived the illusion. These findings suggest that human-unique perceptual experiences with the world might play an important role in the emergence of the Solitaire illusion in human adults, although other factors also may contribute.
The multisensory function of the human primary visual cortex.
Murray, Micah M; Thelen, Antonia; Thut, Gregor; Romei, Vincenzo; Martuzzi, Roberto; Matusz, Pawel J
2016-03-01
It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient and hard evidence that supports this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcome. This evidence emerges from a full pallet of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that there is both convergence and integration occurring within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERP/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of neocortex is multisensory in function has yet to be demonstrated, this can now be considered established in the case of the human primary visual cortex.
BatMass: a Java Software Platform for LC-MS Data Visualization in Proteomics and Metabolomics.
Avtonomov, Dmitry M; Raskind, Alexander; Nesvizhskii, Alexey I
2016-08-05
Mass spectrometry (MS) coupled to liquid chromatography (LC) is a commonly used technique in metabolomic and proteomic research. As the size and complexity of LC-MS-based experiments grow, it becomes increasingly more difficult to perform quality control of both raw data and processing results. In a practical setting, quality control steps for raw LC-MS data are often overlooked, and assessment of an experiment's success is based on some derived metrics such as "the number of identified compounds". The human brain interprets visual data much better than plain text, hence the saying "a picture is worth a thousand words". Here, we present the BatMass software package, which allows for performing quick quality control of raw LC-MS data through its fast visualization capabilities. It also serves as a testbed for developers of LC-MS data processing algorithms by providing a data access library for open mass spectrometry file formats and a means of visually mapping processing results back to the original data. We illustrate the utility of BatMass with several use cases of quality control and data exploration.
The impact of attentional, linguistic, and visual features during object naming
Clarke, Alasdair D. F.; Coco, Moreno I.; Keller, Frank
2013-01-01
Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently. PMID:24379792
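The kind of analysis described above, relating naming behavior to mixed-modality predictors, can be sketched as a logistic regression; the predictors, coefficients, and data below are illustrative stand-ins for the study's richer feature set and interaction terms.

```python
# Sketch: model whether an object gets named as a function of attentional,
# visual, and linguistic predictors. Everything here is synthetic; the
# study's actual features and model specification differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n = 300
X = np.column_stack([
    rng.random(n),          # proportion of gaze time on the object
    rng.random(n),          # visual saliency
    rng.normal(size=n),     # log word frequency (z-scored)
])
# generate stand-in naming outcomes from an assumed logistic model
logit = 3 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] - 2.0
named = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, named)
print("coefficients (attention, saliency, frequency):", model.coef_.round(2))
```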
Visually-guided attention enhances target identification in a complex auditory scene.
Best, Virginia; Ozmeral, Erol J; Shinn-Cunningham, Barbara G
2007-06-01
In auditory scenes containing many similar sound sources, sorting of acoustic information into streams becomes difficult, which can lead to disruptions in the identification of behaviorally relevant targets. This study investigated the benefit of providing simple visual cues for when and/or where a target would occur in a complex acoustic mixture. Importantly, the visual cues provided no information about the target content. In separate experiments, human subjects either identified learned birdsongs in the presence of a chorus of unlearned songs or recalled strings of spoken digits in the presence of speech maskers. A visual cue indicating which loudspeaker (from an array of five) would contain the target improved accuracy for both kinds of stimuli. A cue indicating which time segment (out of a possible five) would contain the target also improved accuracy, but much more for birdsong than for speech. These results suggest that in real world situations, information about where a target of interest is located can enhance its identification, while information about when to listen can also be helpful when targets are unfamiliar or extremely similar to their competitors.
Visual-search models for location-known detection tasks
NASA Astrophysics Data System (ADS)
Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.
2017-03-01
Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.
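For reference, a channelized Hotelling observer of the kind used as a comparison model above can be sketched in a few lines: project images onto channels, build the Hotelling template in channel space, and score a 2AFC task. The channel profiles and stand-in backgrounds are assumptions; the study's lumpy backgrounds and specific channel sets are not reproduced here.

```python
# Sketch of a channelized Hotelling observer for a location-known detection
# task. Channels and images are simplified synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(7)
npix, nchan, ntrain = 32, 6, 400

# toy radial Gaussian channels of increasing width on a 32x32 grid
yy, xx = np.mgrid[:npix, :npix] - npix / 2
r = np.hypot(xx, yy)
channels = np.stack([np.exp(-(r / (2.0 * (j + 1))) ** 2).ravel()
                     for j in range(nchan)], axis=1)      # (npix^2, nchan)

signal = np.exp(-(r / 2.0) ** 2).ravel()                  # known lesion profile
absent = rng.standard_normal((ntrain, npix * npix))       # stand-in backgrounds
present = absent + signal

v_a, v_p = absent @ channels, present @ channels          # channel outputs
S = np.cov(np.vstack([v_a, v_p]).T)                       # channel covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))         # Hotelling template

t_a, t_p = v_a @ w, v_p @ w
# 2AFC proportion correct = P(score of present image > score of absent image)
pc = (t_p[:, None] > t_a[None, :]).mean()
print(f"2AFC proportion correct: {pc:.3f}")
```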
Three-dimensional ray tracing for refractive correction of human eye ametropies
NASA Astrophysics Data System (ADS)
Jimenez-Hernandez, J. A.; Diaz-Gonzalez, G.; Trujillo-Romero, F.; Iturbe-Castillo, M. D.; Juarez-Salazar, R.; Santiago-Alvarado, A.
2016-09-01
Ametropies of the human eye are refractive defects that hamper correct imaging on the retina. The most common ways to correct them are spectacles, contact lenses, and modern methods such as laser surgery. However, in any case it is very important to identify the grade of the ametropia in order to design the optimum corrective action. In the case of laser surgery, it is necessary to define a new shape of the cornea in order to obtain the desired refractive correction. Therefore, a computational tool is required to calculate the focal length of the optical system of the eye as its geometrical parameters vary. Additionally, a clear and understandable visualization of the evaluation process is desirable. In this work, a model of the human eye based on geometrical optics principles is presented. Simulations of light rays coming from a point source six meters from the cornea are shown. We perform ray tracing in three dimensions in order to visualize the focusing regions and estimate the power of the optical system. The common parameters of ametropies can be easily modified and analyzed in the simulation through an intuitive graphical user interface.
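The core step of such a ray tracer, refraction at an interface via the vector form of Snell's law, can be sketched as follows. The refractive indices are textbook values (air ~1.0, cornea ~1.376), and the flat interface is a simplification of the corneal geometry; real eye models chain several curved surfaces.

```python
# Sketch: vector form of Snell's law for refraction at a surface, the core
# step of an eye ray tracer. Indices are textbook values; a full model adds
# curved corneal/lens surfaces and more parameters.
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at unit normal n (pointing toward the
    incoming ray), going from index n1 into n2. Returns None on total
    internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([0.0, np.sin(np.radians(10)), np.cos(np.radians(10))])  # incoming ray
n = np.array([0.0, 0.0, -1.0])             # normal of a flat interface at z = 0
out = refract(d, n, 1.0, 1.376)            # air into cornea
print("refracted direction:", out.round(4))
```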
Human listening studies reveal insights into object features extracted by echolocating dolphins
NASA Astrophysics Data System (ADS)
Delong, Caroline M.; Au, Whitlow W. L.; Roitblat, Herbert L.
2004-05-01
Echolocating dolphins extract object feature information from the acoustic parameters of object echoes. However, little is known about which object features are salient to dolphins or how they extract those features. To gain insight into how dolphins might be extracting feature information, human listeners were presented with echoes from objects used in a dolphin echoic-visual cross-modal matching task. Human participants performed a task similar to the one the dolphin had performed; however, echoic samples consisting of 23-echo trains were presented via headphones. The participants listened to the echoic sample and then visually selected the correct object from among three alternatives. The participants performed as well as or better than the dolphin (M=88.0% correct), and reported using a combination of acoustic cues to extract object features (e.g., loudness, pitch, timbre). Participants frequently reported using the pattern of aural changes in the echoes across the echo train to identify the shape and structure of the objects (e.g., peaks in loudness or pitch). It is likely that dolphins also attend to the pattern of changes across echoes as objects are echolocated from different angles.
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing.
McMenamin, Brenton W.; Marsolek, Chad J.; Morseth, Brianna K.; Speer, MacKenzie F.; Burton, Philip C.; Burgund, E. Darcy
2016-01-01
Object categorization and exemplar identification place conflicting demands on the visual system, yet humans easily perform these fundamentally contradictory tasks. Previous studies suggest the existence of dissociable visual processing subsystems to accomplish the two abilities – an abstract category (AC) subsystem that operates effectively in the left hemisphere, and a specific exemplar (SE) subsystem that operates effectively in the right hemisphere. This multiple subsystems theory explains a range of visual abilities, but previous studies have not explored what mechanisms exist for coordinating the function of multiple subsystems and/or resolving the conflicts that would arise between them. We collected functional MRI data while participants performed two variants of a cue-probe working memory task that required AC or SE processing. During the maintenance phase of the task, the bilateral intraparietal sulcus (IPS) exhibited hemispheric asymmetries in functional connectivity consistent with exerting proactive control over the two visual subsystems: greater connectivity to the left hemisphere during the AC task, and greater connectivity to the right hemisphere during the SE task. Moreover, probe-evoked activation revealed activity in a broad fronto-parietal network (containing IPS) associated with reactive control when the two visual subsystems were in conflict, and variations in this conflict signal across trials was related to the visual similarity of the cue/probe stimulus pairs. Although many studies have confirmed the existence of multiple visual processing subsystems, this study is the first to identify the mechanisms responsible for coordinating their operations. PMID:26883940
Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.
Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun
2016-01-01
Humans can easily classify different kinds of objects, whereas this remains quite difficult for computers. Object classification is thus a challenging problem that has attracted extensive interest. Deep learning, a concept inspired by neuroscience, offers one approach, and the convolutional neural network (CNN) is one deep learning method that can be applied to classification problems. However, most deep learning methods, including CNNs, ignore the human visual information processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we propose a new classification method that combines a visual attention model with a CNN. First, we use the visual attention model to simulate the human visual selection mechanism. Second, we use a CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our method classifies objects based not only on those local features but also on added human semantic features. This gives the method a clear biological motivation. Experimental results demonstrated that our method significantly improves classification efficiency.
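A schematic sketch of such an attend-then-classify pipeline, with saliency approximated by a simple center-surround difference; the CNN stage is left as a stub because the paper's network details are not given here, and all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(img):
    """Crude center-surround saliency: deviation from a heavily blurred copy."""
    return np.abs(img - gaussian_filter(img, sigma=8))

def most_salient_crop(img, size=64):
    """Crop the size x size window centered on the saliency peak."""
    s = saliency_map(img)
    y, x = np.unravel_index(np.argmax(s), s.shape)
    y0 = int(np.clip(y - size // 2, 0, img.shape[0] - size))
    x0 = int(np.clip(x - size // 2, 0, img.shape[1] - size))
    return img[y0:y0 + size, x0:x0 + size]

def classify(crop):
    """Stub for the CNN stage: extract local features from the attended
    region and combine them with semantic features for the final label."""
    raise NotImplementedError

patch = most_salient_crop(np.random.rand(256, 256))  # toy grayscale image
```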
NASA Astrophysics Data System (ADS)
Xue, Lixia; Dai, Yun; Rao, Xuejun; Wang, Cheng; Hu, Yiyun; Liu, Qian; Jiang, Wenhan
2008-01-01
Correction of higher-order aberrations can improve the visual performance of the human eye to some extent. To evaluate how much visual benefit can be obtained from higher-order aberration correction, we developed an adaptive optics vision simulator (AOVS). Dynamic, real-time optimized modal compensation was used to implement various customized strategies for correcting higher-order ocular aberrations. The experimental results indicate that higher-order aberration correction improves the visual performance of the human eye compared with lower-order correction alone, but the degree of improvement and the appropriate correction strategy differ between individuals. Some subjects gained a large visual benefit when higher-order aberrations were corrected, whereas others gained little benefit even when all higher-order aberrations were corrected. Therefore, relative to a general lower-order correction strategy, a customized higher-order aberration correction strategy is needed to obtain the optimal visual improvement for each individual. The AOVS provides an effective tool for higher-order ocular aberration optometry and customized ocular aberration correction.
Metabolic Mapping of the Brain's Response to Visual Stimulation: Studies in Humans.
ERIC Educational Resources Information Center
Phelps, Michael E.; Kuhl, David E.
1981-01-01
Studies demonstrate increasing glucose metabolic rates in human primary (PVC) and association (AVC) visual cortex as the complexity of visual scenes increases. AVC increased more rapidly with scene complexity than PVC and local metabolic activities increased above those of control subjects with eyes closed; indicates the wide range and metabolic reserve of visual…
Development of Flexible Visual Recognition Memory in Human Infants
ERIC Educational Resources Information Center
Robinson, Astri J.; Pascalis, Olivier
2004-01-01
Research using the visual paired comparison task has shown that visual recognition memory across changing contexts is dependent on the integrity of the hippocampal formation in human adults and in monkeys. The acquisition of contextual flexibility may contribute to the change in memory performance that occurs late in the first year of life. To…
Temporal dynamics of figure-ground segregation in human vision.
Neri, Peter; Levi, Dennis M
2007-01-01
The segregation of figure from ground is arguably one of the most fundamental operations in human vision. Neural signals reflecting this operation appear in cortex as early as 50 ms and as late as 300 ms after presentation of a visual stimulus, but it is not known when these signals are used by the brain to construct the percepts of figure and ground. We used psychophysical reverse correlation to identify the temporal window for figure-ground signals in human perception and found it to lie within the range of 100-160 ms. Figure enhancement within this narrow temporal window was transient rather than sustained as may be expected from measurements in single neurons. These psychophysical results prompt and guide further electrophysiological studies.
The Vestibular System and Human Dynamic Space Orientation
NASA Technical Reports Server (NTRS)
Meiry, J. L.
1966-01-01
The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck rotation proprioceptors and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion, and combined. Motion cues, sensed by the vestibular system and through tactile sensation, enable the operator to generate more lead compensation than in fixed-base simulation with only visual input. The tracking performance of the human in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.
The Naked Truth: The Face and Body Sensitive N170 Response Is Enhanced for Nude Bodies
Hietanen, Jari K.; Nummenmaa, Lauri
2011-01-01
Recent event-related potential studies have shown that the occipitotemporal N170 component - best known for its sensitivity to faces - is also sensitive to perception of human bodies. Considering that in the timescale of evolution clothing is a relatively new invention that hides the bodily features relevant for sexual selection and arousal, we investigated whether the early N170 brain response would be enhanced to nude over clothed bodies. In two experiments, we measured N170 responses to nude bodies, bodies wearing swimsuits, clothed bodies, faces, and control stimuli (cars). We found that the N170 amplitude was larger to opposite and same-sex nude vs. clothed bodies. Moreover, the N170 amplitude increased linearly as the amount of clothing decreased from full clothing via swimsuits to nude bodies. Strikingly, the N170 response to nude bodies was even greater than that to faces, and the N170 amplitude to bodies was independent of whether the face of the bodies was visible or not. All human stimuli evoked greater N170 responses than did the control stimulus. Autonomic measurements and self-evaluations showed that nude bodies were affectively more arousing compared to the other stimulus categories. We conclude that the early visual processing of human bodies is sensitive to the visibility of the sex-related features of human bodies and that the visual processing of other people's nude bodies is enhanced in the brain. This enhancement is likely to reflect affective arousal elicited by nude bodies. Such facilitated visual processing of other people's nude bodies is possibly beneficial in identifying potential mating partners and competitors, and for triggering sexual behavior. PMID:22110574
Murthy, Krishna R; Dammalli, Manjunath; Pinto, Sneha M; Murthy, Kalpana Babu; Nirujogi, Raja Sekhar; Madugundu, Anil K; Dey, Gourav; Subbannayya, Yashwanth; Mishra, Uttam Kumar; Nair, Bipin; Gowda, Harsha; Prasad, T S Keshava
2016-09-01
The annual economic burden of visual disorders in the United States was estimated at $139 billion. Ophthalmology is therefore one of the salient application fields of postgenomics biotechnologies such as proteomics in the pursuit of global precision medicine. Interestingly, the protein composition of the human iris tissue still remains largely unexplored. In this context, the uveal tract constitutes the vascular middle coat of the eye and is formed by the choroid, ciliary body, and iris. The iris forms the anterior most part of the uvea. It is a thin muscular diaphragm with a central perforation called the pupil. Inflammation of the uvea is termed uveitis and causes reduced vision or blindness. However, the pathogenesis of the spectrum of diseases causing uveitis is still not very well understood. We investigated the proteome of the iris tissue harvested from healthy donor eyes that were enucleated within 6 h of death using high-resolution Fourier transform mass spectrometry. A total of 4959 nonredundant proteins were identified in the human iris, which included proteins involved in signaling, cell communication, metabolism, immune response, and transport. This study is the first attempt to comprehensively profile the global proteome of the human iris tissue and, thus, offers the potential to facilitate biomedical research into pathological diseases of the uvea such as Behcet's disease, Vogt-Koyanagi-Harada disease, and juvenile rheumatoid arthritis. Finally, we make a call to the broader visual health and ophthalmology community that proteomics offers a veritable prospect to obtain a systems scale, functional, and dynamic picture of the eye tissue in health and disease. This knowledge is ultimately pertinent for precision medicine diagnostics and therapeutics innovation to address the pressing needs of 21st century visual health.
Hoffmann, Michael B; Wolynski, Barbara; Meltendorf, Synke; Behrens-Baumann, Wolfgang; Käsmann-Kellner, Barbara
2008-06-01
In albinism, part of the temporal retina projects abnormally to the contralateral hemisphere. A residual misprojection is also evident in feline carriers that are heterozygous for tyrosinase-related albinism. This study was conducted to test whether such residual abnormalities can also be identified in human carriers of oculocutaneous tyrosinase-related albinism (OCA1a). In eight carriers heterozygous for OCA1a and in eight age- and sex-matched control subjects, monocular pattern-reversal and -onset multifocal visual evoked potentials (mfVEPs) were recorded at 60 locations comprising a visual field of 44 degrees diameter (VERIS 5.01; EDI, San Mateo, CA). For each eye and each stimulus location, interhemispheric difference potentials were calculated and correlated with each other, to assess the lateralization of the responses: positive and negative correlations indicate lateralizations on the same or opposite hemispheres, respectively. Misrouted optic nerves are expected to yield negative interocular correlations. The analysis also allowed for the assessment of the sensitivity and specificity of the detection of projection abnormalities. No significant differences were obtained for the distributions of the interocular correlation coefficients of controls and carriers. Consequently, no local representation abnormalities were observed in the group of OCA1a carriers. For pattern-reversal and -onset stimulation, an assessment of the control data yielded similar specificity (97.9% and 94.6%) and sensitivity (74.4% and 74.8%) estimates for the detection of projection abnormalities. The absence of evidence for projection abnormalities in human OCA1a carriers contrasts with the previously reported evidence for abnormalities in cat-carriers of tyrosinase-related albinism. This discrepancy suggests that animal models of albinism may not provide a match to human albinism.
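The lateralization test described above reduces to correlating the two eyes' interhemispheric difference potentials at each stimulus location; negative correlations indicate opposite lateralization across eyes, the signature of misrouted optic nerves. A minimal numpy sketch with hypothetical array shapes:

```python
import numpy as np

def interocular_correlations(left_eye, right_eye):
    """Per-location correlation of interhemispheric difference potentials.

    left_eye, right_eye: arrays of shape (locations, timepoints) holding the
    right-minus-left hemisphere difference potential recorded for each eye.
    Negative values indicate opposite lateralization across the two eyes.
    """
    return np.array([np.corrcoef(a, b)[0, 1]
                     for a, b in zip(left_eye, right_eye)])

# Toy data: 60 stimulus locations, 100 timepoints each
rng = np.random.default_rng(0)
c = interocular_correlations(rng.standard_normal((60, 100)),
                             rng.standard_normal((60, 100)))
print(f"median interocular correlation: {np.median(c):+.2f}")
```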
Probst, Alexander J.; Auerbach, Anna K.; Moissl-Eichinger, Christine
2013-01-01
The recent era of exploring the human microbiome has provided valuable information on microbial inhabitants, both beneficial and pathogenic. Screening efforts based on DNA sequencing identified thousands of bacterial lineages associated with human skin but provided only incomplete and crude information on Archaea. Here, we report for the first time the quantification and visualization of Archaea from human skin. Based on 16S rRNA gene copies, Archaea comprised up to 4.2% of the prokaryotic skin microbiome. Most of the gene signatures analyzed belonged to the Thaumarchaeota, a group of Archaea we also found in hospitals and clean room facilities. The metabolic potential for ammonia oxidation of the skin-associated Archaea was supported by the successful detection of thaumarchaeal amoA genes in human skin samples. However, the activity and possible interaction with human epithelial cells of these associated Archaea remains an open question. Nevertheless, in this study we provide evidence that Archaea are part of the human skin microbiome and discuss their potential for ammonia turnover on human skin. PMID:23776475
Bayesian learning of visual chunks by human observers
Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté
2008-01-01
Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
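The heart of such an ideal learner is Bayesian model comparison: does a scene inventory favor a model in which two shapes form one chunk, or one in which they occur independently? A toy Dirichlet-multinomial version of that comparison (my construction for illustration, not the authors' code):

```python
import numpy as np
from scipy.special import gammaln

def log_marginal(counts, alpha=1.0):
    """Log marginal likelihood of an ordered sequence of multinomial draws
    under a symmetric Dirichlet(alpha) prior."""
    counts = np.asarray(counts, dtype=float)
    a = np.full_like(counts, alpha)
    return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
            + np.sum(gammaln(a + counts) - gammaln(a)))

# Scenes coded by presence of shapes A and B:
# counts over (A and B, A only, B only, neither)
joint = np.array([40, 2, 3, 55])

# Chunk model: the four outcomes form one multinomial.
log_chunk = log_marginal(joint)

# Independent model: A and B are separate Bernoulli variables.
n = joint.sum()
n_a, n_b = joint[0] + joint[1], joint[0] + joint[2]
log_indep = log_marginal([n_a, n - n_a]) + log_marginal([n_b, n - n_b])

# A positive log Bayes factor means the data favor storing A-B as one chunk.
print(f"log Bayes factor (chunk vs independent): {log_chunk - log_indep:+.1f}")
```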
Mechanisms of migraine aura revealed by functional MRI in human visual cortex
Hadjikhani, Nouchine; Sanchez del Rio, Margarita; Wu, Ona; Schwartz, Denis; Bakker, Dick; Fischl, Bruce; Kwong, Kenneth K.; Cutrer, F. Michael; Rosen, Bruce R.; Tootell, Roger B. H.; Sorensen, A. Gregory; Moskowitz, Michael A.
2001-01-01
Cortical spreading depression (CSD) has been suggested to underlie migraine visual aura. However, it has been challenging to test this hypothesis in human cerebral cortex. Using high-field functional MRI with near-continuous recording during visual aura in three subjects, we observed blood oxygenation level-dependent (BOLD) signal changes that demonstrated at least eight characteristics of CSD, time-locked to percept/onset of the aura. Initially, a focal increase in BOLD signal (possibly reflecting vasodilation), developed within extrastriate cortex (area V3A). This BOLD change progressed contiguously and slowly (3.5 ± 1.1 mm/min) over occipital cortex, congruent with the retinotopy of the visual percept. Following the same retinotopic progression, the BOLD signal then diminished (possibly reflecting vasoconstriction after the initial vasodilation), as did the BOLD response to visual activation. During periods with no visual stimulation, but while the subject was experiencing scintillations, BOLD signal followed the retinotopic progression of the visual percept. These data strongly suggest that an electrophysiological event such as CSD generates the aura in human visual cortex. PMID:11287655
Grossmann, Kay; Arnold, Thuro; Steudtner, Robin; Weiss, Stefan; Bernhard, Gert
2009-08-01
Low-temperature alteration reactions on uranium phases may lead to the mobilization of uranium and thereby pose a potential threat to humans living close to uranium-contaminated sites. In this study, the surface alteration of uraninite (UO(2)) and uranium tetrachloride (UCl(4)) in an air atmosphere was studied by confocal laser scanning microscopy (CLSM) and laser-induced fluorescence spectroscopy using an excitation wavelength of 408 nm. It was found that within minutes the oxidation state on the surfaces of the uraninite and the uranium tetrachloride changed. During the surface alteration process, U(IV) atoms on the uraninite and uranium tetrachloride surfaces became stepwise oxidized by one-electron steps, first to U(V) and then further to U(VI). These observed changes in the oxidation states of the uraninite surface were microscopically visualized and spectroscopically identified on the basis of their fluorescence emission signal. A fluorescence signal in the wavelength range of 415-475 nm was indicative of metastable uranium(V), and a fluorescence signal in the range of 480-560 nm was identified as uranium(VI). In addition, the oxidation process of tetravalent uranium in aqueous solution at pH 0.3 was visualized by CLSM, and U(V) was identified by fluorescence spectroscopy. The combination of microscopy and fluorescence spectroscopy provided a very convincing visualization of the brief presence of U(V) as a metastable reaction intermediate and of the simultaneous coexistence of the three states U(IV), U(V), and U(VI). These results are of significant importance for fundamental uranium redox chemistry and should contribute to a better understanding of the geochemical behavior of uranium in nature.
Pilot Task Profiles, Human Factors, And Image Realism
NASA Astrophysics Data System (ADS)
McCormick, Dennis
1982-06-01
Computer Image Generation (CIG) visual systems provide real-time scenes for state-of-the-art flight training simulators. The visual system requires a greater understanding of training tasks, human factors, and the concept of image realism to produce an effective and efficient training scene than is required by other types of visual systems. Image realism must be defined in terms of pilot visual information requirements. Human factors analysis of training and perception is necessary to determine the pilot's information requirements. System analysis then determines how the CIG and display device can best provide essential information to the pilot. This analysis procedure ensures optimum training effectiveness and system performance.
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images that imply motion through instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' implying motion, by depicting human bodies engaged in challenging tonic postures, significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas an illustration that does not imply motion, whether of humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex is a critical region for the perception of implied motion in instability.
Comparison of Object Recognition Behavior in Human and Monkey
Rajalingham, Rishi; Schmidt, Kailyn
2015-01-01
Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks on a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans. In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
Evaluation of stereoscopic display with visual function and interview
NASA Astrophysics Data System (ADS)
Okuyama, Fumio
1999-05-01
The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function testing and interviews. A 40-inch double lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured in the optometric tests were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near vision point, decrease in accommodation, and increase in phoria. The 3D-viewing interview results showed much more visual fatigue in comparison with the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function testing and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.
Video Games: A Human Factors Guide to Visual Display Design and Instructional System Design
1984-04-01
Electronic video games have many of the same technological and psychological characteristics that are found in military computer-based systems. For...both of which employ video games as experimental stimuli, are presented here. The first research program seeks to identify and exploit the...characteristics of video games in the design of game-based training devices. The second program is designed to explore the effects of electronic video display
Qualitative similarities in the visual short-term memory of pigeons and people.
Gibson, Brett; Wasserman, Edward; Luck, Steven J
2011-10-01
Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
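The "basic performance model" for change detection is commonly Pashler's (1988) capacity correction, which estimates the number of stored items K from hit and false-alarm rates; the sketch below assumes that model and uses toy numbers rather than the paper's data.

```python
def capacity_k(hit_rate, false_alarm_rate, set_size):
    """Pashler's change-detection capacity estimate: a change is detected
    when the changed item is among the K stored items; otherwise the
    observer guesses at the false-alarm rate."""
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

# Toy numbers: at set size 6, 80% hits and 20% false alarms imply K = 4.5
print(capacity_k(0.80, 0.20, 6))
```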
Kawai, Nobuyuki; He, Hongshen
2016-01-01
Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.
Decoding complex flow-field patterns in visual working memory.
Christophel, Thomas B; Haynes, John-Dylan
2014-05-01
There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions.
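The decoding logic amounts to cross-validated multivariate classification of delay-period activity patterns; above-chance accuracy in a region implies that its delay activity carries the memorized stimulus. A minimal scikit-learn sketch, with simulated data standing in for the fMRI patterns:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Placeholder data: 120 trials x 500 voxels of delay-period activity;
# labels indicate which of two flow-field layouts was memorized.
X = rng.standard_normal((120, 500))
y = np.repeat([0, 1], 60)
X[y == 1, :20] += 0.5  # inject weak stimulus information into a few voxels

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```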
Cognitive Task Analysis of the Battalion Level Visualization Process
2007-10-01
The elements of the visualization space are identified using commonly understood doctrinal language and mnemonic devices. Degree to which the commander and staff...Visualization elements are...11 skill areas were identified as potential focal points for future training development. The findings were used to design and develop exemplar
2011-08-01
generated using the Zygote Human Anatomy 3-D model (http://www.zygote.com/). Use of a reference anatomy independent of personal identification, such as Zygote, allows Visual...understanding of the information at hand. In order to fulfill the medical illustration track, I completed a concentration in science, focusing on human
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
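The deviation-based utility compares the distribution of an aggregate computed on the queried subset against the same aggregate on the full dataset; a large distance suggests the view is interesting. The sketch below uses an L2 distance between normalized distributions, which is one illustrative choice among the distance metrics such a system could plug in:

```python
import numpy as np

def utility(subset_agg, reference_agg):
    """Deviation-based utility of a candidate visualization: distance
    between normalized aggregate distributions on the subset vs. the
    full data. L2 is an illustrative metric choice."""
    p = np.asarray(subset_agg, dtype=float)
    q = np.asarray(reference_agg, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sqrt(((p - q) ** 2).sum()))

# Toy example: total sales per product category
subset = [120, 30, 10]      # e.g., sales within one customer segment
reference = [100, 90, 80]   # sales over all customers
print(f"utility = {utility(subset, reference):.3f}")  # high deviation -> recommend
```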
NASA Technical Reports Server (NTRS)
Hopkins, William D.; Washburn, David A.; Rumbaugh, Duane M.
1990-01-01
Visual forms were unilaterally presented using a video-task paradigm to ten humans, chimpanzees, and two rhesus monkeys to determine whether hemispheric advantages existed in the processing of these stimuli. Both accuracy and reaction time served as dependent measures. For the chimpanzees, a significant right hemisphere advantage was found within the first three test sessions. The humans and monkeys failed to show a hemispheric advantage as determined by accuracy scores. Analysis of reaction time data revealed a significant left hemisphere advantage for the monkeys. A visual half-field x block interaction was found for the chimpanzees, with a significant left visual field advantage in block two, whereas a right visual field advantage was found in block four. In the human subjects, a left visual field advantage was found in block three when they used their right hands to respond. The results are discussed in relation to recent reports of hemispheric advantages for nonhuman primates.
Grating-based tomography of human tissues
NASA Astrophysics Data System (ADS)
Müller, Bert; Schulz, Georg; Mehlin, Andrea; Herzen, Julia; Lang, Sabrina; Holme, Margaret; Zanette, Irene; Hieber, Simone; Deyhle, Hans; Beckmann, Felix; Pfeiffer, Franz; Weitkamp, Timm
2012-07-01
The development of therapies to improve our health requires detailed knowledge of the anatomy of soft tissues of the human body down to the cellular level. Grating-based phase contrast micro computed tomography using synchrotron radiation provides a sensitivity that allows micrometer-size anatomical features in soft tissue to be visualized without applying any contrast agent. We show phase contrast tomography data of human brain, tumor vessels and constricted arteries from beamline ID 19 (ESRF) and of urethral tissue from beamline W2 (HASYLAB/DESY) with micrometer resolution. We demonstrate that anatomical features can be identified within brain tissue as known from histology. Using human urethral tissue, the application of two photon energies is compared. Tumor vessels thicker than 20 μm can be perfectly segmented. The morphology of coronary arteries can be better extracted in formalin than after paraffin embedding.
A blind human expert echolocator shows size constancy for objects perceived by echoes.
Milne, Jennifer L; Anello, Mimma; Goodale, Melvyn A; Thaler, Lore
2015-01-01
Some blind humans make clicking noises with their mouth and use the reflected echoes to perceive objects and surfaces. This technique can operate as a crude substitute for vision, allowing human echolocators to perceive silent, distal objects. Here, we tested if echolocation would, like vision, show size constancy. To investigate this, we asked a blind expert echolocator (EE) to echolocate objects of different physical sizes presented at different distances. The EE consistently identified the true physical size of the objects independent of distance. In contrast, blind and blindfolded sighted controls did not show size constancy, even when encouraged to use mouth clicks, claps, or other signals. These findings suggest that size constancy is not a purely visual phenomenon, but that it can operate via an auditory-based substitute for vision, such as human echolocation.
Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System
Ajina, Sara; Bridge, Holly
2017-01-01
Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337
Two-stage perceptual learning to break visual crowding.
Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang
2016-01-01
When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).
AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.
Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi
Compared with traditional image classification, fine-grained visual categorization is a more challenging task because it aims to classify objects belonging to the same species, e.g., hundreds of kinds of birds or cars. In the past several years, researchers have made many advances on this topic. However, most approaches depend heavily on manual annotations, e.g., bounding boxes and part annotations. The requirement of manual annotations largely hinders scalability and application. Motivated to remove such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions. AutoBD is "automated" because it requires only the image-level labels of training images and does not need any annotations for testing images. Compared with part annotations labeled by humans, image-level labels can be easily acquired, which makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region that saliently represents the visual distinctiveness, and the object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although it uses only image-level labels, AutoBD outperforms recent studies on two public benchmarks, with classification accuracies of 81.6% on CUB-200-2011 and 88.9% on Car-196. On the large-scale Birdsnap data set, AutoBD achieves an accuracy of 68%, which is, to the best of our knowledge, currently the best performance.
Graphical Visualization of Human Exploration Capabilities
NASA Technical Reports Server (NTRS)
Rodgers, Erica M.; Williams-Byrd, Julie; Arney, Dale C.; Simon, Matthew A.; Williams, Phillip A.; Barsoum, Christopher; Cowan, Tyler; Larman, Kevin T.; Hay, Jason; Burg, Alex
2016-01-01
NASA's pioneering space strategy will require advanced capabilities to expand the boundaries of human exploration on the Journey to Mars (J2M). The Evolvable Mars Campaign (EMC) architecture serves as a framework to identify critical capabilities that need to be developed and tested in order to enable a range of human exploration destinations and missions. Agency-wide System Maturation Teams (SMT) are responsible for the maturation of these critical exploration capabilities and help formulate, guide and resolve performance gaps associated with the EMC-identified capabilities. Systems Capability Organization Reporting Engine boards (SCOREboards) were developed to integrate the SMT data sets into cohesive human exploration capability stories that can be used to promote dialog and communicate NASA's exploration investments. Each SCOREboard provides a graphical visualization of SMT capability development needs that enable exploration missions, and presents a comprehensive overview of data that outlines a roadmap of system maturation needs critical for the J2M. SCOREboards are generated by a computer program that extracts data from a main repository, sorts the data based on a tiered data reduction structure, and then plots the data according to specified user inputs. The ability to sort and plot varying data categories provides the flexibility to present specific SCOREboard capability roadmaps based on customer requests. This paper presents the development of the SCOREboard computer program and shows multiple complementary, yet different datasets through a unified format designed to facilitate comparison between datasets. Example SCOREboard capability roadmaps are presented followed by a discussion of how the roadmaps are used to: 1) communicate capability developments and readiness of systems for future missions, and 2) influence the definition of NASA's human exploration investment portfolio through capability-driven processes. The paper concludes with a description of planned future work to modify the computer program to include additional data and of alternate capability roadmap formats currently under consideration.
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out to determine typical strategies of the human operator and the conditions under which performance improves when one of the visual displays of the tracking errors is supplemented by an auditory feedback. Because the tracking error of the system that was only visually displayed was found to decrease, but not in general that of the auditorily supported system, it was concluded that the auditory feedback unloads the operator's visual system, allowing concentration on the remaining, exclusively visual displays.
Database structure for the Laser Accident and Incident Registry (LAIR)
NASA Astrophysics Data System (ADS)
Ness, James W.; Hoxie, Stephen W.; Zwick, Harry; Stuck, Bruce E.; Lund, David J.; Schmeisser, Elmar T.
1997-05-01
The ubiquity of laser radiation in the military, medical, entertainment, telecommunications and research industries and the significant risk of eye injury from this radiation are firmly established. While important advances have been made in understanding laser bioeffects using animal analogues and clinical data, the relationships among patient characteristics, exposure conditions, severity of the resulting injury, and visual function are fragmented, complex and varied. Although accident cases are minimized through laser safety regulations and control procedures, the accident case information accumulated by the laser eye injury evaluation center warranted the development of a laser accident and incident registry. The registry includes clinical data for validating and refining hypotheses on injury and recovery mechanisms; a means for analyzing mechanisms unique to human injury; and a means for identifying future areas of investigation. The relational database supports three major sections: (1) the physics section defines exposure circumstances, (2) the clinical/ophthalmologic section includes fundus and scanning laser ophthalmoscope images, and (3) the visual functions section contains specialized visual function exam results. Tools are available for subject-matter experts to estimate parameters like total intraocular energy, ophthalmic lesion grade, and exposure probability. The database is research oriented to provide a means for generating empirical relationships to identify symptoms for definitive diagnosis and treatment of laser-induced eye injuries.
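A relational layout matching the three sections described above might look like the sqlite sketch below; every table and column name is hypothetical, chosen only to illustrate the structure, not the actual LAIR schema.

```python
import sqlite3

ddl = """
CREATE TABLE exposure (            -- physics section: exposure circumstances
    case_id INTEGER PRIMARY KEY,
    wavelength_nm REAL,
    pulse_duration_s REAL,
    estimated_tie_j REAL           -- expert-estimated total intraocular energy
);
CREATE TABLE clinical (            -- clinical/ophthalmologic section
    case_id INTEGER REFERENCES exposure(case_id),
    lesion_grade TEXT,             -- expert-assigned ophthalmic lesion grade
    fundus_image BLOB,
    slo_image BLOB                 -- scanning laser ophthalmoscope image
);
CREATE TABLE visual_function (     -- specialized visual function exam results
    case_id INTEGER REFERENCES exposure(case_id),
    exam_name TEXT,
    result TEXT
);
"""

conn = sqlite3.connect(":memory:")  # in-memory database for illustration
conn.executescript(ddl)
```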
Neural signatures of lexical tone reading.
Kwok, Veronica P Y; Wang, Tianfu; Chen, Siping; Yakpo, Kofi; Zhu, Linlin; Fox, Peter T; Tan, Li Hai
2015-01-01
Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have focused exclusively on tone perception of the spoken language, and little is known about lexical tone processing in reading visual words and its associated brain mechanisms. In this study, we performed two experiments to identify neural substrates in Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters was mediated by strong brain activation in bilateral frontal regions, left inferior parietal lobule, left posterior middle/medial temporal gyrus, left inferior temporal region, bilateral visual systems, and cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In an activation likelihood estimation (ALE) meta-analysis combining the results of relevant published studies, we attempted to elucidate whether the left temporal cortex activities identified in Experiment one are consistent with those found in previous studies of auditory lexical tone perception. ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent.
Brain activity during driving with distraction: an immersive fMRI study
Schweizer, Tom A.; Kan, Karen; Hung, Yuwen; Tam, Fred; Naglie, Gary; Graham, Simon J.
2013-01-01
Introduction: Non-invasive measurements of brain activity have an important role to play in understanding driving ability. The current study aimed to identify the neural underpinnings of human driving behavior by visualizing the areas of the brain involved in driving under different levels of demand, such as driving while distracted or making left turns at busy intersections. Materials and Methods: To capture brain activity during driving, we placed a driving simulator with a fully functional steering wheel and pedals in a 3.0 Tesla functional magnetic resonance imaging (fMRI) system. To identify the brain areas involved while performing different real-world driving maneuvers, participants completed tasks ranging from simple (right turns) to more complex (left turns at busy intersections). To assess the effects of driving while distracted, participants were asked to perform an auditory task while driving analogous to speaking on a hands-free device and driving. Results: A widely distributed brain network was identified, especially when making left turns at busy intersections compared to more simple driving tasks. During distracted driving, brain activation shifted dramatically from the posterior, visual and spatial areas to the prefrontal cortex. Conclusions: Our findings suggest that the distracted brain sacrificed areas in the posterior brain important for visual attention and alertness to recruit enough brain resources to perform a secondary, cognitive task. The present findings offer important new insights into the scientific understanding of the neuro-cognitive mechanisms of driving behavior and lay down an important foundation for future clinical research. PMID:23450757
Functional communication within a perceptual network processing letters and pseudoletters.
Herdman, Anthony T
2011-10-01
Many studies have identified regions within the human ventral visual stream to be important for object identification and categorization; however, knowledge of how perceptual information is communicated within the visual network is still limited. Current theories posit that if a high correspondence exists between incoming sensory information and internal representations, the object is rapidly identified; if not, the object requires extra detailed processing. Event-related responses from the present magnetoencephalography study showed two main effects. The N1m peak latencies were approximately 15 milliseconds earlier to familiar letters than to unfamiliar pseudoletters, and the N2m was more negative to pseudoletters than to letters. Event-related beamforming analyses identified these effects to be within bilateral visual cortices with a right lateralization for the N2m effect. Furthermore, functional connectivity analyses revealed that gamma-band (50-80 Hz) oscillatory phase synchronizations among occipital regions were greater to letters than to pseudoletters (around 85 milliseconds). However, during a later time interval between 245 and 375 milliseconds, pseudoletters elicited greater gamma-band phase synchronizations among a more distributed occipital network than did letters. These findings indicate that familiar object processing begins by at least 85 milliseconds, which could represent an initial match to an internal template. In addition, unfamiliar object processing persisted longer than that for familiar objects, which could reflect greater attention to unfamiliar objects to determine their identity and/or to consolidate a new template to aid in future identification.
Electrophysiological evidence for attentional guidance by the contents of working memory.
Kumar, Sanjay; Soto, David; Humphreys, Glyn W
2009-07-01
The deployment of visual attention can be strongly modulated by stimuli matching the contents of working memory (WM), even when WM contents are detrimental to performance and salient bottom-up cues define the critical target [D. Soto et al. (2006)Vision Research, 46, 1010-1018]. Here we investigated the electrophysiological correlates of this early guidance of attention by WM in humans. Observers were presented with a prime to either identify or hold in memory. Subsequently, they had to search for a target line amongst different distractor lines. Each line was embedded within one of four objects and one of the distractor objects could match the stimulus held in WM. Behavioural data showed that performance was more strongly affected by the prime when it was held in memory than when it was merely identified. An electrophysiological measure of the efficiency of target selection (the N2pc) was also affected by the match between the item in WM and the location of the target in the search task. The N2pc was enhanced when the target fell in the same visual field as the re-presented (invalid) prime, compared with when the prime did not reappear in the search display (on neutral trials) and when the prime was contralateral to the target. Merely identifying the prime produced no effect on the N2pc component. The evidence suggests that WM modulates competitive interactions between the items in the visual field to determine the efficiency of target selection.
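For readers unfamiliar with the measure, the N2pc is conventionally computed as a contralateral-minus-ipsilateral difference wave at posterior electrodes. The sketch below uses simulated single-trial data; the PO7/PO8 electrode pair and the 200-300 ms window are common conventions, not details taken from this study.

    # Sketch: N2pc amplitude as the contralateral-minus-ipsilateral
    # difference wave at PO7/PO8, averaged over a typical 200-300 ms window.
    # Data are simulated noise; real data would show a negative deflection.
    import numpy as np

    fs = 500                                   # sampling rate (Hz)
    t = np.arange(-0.1, 0.5, 1 / fs)           # epoch time axis (s)
    rng = np.random.default_rng(0)

    # Simulated single-trial voltages: trials x time, for each target side.
    po7_right_targets = rng.normal(0, 1, (100, t.size))   # PO7 contralateral
    po8_right_targets = rng.normal(0, 1, (100, t.size))   # PO8 ipsilateral
    po8_left_targets = rng.normal(0, 1, (100, t.size))    # PO8 contralateral
    po7_left_targets = rng.normal(0, 1, (100, t.size))    # PO7 ipsilateral

    contra = np.mean(np.vstack([po7_right_targets, po8_left_targets]), axis=0)
    ipsi = np.mean(np.vstack([po8_right_targets, po7_left_targets]), axis=0)
    n2pc_wave = contra - ipsi

    window = (t >= 0.2) & (t <= 0.3)
    print("mean N2pc amplitude (µV):", n2pc_wave[window].mean())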
Audio-visual affective expression recognition
NASA Astrophysics Data System (ADS)
Huang, Thomas S.; Zeng, Zhihong
2007-11-01
Automatic affective expression recognition has attracted increasing attention from researchers in different disciplines; it will significantly contribute to a new paradigm for human computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtlety of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
Human microbiome visualization using 3D technology.
Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C
2011-01-01
High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
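As a rough illustration of the idea (though not the authors' game-engine viewer, which is a separate interactive package), a patients-by-species relative-abundance matrix can be rendered as a 3D heat map with off-the-shelf tools; the data below are simulated.

    # Sketch: a 3D heat map of microbial relative abundance
    # (patients x species) using matplotlib; toy data only.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(7)
    n_patients, n_species = 10, 15
    abundance = rng.dirichlet(np.ones(n_species), size=n_patients)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    xs, ys = np.meshgrid(np.arange(n_species), np.arange(n_patients))
    ax.bar3d(xs.ravel(), ys.ravel(), np.zeros(abundance.size),
             0.8, 0.8, abundance.ravel())      # bar height = relative abundance
    ax.set_xlabel("species"); ax.set_ylabel("patient"); ax.set_zlabel("abundance")
    plt.show()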
Pigeon visual short-term memory directly compared to primates.
Wright, Anthony A; Elmore, L Caitlin
2016-02-01
Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
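The functional characterization the authors describe reduces to fitting an inverse power law, d' = a·N^(-b), to sensitivity across display sizes N. A minimal sketch with hypothetical d' values (not the paper's data):

    # Sketch: fitting an inverse power law d' = a * N**(-b) to
    # change-detection sensitivity across display sizes.
    # The d' values below are illustrative, not the paper's data.
    import numpy as np
    from scipy.optimize import curve_fit

    display_sizes = np.array([2, 3, 4, 5, 6])
    d_prime = np.array([2.1, 1.5, 1.2, 1.0, 0.85])   # hypothetical values

    def inverse_power_law(n, a, b):
        return a * n**(-b)

    (a, b), _ = curve_fit(inverse_power_law, display_sizes, d_prime, p0=(2.0, 0.5))
    print(f"d' ≈ {a:.2f} * N^(-{b:.2f})")

Species could then be compared via their fitted exponents b, which summarize how steeply sensitivity declines with memory load.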
Single unit approaches to human vision and memory.
Kreiman, Gabriel
2007-08-01
Research on the visual system focuses on using electrophysiology, pharmacology and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow examining humans but show a much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required due to particular pathological conditions including epilepsy and Parkinson's disease. We review our knowledge about the visual system and visual memories in the human brain at the single neuron level. The properties of the human brain seem to be broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J
2017-09-27
Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified particular visual features of the mouth region that predicted this valence effect, isolating the specific visual signal that might be driving this neural valence response. Copyright © 2017 the authors 0270-6474/17/379510-09$15.00/0.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
What makes a visualization memorable?
Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter
2013-12-01
An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest-scale visualization study to date using 2,070 single-panel visualizations, categorized by visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human-recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that memorability can be quantified as a general metric of the utility of information, an essential step towards determining how to design effective visualizations.
Lobier, Muriel; Palva, J Matias; Palva, Satu
2018-01-15
Visuospatial attention prioritizes processing of attended visual stimuli. It is characterized by lateralized alpha-band (8-14 Hz) amplitude suppression in visual cortex and increased neuronal activity in a network of frontal and parietal areas. It has remained unknown what mechanisms coordinate neuronal processing among frontoparietal network and visual cortices and implement the attention-related modulations of alpha-band amplitudes and behavior. We investigated whether large-scale network synchronization could be such a mechanism. We recorded human cortical activity with magnetoencephalography (MEG) during a visuospatial attention task. We then identified the frequencies and anatomical networks of inter-areal phase synchronization from source localized MEG data. We found that visuospatial attention is associated with robust and sustained long-range synchronization of cortical oscillations exclusively in the high-alpha (10-14 Hz) frequency band. This synchronization connected frontal, parietal and visual regions and was observed concurrently with amplitude suppression of low-alpha (6-9 Hz) band oscillations in visual cortex. Furthermore, stronger high-alpha phase synchronization was associated with decreased reaction times to attended stimuli and larger suppression of alpha-band amplitudes. These results thus show that high-alpha band phase synchronization is functionally significant and could coordinate the neuronal communication underlying the implementation of visuospatial attention. Copyright © 2017 Elsevier Inc. All rights reserved.
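Inter-areal phase synchronization of the kind reported here is commonly quantified with the phase-locking value (PLV) on band-passed, Hilbert-transformed signals. The sketch below applies this to simulated source time series; the 10-14 Hz band follows the abstract, but the pipeline is a generic illustration rather than the authors' exact analysis.

    # Sketch: high-alpha-band phase-locking value (PLV) between two
    # simulated source time series (e.g., a frontal and a visual region).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 600.0                                  # sampling rate (Hz)
    rng = np.random.default_rng(1)
    n_trials, n_samples = 50, 600
    x = rng.normal(size=(n_trials, n_samples))
    y = rng.normal(size=(n_trials, n_samples))

    # Band-pass to 10-14 Hz, then extract instantaneous phase.
    b, a = butter(4, [10 / (fs / 2), 14 / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x, axis=1), axis=1))
    phase_y = np.angle(hilbert(filtfilt(b, a, y, axis=1), axis=1))

    # PLV across trials at each time point: consistency of phase difference.
    plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y)), axis=0))
    print("peak PLV:", plv.max())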
Contini, Erika W; Wardle, Susan G; Carlson, Thomas A
2017-10-01
Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
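The time-resolved decoding approach reviewed here trains and tests a classifier independently at each time point of the M/EEG epoch. Below is a minimal sketch on simulated sensor data; the linear SVM and 5-fold cross-validation are one common choice, not necessarily what the reviewed studies used.

    # Sketch: time-resolved decoding of object category from M/EEG sensor
    # patterns, one classifier per time point (simulated data).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(2)
    n_trials, n_sensors, n_times = 200, 64, 120
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    y = rng.integers(0, 2, n_trials)            # two object categories

    accuracy = np.empty(n_times)
    clf = make_pipeline(StandardScaler(), LinearSVC())
    for t in range(n_times):
        # Cross-validated accuracy using only the sensor pattern at time t.
        accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    print("peak decoding accuracy:", accuracy.max())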
Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.
Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A
2014-08-01
The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration.
Lakshminarasimhan, Kaushik J; Petsalis, Marina; Park, Hyeshin; DeAngelis, Gregory C; Pitkow, Xaq; Angelaki, Dora E
2018-06-20
Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration. Copyright © 2018 Elsevier Inc. All rights reserved.
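The two candidate explanations the authors contrast can be made concrete with a few lines of simulation: leaky integration of veridical speed versus perfect integration of speed underestimated by a slow-speed prior. All parameter values below are hypothetical; both models underestimate travelled distance and therefore predict overshooting the goal.

    # Sketch: two toy models of biased distance estimation during
    # visual path integration (hypothetical parameters).
    import numpy as np

    dt, T = 0.01, 10.0
    t = np.arange(0, T, dt)
    true_speed = np.full_like(t, 1.0)            # constant self-motion (m/s)

    # Model 1: leaky integrator of unbiased velocity estimates.
    tau = 20.0                                   # leak time constant (s)
    pos_leaky = np.zeros_like(t)
    for i in range(1, t.size):
        pos_leaky[i] = pos_leaky[i - 1] + dt * (true_speed[i] - pos_leaky[i - 1] / tau)

    # Model 2: perfect integration of prior-biased (underestimated) speed.
    prior_shrinkage = 0.8                        # slow-speed prior shrinks estimates
    pos_prior = np.cumsum(prior_shrinkage * true_speed) * dt

    # Either bias makes the subject think it has travelled less than it has,
    # so it keeps moving past the goal (overshoot).
    print("leaky-model distance estimate:", pos_leaky[-1])
    print("prior-model distance estimate:", pos_prior[-1])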
Learning prosthetic vision: a virtual-reality study.
Chen, Spencer C; Hallum, Luke E; Lovell, Nigel H; Suaning, Gregg J
2005-09-01
Acceptance of prosthetic vision will depend heavily on the ability of recipients to extract useful information from such vision. Training strategies to accelerate learning and maximize visual comprehension would need to be designed in light of the factors affecting human learning under prosthetic vision. Some of these potential factors were examined in a visual acuity study using the Landolt C optotype under virtual-reality simulation of prosthetic vision. Fifteen normally sighted subjects were tested for 10-20 sessions. Potential learning factors were tested at p < 0.05 with regression models. Learning was most evident across sessions, though 17% of sessions did show significant within-session trends. Learning was highly concentrated toward a critical range of optotype sizes, and subjects were less capable of identifying the closed optotype (a Landolt C with no gap, forming a closed annulus). Training for implant recipients should target these critical sizes and the closed optotype to extend the limit of visual comprehension. Although there was no evidence that image processing affected overall learning, subjects showed varying personal preferences.
Object-graphs for context-aware visual category discovery.
Lee, Yong Jae; Grauman, Kristen
2012-02-01
How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutant spreading, etc.). The state-of-the-art review performed for the project did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the latest available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interaction.
Perception of biological motion from size-invariant body representations.
Lappe, Markus; Wittinghofer, Karin; de Lussanet, Marc H E
2015-01-01
The visual recognition of action is one of the socially most important and computationally demanding capacities of the human visual system. It combines visual shape recognition with complex non-rigid motion perception. Action presented as a point-light animation is a striking visual experience for anyone who sees it for the first time. Information about the shape and posture of the human body is sparse in point-light animations, but it is essential for action recognition. In the posturo-temporal filter model of biological motion perception, posture information is picked up by visual neurons tuned to the form of the human body before body motion is calculated. We tested whether point-light stimuli are processed through posture recognition of the human body form by using a typical feature of form recognition, namely size invariance. We constructed a point-light stimulus that can only be perceived through a size-invariant mechanism. This stimulus changes rapidly in size from one image to the next. It thus disrupts continuity of early visuo-spatial properties but maintains continuity of the body posture representation. Despite this massive manipulation at the visuo-spatial level, size-changing point-light figures are spontaneously recognized by naive observers, and support discrimination of human body motion.
Boutet, Isabelle; Collin, Charles A; MacLeod, Lindsey S; Messier, Claude; Holahan, Matthew R; Berry-Kravis, Elizabeth; Gandhi, Reno M; Kogan, Cary S
2018-01-01
To generate meaningful information, translational research must employ paradigms that allow extrapolation from animal models to humans. However, few studies have evaluated translational paradigms on the basis of defined validation criteria. We outline three criteria for validating translational paradigms. We then evaluate the Hebb-Williams maze paradigm (Hebb and Williams, 1946; Rabinovitch and Rosvold, 1951) on the basis of these criteria using Fragile X syndrome (FXS) as model disease. We focused on this paradigm because it allows direct comparison of humans and animals on tasks that are behaviorally equivalent (criterion #1) and because it measures spatial information processing, a cognitive domain for which FXS individuals and mice show impairments as compared to controls (criterion #2). We directly compared the performance of affected humans and mice across different experimental conditions and measures of behavior to identify which conditions produce comparable patterns of results in both species. Species differences were negligible for Mazes 2, 4, and 5 irrespective of the presence of visual cues, suggesting that these mazes could be used to measure spatial learning in both species. With regards to performance on the first trial, which reflects visuo-spatial problem solving, Mazes 5 and 9 without visual cues produced the most consistent results. We conclude that the Hebb-Williams mazes paradigm has the potential to be utilized in translational research to measure comparable cognitive functions in FXS humans and animals (criterion #3).
Zhang, Chao; Gao, Yang; Liu, Jiaojiao; Xue, Zhe; Lu, Yan; Deng, Lian; Tian, Lei; Feng, Qidi; Xu, Shuhua
2018-01-04
There are a growing number of studies focusing on delineating genetic variations that are associated with complex human traits and diseases, owing to recent advances in next-generation sequencing technologies. However, identifying and prioritizing disease-associated causal variants relies on understanding the distribution of genetic variations within and among populations. The PGG.Population database documents 7122 genomes representing 356 global populations from 107 countries and provides essential information for researchers to understand human genomic diversity and genetic ancestry. These data and information can facilitate the design of research studies and the interpretation of results of both evolutionary and medical studies involving human populations. The database is carefully maintained and constantly updated when new data are available. We included miscellaneous functions and a user-friendly graphical interface for visualization of genomic diversity, population relationships (genetic affinity), ancestral makeup, footprints of natural selection, population history, etc. Moreover, PGG.Population provides a useful feature for users to analyze data and visualize results in a dynamic style via online illustration. The long-term ambition of PGG.Population, together with the joint efforts of other researchers who contribute their data to our database, is to create a comprehensive depository of geographic and ethnic variation of the human genome, as well as a platform that will influence future practitioners of medicine and clinical investigators. PGG.Population is available at https://www.pggpopulation.org. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Sepsis Visualization
2015-10-01
…overview visualization to help clinicians identify patients that are changing, and inserted these indices into the sepsis-specific decision support…visualization; 4) created a sepsis identification visualization tool to help clinicians identify patients headed for septic shock; and 5) generated a…
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We circumvented these limitations by letting participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signals related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion, which was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multimodal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation. Copyright © 2018 Elsevier Inc. All rights reserved.
Simultaneous chromatic and luminance human electroretinogram responses
Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan
2012-01-01
The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats’ ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
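The frequency-tagging logic of the compound stimulus (luminance modulated at twice the chromatic frequency) can be illustrated by simulating a response and reading off the first and second harmonics from its spectrum. Frequencies and amplitudes below are illustrative, not the study's parameters.

    # Sketch: separating chromatic (1st harmonic) and luminance
    # (2nd harmonic) contributions to a simulated ERG with an FFT.
    import numpy as np

    fs, dur = 1000.0, 2.0
    t = np.arange(0, dur, 1 / fs)
    f_chrom = 4.0                                 # chromatic modulation (Hz)
    stimulus_chrom = np.sin(2 * np.pi * f_chrom * t)
    stimulus_lum = np.sin(2 * np.pi * 2 * f_chrom * t)   # twice the frequency

    # Simulated ERG: chromatic pathway drives the 1st harmonic,
    # luminance pathway the 2nd, plus noise.
    rng = np.random.default_rng(3)
    erg = 0.5 * stimulus_chrom + 1.2 * stimulus_lum + rng.normal(0, 0.3, t.size)

    spectrum = np.abs(np.fft.rfft(erg)) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for f in (f_chrom, 2 * f_chrom):
        print(f"amplitude at {f:.0f} Hz:", spectrum[np.argmin(np.abs(freqs - f))])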
Stocco, Andrea; Prat, Chantel S.; Losey, Darby M.; Cronin, Jeneva A.; Wu, Joseph; Abernethy, Justin A.; Rao, Rajesh P. N.
2015-01-01
We present, to our knowledge, the first demonstration that a non-invasive brain-to-brain interface (BBI) can be used to allow one human to guess what is on the mind of another human through an interactive question-and-answering paradigm similar to the “20 Questions” game. As in previous non-invasive BBI studies in humans, our interface uses electroencephalography (EEG) to detect specific patterns of brain activity from one participant (the “respondent”), and transcranial magnetic stimulation (TMS) to deliver functionally-relevant information to the brain of a second participant (the “inquirer”). Our results extend previous BBI research by (1) using stimulation of the visual cortex to convey visual stimuli that are privately experienced and consciously perceived by the inquirer; (2) exploiting real-time rather than off-line communication of information from one brain to another; and (3) employing an interactive task, in which the inquirer and respondent must exchange information bi-directionally to collaboratively solve the task. The results demonstrate that using the BBI, ten participants (five inquirer-respondent pairs) can successfully identify a “mystery item” using a true/false question-answering protocol similar to the “20 Questions” game, with high levels of accuracy that are significantly greater than a control condition in which participants were connected through a sham BBI. PMID:26398267
Common Sense in Choice: The Effect of Sensory Modality on Neural Value Representations.
Shuster, Anastasia; Levy, Dino J
2018-01-01
Although it is well established that the ventromedial prefrontal cortex (vmPFC) represents value using a common currency across categories of rewards, it is unknown whether the vmPFC represents value irrespective of the sensory modality in which alternatives are presented. In the current study, male and female human subjects completed a decision-making task while their neural activity was recorded using functional magnetic resonance imaging. On each trial, subjects chose between a safe alternative and a lottery, which was presented visually or aurally. A univariate conjunction analysis revealed that the anterior portion of the vmPFC tracks subjective value (SV) irrespective of the sensory modality. Using a novel cross-modality multivariate classifier, we were able to decode auditory value based on visual trials and vice versa. In addition, we found that the visual and auditory sensory cortices, which were identified using functional localizers, are also sensitive to the value of stimuli, albeit in a modality-specific manner. Whereas both primary and higher-order auditory cortices represented auditory SV (aSV), only a higher-order visual area represented visual SV (vSV). These findings expand our understanding of the common currency network of the brain and shed a new light on the interplay between sensory and value information processing.
The role of temporo-parietal junction (TPJ) in global Gestalt perception.
Huberle, Elisabeth; Karnath, Hans-Otto
2012-07-01
Grouping processes enable the coherent perception of our environment. A number of brain areas have been suggested to be involved in the integration of elements into objects, including early and higher visual areas along the ventral visual pathway as well as motion-processing areas of the dorsal visual pathway. However, integration is not only required for the cortical representation of individual objects, but is also essential for the perception of more complex visual scenes consisting of several different objects and/or shapes. The present fMRI experiments aimed to address such integration processes. We investigated the neural correlates underlying the global Gestalt perception of hierarchically organized stimuli that allowed parametrical degrading of the object at the global level. The comparison of intact versus disturbed perception of the global Gestalt revealed a network of cortical areas including the temporo-parietal junction (TPJ), anterior cingulate cortex and the precuneus. The TPJ location corresponds well with the areas known to be typically lesioned in stroke patients with simultanagnosia following bilateral brain damage. These patients typically show a deficit in identifying the global Gestalt of a visual scene. Further, we found the closest relation between behavioral performance and fMRI activation for the TPJ. Our data thus argue for a significant role of the TPJ in human global Gestalt perception.
Weaver, Timothy D; Gunz, Philipp
2018-04-01
Researchers studying extant and extinct taxa are often interested in identifying the evolutionary processes that have led to the morphological differences among the taxa. Ideally, one could distinguish the influences of neutral evolutionary processes (genetic drift, mutation) from natural selection, and in situations for which selection is implicated, identify the targets of selection. The directional selection gradient is an effective tool for investigating evolutionary process, because it can relate form (size and shape) differences between taxa to the variation and covariation found within taxa. However, although most modern morphometric analyses use the tools of geometric morphometrics (GM) to analyze landmark data, to date, selection gradients have mainly been calculated from linear measurements. To address this methodological gap, here we present a GM approach for visualizing and comparing between-taxon selection gradients with each other, associated difference vectors, and "selection" gradients from neutral simulations. To exemplify our approach, we use a dataset of 347 three-dimensional landmarks and semilandmarks recorded on the crania of 260 primate specimens (112 humans, 67 common chimpanzees, 36 bonobos, 45 gorillas). Results on this example dataset show how incorporating geometric information can provide important insights into the evolution of the human braincase, and serve to demonstrate the utility of our approach for understanding morphological evolution. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.
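In the Lande framework the directional selection gradient is β = P⁻¹Δz̄, where P is the pooled within-taxon phenotypic covariance matrix and Δz̄ the between-taxon mean difference. A minimal sketch on toy landmark data; a real geometric morphometric analysis would first Procrustes-align the landmarks, and the pseudoinverse stands in for P⁻¹ because aligned shape data are rank-deficient.

    # Sketch: directional selection gradient beta = pinv(P) @ dz
    # from flattened landmark coordinates (toy data).
    import numpy as np

    rng = np.random.default_rng(4)
    p = 6                                        # flattened landmark coordinates
    taxon_a = rng.normal(0.0, 1.0, size=(60, p))
    taxon_b = rng.normal(0.3, 1.0, size=(55, p))

    # Between-taxon mean difference.
    dz = taxon_b.mean(axis=0) - taxon_a.mean(axis=0)
    # Pooled within-taxon phenotypic covariance matrix.
    pooled_cov = ((len(taxon_a) - 1) * np.cov(taxon_a, rowvar=False) +
                  (len(taxon_b) - 1) * np.cov(taxon_b, rowvar=False)) / \
                 (len(taxon_a) + len(taxon_b) - 2)

    # Pseudoinverse handles the rank deficiency of aligned shape data.
    beta = np.linalg.pinv(pooled_cov) @ dz
    print("selection gradient:", np.round(beta, 3))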
Limanowski, Jakub; Blankenburg, Felix
2016-03-02
The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation. Copyright © 2016 the authors 0270-6474/16/362582-08$15.00/0.
Washburne, Alex D; Silverman, Justin D; Leff, Jonathan W; Bennett, Dominic J; Darcy, John L; Mukherjee, Sayan; Fierer, Noah; David, Lawrence A
2017-01-01
Marker gene sequencing of microbial communities has generated big datasets of microbial relative abundances varying across environmental conditions, sample sites and treatments. These data often come with putative phylogenies, providing unique opportunities to investigate how shared evolutionary history affects microbial abundance patterns. Here, we present a method to identify the phylogenetic factors driving patterns in microbial community composition. We use the method, "phylofactorization," to re-analyze datasets from the human body and soil microbial communities, demonstrating how phylofactorization is a dimensionality-reducing tool, an ordination-visualization tool, and an inferential tool for identifying edges in the phylogeny along which putative functional ecological traits may have arisen.
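Stripped to its core, a single phylofactorization step contrasts the taxa on either side of a candidate phylogenetic edge via an isometric log-ratio (ILR) balance and keeps the edge whose balance best explains the sample metadata. The sketch below is a simplified illustration on toy data; the published method factors sequentially, derives candidate edges from the phylogeny, and handles zeros more carefully.

    # Sketch: choosing the first "phylofactor" as the edge whose ILR
    # balance correlates best with a sample covariate (toy data).
    import numpy as np

    rng = np.random.default_rng(5)
    n_samples, n_taxa = 30, 8
    abundance = rng.dirichlet(np.ones(n_taxa), size=n_samples)  # relative abundances
    env = rng.normal(size=n_samples)                            # sample covariate

    # Candidate edges, each represented by the taxa on one side of the split.
    edges = [{0, 1}, {0, 1, 2}, {4, 5, 6, 7}, {3}]

    def ilr_balance(x, group):
        """ILR balance of `group` versus the remaining taxa for one sample."""
        g = sorted(group)
        rest = [i for i in range(x.size) if i not in group]
        r, s = len(g), len(rest)
        coef = np.sqrt(r * s / (r + s))
        return coef * (np.log(x[g]).mean() - np.log(x[rest]).mean())

    best = max(edges, key=lambda e: abs(np.corrcoef(
        [ilr_balance(x, e) for x in abundance], env)[0, 1]))
    print("edge (clade) chosen as first phylofactor:", best)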
Automatic identification of bacterial types using statistical imaging methods
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon
2003-05-01
The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology is enabling an increase in the number of phages used for typing. The statistical methodology presented in this work provides for an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.
Human image tracking technique applied to remote collaborative environments
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Suzuki, Gen
1993-10-01
To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When people move frequently or over a wide area, the necessity for automatic human tracking increases. Based on the movement area of the person and the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area tracks the movement of the human head fairly well.
The µ-opioid system promotes visual attention to faces and eyes.
Chelnokova, Olga; Laeng, Bruno; Løseth, Guro; Eikemo, Marie; Willoch, Frode; Leknes, Siri
2016-12-01
Paying attention to others' faces and eyes is a cornerstone of human social behavior. The µ-opioid receptor (MOR) system, central to social reward-processing in rodents and primates, has been proposed to mediate the capacity for affiliative reward in humans. We assessed the role of the human MOR system in visual exploration of faces and eyes of conspecifics. Thirty healthy males received a novel, bidirectional battery of psychopharmacological treatment (an MOR agonist, a non-selective opioid antagonist, or placebo, on three separate days). Eye-movements were recorded while participants viewed facial photographs. We predicted that the MOR system would promote visual exploration of faces, and hypothesized that MOR agonism would increase, whereas antagonism decrease overt attention to the information-rich eye region. The expected linear effect of MOR manipulation on visual attention to the stimuli was observed, such that MOR agonism increased while antagonism decreased visual exploration of faces and overt attention to the eyes. The observed effects suggest that the human MOR system promotes overt visual attention to socially significant cues, in line with theories linking reward value to gaze control and target selection. Enhanced attention to others' faces and eyes represents a putative behavioral mechanism through which the human MOR system promotes social interest. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
2008-07-02
CAPE CANAVERAL, Fla. –David Voci, NYIT MOCAP (Motion Capture) team co-director (seated at the workstation in the background) prepares to direct a motion capture session assisted by Kennedy Advanced Visualizations Environment staff led by Brad Lawrence (not pictured) and by Lora Ridgwell from United Space Alliance Human Factors (foreground, left). Ridgwell will help assemble the Orion Crew Module mockup. The motion tracking aims to improve efficiency of assembly processes and identify potential ergonomic risks for technicians assembling the mockup. The work is being performed in United Space Alliance's Human Engineering Modeling and Performance Lab in the RLV Hangar at NASA's Kennedy Space Center. Part of NASA's Constellation Program, the Orion spacecraft will return humans to the moon and prepare for future voyages to Mars and other destinations in our solar system.
Neuroimaging Evidence of a Bilateral Representation for Visually Presented Numbers.
Grotheer, Mareike; Herrmann, Karl-Heinz; Kovács, Gyula
2016-01-06
The clustered architecture of the brain for different visual stimulus categories is one of the most fascinating topics in the cognitive neurosciences. Interestingly, recent research suggests the existence of additional regions for newly acquired stimuli such as letters (letter form area; LFA; Thesen et al., 2012) and numbers (visual number form area; NFA; Shum et al., 2013). However, neuroimaging methods thus far have failed to visualize the NFA in healthy participants, likely due to fMRI signal dropout caused by the air/bone interface of the petrous bone (Shum et al., 2013). In the current study, we combined a 64-channel head coil with high spatial resolution, localized shimming, and liberal smoothing, thereby decreasing the signal dropout and increasing the temporal signal-to-noise ratio in the neighborhood of the NFA. We presented subjects with numbers, letters, false numbers, false letters, objects and their Fourier randomized versions. A group analysis showed significant activations in the inferior temporal gyrus at the previously proposed location of the NFA. Crucially, we found the NFA to be present in both hemispheres. Further, we could identify the NFA on the single-subject level in most of our participants. A detailed analysis of the response profile of the NFA in two separate experiments confirmed the whole-brain results since responses to numbers were significantly higher than to any other presented stimulus in both hemispheres. Our results show for the first time the existence and stimulus selectivity of the NFA in the healthy human brain. This fMRI study shows for the first time a cluster of neurons selective for visually presented numbers in healthy human adults. This visual number form area (NFA) was found in both hemispheres. Crucially, numbers have gained importance for humans too recently for neuronal specialization to be established by evolution. Therefore, investigations of this region will greatly advance our understanding of learning and plasticity in the brain. In addition, these results will aid our knowledge regarding related neurological illnesses (e.g., dyscalculia). To overcome the fMRI signal dropout in the neighborhood of the NFA, we combined high spatial resolution with liberal smoothing. We believe that this approach will be useful to the broad neuroimaging community. Copyright © 2016 the authors 0270-6474/16/360088-10$15.00/0.
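The quantity the authors report improving near the NFA, the temporal signal-to-noise ratio (tSNR), is simply a voxel's mean signal divided by its standard deviation across time. A minimal sketch on simulated time series:

    # Sketch: temporal signal-to-noise ratio (tSNR) of fMRI voxel
    # time series, computed on simulated data (arbitrary units).
    import numpy as np

    rng = np.random.default_rng(6)
    n_voxels, n_volumes = 1000, 300
    baseline = rng.uniform(800, 1200, size=(n_voxels, 1))
    timeseries = baseline + rng.normal(0, 20, size=(n_voxels, n_volumes))

    # tSNR per voxel: temporal mean over temporal standard deviation.
    tsnr = timeseries.mean(axis=1) / timeseries.std(axis=1)
    print("median tSNR:", np.median(tsnr))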
Visual performance modeling in the human operator simulator
NASA Technical Reports Server (NTRS)
Strieb, M. I.
1979-01-01
A brief description of the history of the development of the human operator simulator (HOS) model is presented. Features of the HOS micromodels that affect the acquisition of visual performance data are discussed, along with preliminary details of a HOS pilot model designed to predict the results of visual performance workload data obtained through oculometer studies of pilots in real and simulated approaches and landings.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used here to deliver equivalents of visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure the classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
Influence of Immersive Human Scale Architectural Representation on Design Judgment
NASA Astrophysics Data System (ADS)
Elder, Rebecca L.
Unrealistic visual representations of architecture within our existing environments have lost all reference to the human senses. As a design tool, visual and auditory stimuli can be used to determine humans' perception of design. This experiment renders varying building inputs within different sites, simulated with corresponding immersive visual and audio sensory cues. Introducing audio has been shown to influence the way a person perceives a space, yet most inhabitants rely strictly on their sense of vision to make design judgments. Though not as apparent, users prefer spaces that have a better quality of sound and comfort. Through a series of questions, we can begin to analyze whether a design is fit for both an acoustic and a visual environment.
Pearce, Eiluned; Stringer, Chris; Dunbar, R. I. M.
2013-01-01
Previous research has identified morphological differences between the brains of Neanderthals and anatomically modern humans (AMHs). However, studies using endocasts or the cranium itself are limited to investigating external surface features and the overall size and shape of the brain. A complementary approach uses comparative primate data to estimate the size of internal brain areas. Previous attempts to do this have generally assumed that identical total brain volumes imply identical internal organization. Here, we argue that, in the case of Neanderthals and AMHs, differences in the size of the body and visual system imply differences in organization between the same-sized brains of these two taxa. We show that Neanderthals had significantly larger visual systems than contemporary AMHs (indexed by orbital volume) and that when this, along with their greater body mass, is taken into account, Neanderthals have significantly smaller adjusted endocranial capacities than contemporary AMHs. We discuss possible implications of differing brain organization in terms of social cognition, and consider these in the context of differing abilities to cope with fluctuating resources and cultural maintenance. PMID:23486442
Runtime visualization of the human arterial tree.
Insley, Joseph A; Papka, Michael E; Dong, Suchuan; Karniadakis, George; Karonis, Nicholas T
2007-01-01
Large-scale simulation codes typically execute for extended periods of time and often on distributed computational resources. Because these simulations can run for hours, or even days, scientists like to get feedback about the state of the computation and the validity of its results as it runs. It is also important that these capabilities be made available with little impact on the performance and stability of the simulation. Visualizing and exploring data in the early stages of the simulation can help scientists identify problems early, potentially avoiding a situation where a simulation runs for several days, only to discover that an error with an input parameter caused both time and resources to be wasted. We describe an application that aids in the monitoring and analysis of a simulation of the human arterial tree. The application provides researchers with high-level feedback about the state of the ongoing simulation and enables them to investigate particular areas of interest in greater detail. The application also offers monitoring information about the amount of data produced and data transfer performance among the various components of the application.
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is one of the effective methods for analyzing human motor functions. We have previously examined, in a simulation study, the possibility of improving hand movement on visual target tracking with an additional assistant force. In this study, a method for compensating human hand movement on visual target tracking by adding an assistant force was proposed. The effectiveness of the compensation method was investigated through an experiment with four healthy adults. The proposed compensator improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed by using measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects can be reflected in the structure of the compensator. Therefore, the proposed method has the potential to accommodate the individual properties of patients with various movement disorders caused by brain dysfunction.
Human low vision image warping - Channel matching considerations
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Smith, Alan T.; Loshin, David S.
1992-01-01
We are investigating the possibility that a video image may productively be warped prior to presentation to a low vision patient, which could form part of a prosthesis for certain field defects. We have done preliminary quantitative studies of some notions that may be valid in calculating the image warpings. We hope the results will help make the best use of time to be spent with human subjects by guiding the selection of parameters and the ranges to be investigated. We liken a warping optimization to opening the largest number of spatial channels between the pixels of an input imager and resolution cells in the visual system. Some important effects that will require human evaluation are not quantified, such as local 'squashing' of the image, taken as the ratio of eigenvalues of the Jacobian of the transformation. The results indicate that the method shows quantitative promise, and they have identified some geometric transformations to evaluate further with human subjects.
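As a toy illustration of the 'squashing' measure mentioned above, the sketch below numerically differentiates a hypothetical radial warp and reports the ratio of the Jacobian's singular values (using singular values rather than raw eigenvalues is our choice, since they stay real for non-symmetric Jacobians; the warp function itself is an assumption, not one of the paper's transformations):

    import numpy as np

    def squash_ratio(warp, x, y, eps=1e-5):
        # Finite-difference Jacobian of the warp at (x, y); the ratio of its
        # singular values measures local anisotropic 'squashing' (1 = none).
        u0 = np.array(warp(x, y))
        d_dx = (np.array(warp(x + eps, y)) - u0) / eps
        d_dy = (np.array(warp(x, y + eps)) - u0) / eps
        J = np.column_stack([d_dx, d_dy])
        s = np.linalg.svd(J, compute_uv=False)
        return s.min() / s.max()

    def warp(x, y, k=0.7):
        # Hypothetical radially compressive warp of the kind used to remap a
        # field defect; the exponent k is an assumption.
        r = np.hypot(x, y) + 1e-9
        return (r ** k) * x / r, (r ** k) * y / r

    print(squash_ratio(warp, 0.5, 0.2))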
Bombeke, Klaas; Duthoo, Wout; Mueller, Sven C; Hopf, Jens-Max; Boehler, C Nico
2016-02-15
Controversy revolves around the question of whether psychological factors like attention and emotion can influence the initial feedforward response in primary visual cortex (V1). Although traditionally the electrophysiological correlate of this response in humans (the C1 component) has been found to be unaltered by psychological influences, a number of recent studies have described attentional and emotional modulations. Yet research into psychological effects on the feedforward V1 response has neglected possible direct contributions of concomitant pupil-size modulations, which are known to also occur under various conditions of attentional load and emotional state. Here we tested the hypothesis that such pupil-size differences themselves directly affect the feedforward V1 response. We report data from two complementary experiments, in which we used procedures that modulate pupil size without differences in attentional load or emotion while simultaneously recording pupil-size and EEG data. Our results confirm that pupil size indeed directly influences the feedforward V1 response, showing an inverse relationship between pupil size and early V1 activity. While it is unclear to what extent this effect represents a functionally relevant adaptation, it identifies pupil-size differences as an important modulating factor of the feedforward response of V1 and could hence represent a confounding variable in research investigating the neural influence of psychological factors on early visual processing.
Behaviorally Relevant Abstract Object Identity Representation in the Human Parietal Cortex
Jeong, Su Keun
2016-01-01
The representation of object identity is fundamental to human vision. Using fMRI and multivoxel pattern analysis, here we report the representation of highly abstract object identity information in human parietal cortex. Specifically, in superior intraparietal sulcus (IPS), a region previously shown to track visual short-term memory capacity, we found object identity representations for famous faces varying freely in viewpoint, hairstyle, facial expression, and age; and for well known cars embedded in different scenes, and shown from different viewpoints and sizes. Critically, these parietal identity representations were behaviorally relevant as they closely tracked the perceived face-identity similarity obtained in a behavioral task. Meanwhile, the task-activated regions in prefrontal and parietal cortices (excluding superior IPS) did not exhibit such abstract object identity representations. Unlike previous studies, we also failed to observe identity representations in posterior ventral and lateral visual object-processing regions, likely due to the greater amount of identity abstraction demanded by our stimulus manipulation here. Our MRI slice coverage precluded us from examining identity representation in anterior temporal lobe, a likely region for the computing of identity information in the ventral region. Overall, we show that human parietal cortex, part of the dorsal visual processing pathway, is capable of holding abstract and complex visual representations that are behaviorally relevant. These results argue against a “content-poor” view of the role of parietal cortex in attention. Instead, the human parietal cortex seems to be “content rich” and capable of directly participating in goal-driven visual information representation in the brain. SIGNIFICANCE STATEMENT The representation of object identity (including faces) is fundamental to human vision and shapes how we interact with the world. Although object representation has traditionally been associated with human occipital and temporal cortices, here we show, by measuring fMRI response patterns, that a region in the human parietal cortex can robustly represent task-relevant object identities. These representations are invariant to changes in a host of visual features, such as viewpoint, and reflect an abstract level of representation that has not previously been reported in the human parietal cortex. Critically, these neural representations are behaviorally relevant as they closely track the perceived object identities. Human parietal cortex thus participates in the moment-to-moment goal-directed visual information representation in the brain. PMID:26843642
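The behavioral relevance reported above rests on comparing neural pattern similarity with perceived similarity; here is a minimal representational-similarity sketch in Python (condition counts, voxel counts, and data are all hypothetical, and this generic method is only in the spirit of, not identical to, the authors' analysis):

    import numpy as np
    from scipy.stats import spearmanr

    def rsa(patterns, behavioral_rdm):
        # Neural representational dissimilarity matrix (RDM) from voxel
        # patterns (conditions x voxels), compared against a behavioral RDM.
        neural_rdm = 1 - np.corrcoef(patterns)
        iu = np.triu_indices_from(neural_rdm, k=1)  # unique off-diagonal cells
        return spearmanr(neural_rdm[iu], behavioral_rdm[iu])

    rng = np.random.default_rng(1)
    patterns = rng.standard_normal((8, 200))                   # 8 identities x 200 voxels
    behavior = 1 - np.corrcoef(rng.standard_normal((8, 20)))   # stand-in perceived similarity
    print(rsa(patterns, behavior))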
ERIC Educational Resources Information Center
Bidet-Ildei, Christel; Kitromilides, Elenitsa; Orliaguet, Jean-Pierre; Pavlova, Marina; Gentaz, Edouard
2014-01-01
In human newborns, spontaneous visual preference for biological motion is reported to occur at birth, but the factors underpinning this preference are still in debate. Using a standard visual preferential looking paradigm, 4 experiments were carried out in 3-day-old human newborns to assess the influence of translational displacement on perception…
Visual Requirements for Human Drivers and Autonomous Vehicles
DOT National Transportation Integrated Search
2016-03-01
Identification of published literature between 1995 and 2013, focusing on determining the quantity and quality of visual information needed under both driving modes (i.e., human and autonomous) to navigate the road safely, especially as it pertains t...
NASA's Current Evidence and Hypothesis for the Visual Impairment and Intracranial Pressure Risk
NASA Technical Reports Server (NTRS)
Otto, Christian A.; Norsk, Peter; Oubre, Cherie M.; Pass, Anastas F.; Tarver, William
2012-01-01
Although 40 years of human spaceflight exploration have reported visual decrements to some extent in a subgroup of astronauts, recent data suggest that there is indeed a subset of crewmembers who experience refraction changes (hyperopic shift), cotton wool spot formation, choroidal fold development, papilledema, optic nerve sheath distention, and/or posterior globe flattening with varying degrees of severity and permanence. Pre- and postflight ocular measures have identified a potential risk of permanent visual changes as a result of microgravity exposure, which has been defined as the Visual Impairment and Intracranial Pressure (VIIP) risk. The combination of symptoms is referred to as the VIIP syndrome. It is thought that the ocular structural and optic nerve changes are caused by events precipitated by the cephalad fluid shift crewmembers experience during long-duration spaceflight. Three important systems, ocular, cardiovascular, and central nervous, seem to be involved in the development of symptoms, but the etiology remains under investigation. It is believed that some crewmembers are more susceptible to these changes due to genetic/anatomical predisposition or lifestyle (fitness) related factors. Future research will focus on determining the etiology of the VIIP syndrome and developing mechanisms to mitigate this spaceflight risk.
Virtual reality and 3D animation in forensic visualization.
Ma, Minhua; Zheng, Huiru; Lallie, Harjinder
2010-09-01
Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation.
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
Patterns and comparisons of human-induced changes in river flood impacts in cities
NASA Astrophysics Data System (ADS)
Clark, Stephanie; Sharma, Ashish; Sisson, Scott A.
2018-03-01
In this study, information extracted from the first global urban fluvial flood risk data set (Aqueduct) is investigated and visualized to explore current and projected city-level flood impacts driven by urbanization and climate change. We use a novel adaptation of the self-organizing map (SOM) method, an artificial neural network proficient at clustering, pattern extraction, and visualization of large, multi-dimensional data sets. Prevalent patterns of current relationships and anticipated changes over time in the nonlinearly-related environmental and social variables are presented, relating urban river flood impacts to socioeconomic development and changing hydrologic conditions. Comparisons are provided between 98 individual cities. Output visualizations compare baseline and changing trends of city-specific exposures of population and property to river flooding, revealing relationships between the cities based on their relative map placements. Cities experiencing high (or low) baseline flood impacts on population and/or property that are expected to improve (or worsen), as a result of anticipated climate change and development, are identified and compared. This paper condenses and conveys large amounts of information through visual communication to accelerate the understanding of relationships between local urban conditions and global processes.
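The SOM at the core of this approach is straightforward to sketch; below is a minimal Python implementation of the classical algorithm (the grid size, decay schedules, and stand-in 98-city indicator matrix are assumptions, not the authors' configuration):

    import numpy as np

    def train_som(data, grid_h=6, grid_w=6, n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
        # Classical SOM: draw a sample, find its best-matching unit (BMU), and
        # pull the BMU and its grid neighbors toward the sample, with the
        # learning rate and neighborhood radius decaying over time.
        rng = np.random.default_rng(seed)
        n, d = data.shape
        weights = rng.normal(size=(grid_h, grid_w, d))
        gy, gx = np.mgrid[0:grid_h, 0:grid_w]
        for t in range(n_iter):
            x = data[rng.integers(n)]
            dists = np.linalg.norm(weights - x, axis=2)
            by, bx = np.unravel_index(dists.argmin(), dists.shape)
            lr = lr0 * np.exp(-t / n_iter)
            sigma = sigma0 * np.exp(-t / n_iter)
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
        return weights

    cities = np.random.rand(98, 5)  # stand-in for the 98-city flood-impact indicator matrix
    som = train_som(cities)
    print(som.shape)

After training, each city can be assigned to its best-matching grid node, so cities with similar flood-impact profiles land close together on the map, which is the basis for the between-city comparisons described above.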
Late maturation of visual spatial integration in humans
Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György
1999-01-01
Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
NASA Technical Reports Server (NTRS)
Taylor, J. H.
1973-01-01
Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.
Rapid inverse planning for pressure-driven drug infusions in the brain.
Rosenbluth, Kathryn H; Martin, Alastair J; Mittermeyer, Stephan; Eschermann, Jan; Dickinson, Peter J; Bankiewicz, Krystof S
2013-01-01
Infusing drugs directly into the brain is advantageous over oral or intravenous delivery for large molecules or drugs requiring high local concentrations with low off-target exposure. However, surgeons manually planning the cannula position for drug delivery in the brain face a challenging three-dimensional visualization task. This study presents an intuitive inverse-planning technique to identify the optimal placement that maximizes coverage of the target structure while minimizing the potential for leakage outside the target. The technique was retrospectively validated using intraoperative magnetic resonance imaging of infusions into the striatum of non-human primates and into a tumor in a canine model, and applied prospectively to upcoming human clinical trials.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Human lateral geniculate nucleus and visual cortex respond to screen flicker.
Krolak-Salmon, Pierre; Hénaff, Marie-Anne; Tallon-Baudry, Catherine; Yvert, Blaise; Guénot, Marc; Vighetto, Alain; Mauguière, François; Bertrand, Olivier
2003-01-01
The first electrophysiological study of the human lateral geniculate nucleus (LGN), optic radiation, and striate and extrastriate visual areas is presented in the context of presurgical evaluation of three epileptic patients (Patients 1, 2, and 3). Visual-evoked potentials to pattern reversal and face presentation were recorded with stereotactically implanted depth intracranial electrodes. For Patient 1, electrode anatomical registration, structural magnetic resonance imaging, and electrophysiological responses confirmed the location of two contacts in the geniculate body and one in the optic radiation. The first responses peaked at approximately 40 milliseconds in the LGN in Patient 1 and at 60 milliseconds in the V1/V2 complex in Patients 2 and 3. Moreover, steady-state visual-evoked potentials evoked by the unperceived but commonly experienced video-screen flicker were recorded in the LGN, optic radiation, and V1/V2 visual areas. This study provides topographic and temporal propagation characteristics of steady-state visual-evoked potentials along human visual pathways. We discuss the possible relationship between the oscillating signal recorded in subcortical and cortical areas and the electroencephalogram abnormalities observed in patients suffering from photosensitive epilepsy, particularly video-game epilepsy. The consequences of high temporal frequency visual stimuli delivered by ubiquitous video screens on epilepsy, headaches, and eyestrain must be considered.
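Steady-state responses like the screen-flicker signal described here are usually quantified as a narrow spectral peak at the stimulation frequency; a minimal Python sketch (the sampling rate, 60 Hz refresh, and the toy trace are assumptions for illustration):

    import numpy as np

    fs = 1000.0      # sampling rate in Hz (assumed)
    refresh = 60.0   # screen refresh rate driving the flicker (assumed)
    t = np.arange(0, 10, 1 / fs)

    # Toy LGN/V1 trace: a weak oscillation locked to the screen refresh in noise.
    sig = 0.2 * np.sin(2 * np.pi * refresh * t) + np.random.randn(t.size)

    # The steady-state response shows up as a narrow spectral peak at the
    # stimulation frequency (and its harmonics).
    spec = np.abs(np.fft.rfft(sig)) ** 2 / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    peak = spec[np.argmin(np.abs(freqs - refresh))]
    floor = np.median(spec[(freqs > 50) & (freqs < 70)])
    print(f"SNR at {refresh:.0f} Hz: {peak / floor:.1f}")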
Neural codes of seeing architectural styles
Choo, Heeyoung; Nasar, Jack L.; Nikrahei, Bardia; Walther, Dirk B.
2017-01-01
Images of iconic buildings, such as the CN Tower, instantly transport us to specific places, such as Toronto. Despite the substantial impact of architectural design on people’s visual experience of built environments, we know little about its neural representation in the human brain. In the present study, we have found patterns of neural activity associated with specific architectural styles in several high-level visual brain regions, but not in primary visual cortex (V1). This finding suggests that the neural correlates of the visual perception of architectural styles stem from style-specific complex visual structure beyond the simple features computed in V1. Surprisingly, the network of brain regions representing architectural styles included the fusiform face area (FFA) in addition to several scene-selective regions. Hierarchical clustering of error patterns further revealed that the FFA participated to a much larger extent in the neural encoding of architectural styles than entry-level scene categories. We conclude that the FFA is involved in fine-grained neural encoding of scenes at a subordinate-level, in our case, architectural styles of buildings. This study for the first time shows how the human visual system encodes visual aspects of architecture, one of the predominant and longest-lasting artefacts of human culture. PMID:28071765
How virtual reality works: illusions of vision in "real" and virtual environments
NASA Astrophysics Data System (ADS)
Stark, Lawrence W.
1995-04-01
Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization, and other professional work.
Optical coherence tomography for the diagnosis of human otitis media
NASA Astrophysics Data System (ADS)
Cho, Nam Hyun; Jung, Unsang; Jang, Jeong Hun; Jung, Woonggyu; Kim, Jeehyun; Lee, Sang Heun; Boppart, Stephen A.
2013-05-01
We report the application of optical coherence tomography (OCT) to various types of human cases of otitis media (OM). Whereas conventional diagnostic modalities for OM, including standard and pneumatic otoscopy, are limited to visualizing surface information of the tympanic membrane (TM), OCT is able to effectively reveal the depth-resolved microstructure below the TM with very high spatial resolution. Given the potential advantage of using OCT for diagnosing different types of OM, we examined in vivo the use of 840 nm wavelength spectral-domain OCT (SDOCT) techniques in several human cases, including normal ears and ears with adhesive and effusion types of OM. Distinctive positions were identified in two-dimensional OCT images of abnormal TMs compared to images of a normal TM. Analysis of A-scan (axial depth-scan) data from these positions could successfully identify unique patterns for different constituents within effusions. These OCT images may not only be used for constructing a database for the diagnosis and classification of OM, but also demonstrate the feasibility of and advantages to upgrading current otoscopy techniques.
Barber, Daniel J; Reinerman-Jones, Lauren E; Matthews, Gerald
2015-05-01
Two experiments were performed to investigate the feasibility of robot-to-human communication via a tactile language using a lexicon of standardized tactons (tactile icons) within a sentence. Improvements in autonomous systems technology and a growing demand within military operations are spurring interest in communication via vibrotactile displays. Tactile communication may become an important element of human-robot interaction (HRI), but it requires the development of messaging capabilities approaching the communication power of the speech and visual signals used in the military. In Experiment 1 (N = 38), we trained participants to identify sets of directional, dynamic, and static tactons and tested performance and workload following training. In Experiment 2 (N = 76), we introduced an extended training procedure and tested participants' ability to correctly identify two-tacton phrases; we also investigated the impact of multitasking on performance and workload, and assessed individual difference factors. Experiment 1 showed that participants found dynamic and static tactons difficult to learn, but the enhanced training procedure in Experiment 2 produced competency in performance for all tacton categories. Participants in the latter study also performed well on two-tacton phrases and when multitasking, although some deficits in performance and elevation of workload were observed. Spatial ability predicted some aspects of performance in both studies. Participants may thus be trained to identify both single tactons and tacton phrases, demonstrating the feasibility of developing a tactile language for HRI. Tactile communication may be incorporated into multi-modal communication systems for HRI; it also has potential for human-human communication in challenging environments.
Effects of selection for cooperation and attention in dogs.
Gácsi, Márta; McGreevy, Paul; Kara, Edina; Miklósi, Adám
2009-07-24
It has been suggested that the functional similarities in the socio-cognitive behaviour of dogs and humans emerged as a consequence of comparable environmental selection pressures. Here we use a novel approach to account for the facilitating effect of domestication in dogs and reveal that selection for two factors under genetic influence (visual cooperation and focused attention) may have led independently to increased comprehension of human communicational cues. In Study 1, we observed the performance of three groups of dogs in utilizing the human pointing gesture in a two-way object choice test. We compared breeds selected to work while visually separated from human partners (N = 30, 21 breeds, clustered as the independent worker group) with those selected to work in close cooperation and continuous visual contact with human partners (N = 30, 22 breeds, clustered as the cooperative worker group), and with a group of mongrels (N = 30). Secondly, it has been reported that, in dogs, selective breeding to produce an abnormal shortening of the skull is associated with a more pronounced area centralis (location of greatest visual acuity). In Study 2, breeds with high cephalic index and more frontally placed eyes (brachycephalic breeds, N = 25, 14 breeds) were compared with breeds with low cephalic index and laterally placed eyes (dolichocephalic breeds, N = 25, 14 breeds). In Study 1, cooperative workers were significantly more successful in utilizing the human pointing gesture than both the independent workers and the mongrels. In Study 2, we found that brachycephalic dogs performed significantly better than dolichocephalic breeds. After controlling for environmental factors, we have provided evidence that at least two independent phenotypic traits with certain genetic variability affect the ability of dogs to rely on human visual cues. This finding should caution researchers against making simple generalizations about the effects of domestication and about dog-wolf differences in the utilization of human visual signals.
Ladd, Bryan M; Tackla, Ryan D; Gupte, Akshay; Darrow, David; Sorenson, Jeffery; Zuccarello, Mario; Grande, Andrew W
2017-03-01
Our pilot study evaluated the effectiveness of our telementoring-telescripting model in facilitating seamless communication between surgeons while the operating surgeon is using a microscope. As a first proof of concept, 4 students identified 20 anatomic landmarks on a dry human skull with or without telementoring guidance. To assess the ability to communicate operative information, a senior neurosurgery resident evaluated the students' ability and timing to complete a stepwise craniotomy on a cadaveric head, with and without telementoring guidance; a second portion included exposure of the anterior circulation. The mentor was able to annotate directly onto the operator's visual field, and the annotations were visible to the operator without looking away from the binocular view. The students were familiar with only half (50% ± 10%) of the structures for identification, and none was familiar with the steps to complete a craniotomy before using our system. With the guidance of a remote surgeon projected into the visual field of the microscope, the students were able to correctly identify 100% of the structures and complete a craniotomy. Our system also proved effective in guiding a more experienced neurosurgery resident through complex operative steps associated with exposure of the anterior circulation. This pilot study demonstrated a platform capable of providing effective operative direction to inexperienced operators working under a microscope: a remote mentor was able to view the visual field of the microscope, annotate on the visual stream, and have the annotated stream appear in the binocular view for the operating mentee.
Detection and identification of human targets in radar data
NASA Astrophysics Data System (ADS)
Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.
2007-04-01
Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implanting of seismic sensors in the area being observed because of security or other threats. Radar, however, can operate far away from potential targets and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features of the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment, including ground clutter and human and non-human targets, is developed for testing spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high-clutter environments are discussed, and the SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will serve as a basis for future work is proposed.
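As a toy illustration of the micro-Doppler spectrograms described here (not the authors' 12-point model; the pulse rate, Doppler frequencies, and gait parameters below are invented for the sketch), in Python with NumPy/SciPy:

    import numpy as np
    from scipy.signal import spectrogram

    fs = 1000.0  # pulse repetition frequency in Hz (assumed)
    t = np.arange(0, 2.0, 1.0 / fs)

    # Toy slow-time radar return: a torso Doppler line plus a weaker limb
    # component whose frequency swings with the gait cycle (all values invented).
    torso = np.exp(1j * 2 * np.pi * 60 * t)
    limbs = 0.4 * np.exp(1j * 2 * np.pi * (60 * t + 25 * np.sin(2 * np.pi * 1.8 * t)))
    noise = 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
    returns = torso + limbs + noise

    # Short-time Fourier analysis of the slow-time signal; human micro-Doppler
    # appears as periodic striations around the torso line.
    f, seg_t, Sxx = spectrogram(returns, fs=fs, nperseg=128, noverlap=96,
                                return_onesided=False)
    print(Sxx.shape)  # (frequency bins, time segments)

The periodic striations produced by limb motion are what distinguish human spectrograms from those of vehicles or animals, which is the discriminative cue the detection algorithms above exploit.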
Noninvasive studies of human visual cortex using neuromagnetic techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aine, C.J.; George, J.S.; Supek, S.
1990-01-01
The major goals of noninvasive studies of the human visual cortex are: to increase knowledge of the functional organization of cortical visual pathways; and to develop noninvasive clinical tests for the assessment of cortical function. Noninvasive techniques suitable for studies of the structure and function of human visual cortex include magnetic resonance imaging (MRI), positron emission tomography (PET), single photon emission tomography (SPECT), scalp recorded event-related potentials (ERPs), and event-related magnetic fields (ERFs). The primary challenge faced by noninvasive functional measures is to optimize the spatial and temporal resolution of the measurement and analytic techniques in order to effectively characterize the spatial and temporal variations in patterns of neuronal activity. In this paper we review the use of neuromagnetic techniques for this purpose. 8 refs., 3 figs.
Cognitive issues in searching images with visual queries
NASA Astrophysics Data System (ADS)
Yu, ByungGu; Evens, Martha W.
1999-01-01
In this paper, we propose our image indexing and visual query processing techniques. Our mental images are different from the actual retinal images, and many things, such as personal interests, personal experiences, perceptual context, the characteristics of spatial objects, and so on, affect our spatial perception. These private differences are propagated into our mental images, and so our visual queries become different from the real images that we want to find. This is a hard problem, and few people have tried to work on it. We survey the human mental imagery system and human spatial perception, discuss several kinds of visual queries, and propose our own approach to visual query interpretation and processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A
Interactive data visualization leverages human visual perception and cognition to improve the accuracy and effectiveness of data analysis. When combined with automated data analytics, data visualization systems orchestrate the strengths of humans with the computational power of machines to solve problems neither approach can manage in isolation. In the intelligent transportation system domain, such systems are necessary to support decision making in large and complex data streams. In this chapter, we provide an introduction to several key topics related to the design of data visualization systems. In addition to an overview of key techniques and strategies, we describe practical design principles. The chapter is concluded with a detailed case study involving the design of a multivariate visualization tool.
Guidance of visual attention by semantic information in real-world scenes
Wu, Chia-Chien; Wick, Farahnaz Ahmed; Pomplun, Marc
2014-01-01
Recent research on attentional guidance in real-world scenes has focused on object recognition within the context of a scene. This approach has been valuable for determining some factors that drive the allocation of visual attention and determine visual selection. This article provides a review of experimental work on how different components of context, especially semantic information, affect attentional deployment. We review work from the areas of object recognition, scene perception, and visual search, highlighting recent studies examining semantic structure in real-world scenes. A better understanding on how humans parse scene representations will not only improve current models of visual attention but also advance next-generation computer vision systems and human-computer interfaces. PMID:24567724
ERIC Educational Resources Information Center
Forzano, Lori-Ann B.; Chelonis, John J.; Casey, Caitlin; Forward, Marion; Stachowiak, Jacqueline A.; Wood, Jennifer
2010-01-01
Self-control can be defined as the choice of a larger, more delayed reinforcer over a smaller, less delayed reinforcer, and impulsiveness as the opposite. Previous research suggests that exposure to visual food cues affects adult humans' self-control. Previous research also suggests that food deprivation decreases adult humans' self-control. The…
Lee, Choong‐Hee; Ryu, Jungwon; Lee, Sang‐Hun; Kim, Hakjin
2016-01-01
The hippocampus plays critical roles in both object‐based event memory and spatial navigation, but it is largely unknown whether the left and right hippocampi play functionally equivalent roles in these cognitive domains. To examine the hemispheric symmetry of human hippocampal functions, we used an fMRI scanner to measure BOLD activity while subjects performed tasks requiring both object‐based event memory and spatial navigation in a virtual environment. Specifically, the subjects were required to form object‐place paired associate memory after visiting four buildings containing discrete objects in a virtual plus maze. The four buildings were visually identical, and the subjects used distal visual cues (i.e., scenes) to differentiate the buildings. During testing, the subjects were required to identify one of the buildings when cued with a previously associated object, and when shifted to a random place, the subject was expected to navigate to the previously chosen building. We observed that the BOLD activity foci changed from the left hippocampus to the right hippocampus as task demand changed from identifying a previously seen object (object‐cueing period) to searching for its paired‐associate place (object‐cued place recognition period). Furthermore, the efficient retrieval of object‐place paired associate memory (object‐cued place recognition period) was correlated with the BOLD response of the left hippocampus, whereas the efficient retrieval of relatively pure spatial memory (spatial memory period) was correlated with the right hippocampal BOLD response. These findings suggest that the left and right hippocampi in humans might process qualitatively different information for remembering episodic events in space. PMID:27009679
Lack of oblique astigmatism in the chicken eye.
Maier, Felix M; Howland, Howard C; Ohlendorf, Arne; Wahl, Siegfried; Schaeffel, Frank
2015-04-01
Primate eyes display considerable oblique off-axis astigmatism, which could provide information on the sign of defocus that is needed for emmetropization. The pattern of peripheral astigmatism is not known in the chicken eye, a common model of myopia. Peripheral astigmatism was mapped out over the horizontal visual field in three chickens, 43 days old, and in three near-emmetropic human subjects, average age 34.7 years, using infrared photoretinoscopy. There were no differences in astigmatism between humans and chickens in the central visual field (chicks -0.35D, humans -0.65D, n.s.) but large differences in the periphery (i.e., astigmatism at 40° in the temporal visual field: humans -4.21D, chicks -0.63D, p<0.001, unpaired t-test). The lack of peripheral astigmatism in chicks was not due to differences in corneal shape. Perhaps related to their superior peripheral optics, we found that chickens also had excellent visual performance in the far periphery. Using an automated optokinetic nystagmus paradigm, no difference was observed in spatial visual performance with vision restricted to either the central 67° of the visual field or the periphery beyond 67°. Accommodation was elicited by stimuli presented far out in the visual field. Transscleral images of single infrared LEDs showed no sign of peripheral astigmatism. The chick may be the first terrestrial vertebrate described to lack oblique astigmatism. Since corneal shape cannot account for the difference in astigmatism between humans and chicks, it must trace back to the design of the crystalline lens. The lack of peripheral astigmatism in chicks also excludes a role for it in emmetropization.
Balaram, Pooja; Hackett, Troy A.; Kaas, Jon H.
2013-01-01
Glutamate is the primary neurotransmitter utilized by the mammalian visual system for excitatory neurotransmission. The sequestration of glutamate into synaptic vesicles, and the subsequent transport of filled vesicles to the presynaptic terminal membrane, is regulated by a family of proteins known as vesicular glutamate transporters (VGLUTs). Two VGLUT proteins, VGLUT1 and VGLUT2, characterize distinct sets of glutamatergic projections between visual structures in rodents and prosimian primates, yet little is known about their distributions in the visual system of anthropoid primates. We have examined the mRNA and protein expression patterns of VGLUT1 and VGLUT2 in the visual system of macaque monkeys, an Old World anthropoid primate, in order to determine their relative distributions in the superior colliculus, lateral geniculate nucleus, pulvinar complex, V1 and V2. Distinct expression patterns for both VGLUT1 and VGLUT2 identified architectonic boundaries in all structures, as well as anatomical subdivisions of the superior colliculus, pulvinar complex, and V1. These results suggest that VGLUT1 and VGLUT2 clearly identify regions of glutamatergic input in visual structures, and may identify common architectonic features of visual areas and nuclei across the primate radiation. Additionally, we find that VGLUT1 and VGLUT2 characterize distinct subsets of glutamatergic projections in the macaque visual system; VGLUT2 predominates in driving or feedforward projections from lower order to higher order visual structures while VGLUT1 predominates in modulatory or feedback projections from higher order to lower order visual structures. The distribution of these two proteins suggests that VGLUT1 and VGLUT2 may identify class 1 and class 2 type glutamatergic projections within the primate visual system (Sherman and Guillery, 2006). PMID:23524295
Visual Image Sensor Organ Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.
2014-01-01
This innovation is a system that augments human vision through a technique called "sensing super-position," using a Visual Instrument Sensory Organ Replacement (VISOR) device. The VISOR device translates signals from visual and other sensors (e.g., thermal) into sounds to enable very difficult sensing tasks. Three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. Because the human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, the translation of images into sounds reduces the risk of accidentally filtering out important clues. The VISOR device was developed to augment current state-of-the-art head-mounted (helmet) display systems. It provides the ability to sense beyond the human visible light range, to increase human sensing resolution, to use wider-angle visual perception, and to improve the ability to sense distances. It also allows compensation for movement by the human or changes in the scene being viewed.
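The row-to-frequency, column-to-time mapping sketched below is a generic image-sonification scheme in the spirit of the description above, not NASA's actual VISOR algorithm; the frequency range, log spacing, and image size are assumptions:

    import numpy as np

    def image_to_audio(img, duration=1.0, fs=44100, fmin=200.0, fmax=8000.0):
        # Columns become successive time slices; each row drives one sinusoid
        # whose frequency is fixed (log-spaced, an assumption) and whose
        # amplitude follows that row's pixel brightness in the current column.
        rows, cols = img.shape
        freqs = np.logspace(np.log10(fmin), np.log10(fmax), rows)
        n = int(duration * fs / cols)  # samples per column
        t = np.arange(n) / fs
        audio = np.concatenate([
            (img[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
            for c in range(cols)
        ])
        return audio / np.abs(audio).max()  # normalize to [-1, 1]

    audio = image_to_audio(np.random.rand(32, 64))  # toy 32x64 brightness map
    print(audio.shape)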
Visual preference in a human-reared agile gibbon (Hylobates agilis).
Tanaka, Masayuki; Uchikoshi, Makiko
2010-01-01
Visual preference was evaluated in a male agile gibbon. The subject was raised by humans immediately after birth, but lived with his biological family from one year of age. Visual preference was assessed using a free-choice task in which five or six photographs of different primate species, including humans, were presented on a touch-sensitive screen. The subject touched one of them. Food rewards were delivered irrespective of the subject's responses. We prepared two types of stimulus sets. With set 1, the subject touched photographs of humans more frequently than those of other species, recalling previous findings in human-reared chimpanzees. With set 2, photographs of nine species of gibbons were presented. Chimpanzees touched photographs of white-handed gibbons more than those of other gibbon species. The gibbon subject initially touched photographs of agile gibbons more than white-handed gibbons, but after one and two years his choice patterns resembled the chimpanzees'. The results suggest that, as in chimpanzees, visual preferences of agile gibbons are not genetically programmed but develop through social experience during infancy.
ERIC Educational Resources Information Center
Fischer, Quentin S.; Aleem, Salman; Zhou, Hongyi; Pham, Tony A.
2007-01-01
Prolonged visual deprivation from early childhood to maturity is believed to cause permanent visual impairment. However, there have been case reports of substantial improvement of binocular vision in human adults following lifelong visual impairment or deprivation. These observations, together with recent findings of adult ocular dominance…
Exploration of (hetero)aryl derived thienylchalcones for antiviral and anticancer activities.
Patil, Vikrant; Patil, Siddappa A; Patil, Renukadevi; Bugarin, Alejandro; Beaman, Kenneth; Patil, Shivaputra A
2018-05-23
The search for new antiviral and anticancer agents is essential because of the emergence of drug resistance in recent years. Continuing our efforts to identify new small-molecule antiviral and anticancer agents, we identified chalcones as potent candidates. With the aim of finding broad-acting antiviral and anticancer agents, we investigated substituted aryl/heteroaryl-derived thienyl chalcones. A focused set of thienyl chalcone derivatives II-VI was screened against selected viruses, Hepatitis B virus (HBV), Herpes simplex virus 1 (HSV-1), Human cytomegalovirus (HCMV), Dengue virus 2 (DENV2), Influenza A (H1N1) virus, MERS coronavirus, Poliovirus 1 (PV 1), Rift Valley fever (RVF), Tacaribe virus (TCRV), Venezuelan equine encephalitis virus (VEE), and Zika virus (ZIKV), using the National Institute of Allergy and Infectious Diseases (NIAID) Division of Microbiology and Infectious Diseases (DMID) antiviral screening program. Additionally, a cyclopropylquinoline derivative IV was screened against 60 human cancer cell lines using the Development Therapeutics Program (DTP) of NCI. All thienyl chalcone derivatives II-VI displayed moderate to excellent antiviral activity towards several of the viruses tested. Compounds V and VI turned out to be active towards human cytomegalovirus, for both the normal strain (AD169) and a resistant isolate (GDGr K17). In particular, the cyano derivative V showed very high potency (EC50: <0.05 µM) towards the AD169 strain of HCMV compared to the standard drug Ganciclovir (EC50: 0.12 µM), and showed moderate activity in the secondary assay (AD169; EC50: 2.30 µM). The cyclopropylquinoline derivative IV displayed high potency towards Rift Valley fever virus (RVFV) and Tacaribe virus (TCRV). Against RVFV, it is nearly 28 times more potent in our initial in vitro visual assay (EC50: 0.39 μg/ml) and nearly 17 times more potent in the neutral red assay (EC50: 0.71 μg/ml) than the standard drug Ribavirin (EC50: 11 μg/ml, visual assay; EC50: 12 μg/ml, neutral red assay). Against TCRV, it is nearly 12 times more potent in our initial in vitro visual assay (EC50: >1 μg/ml) and nearly 8 times more potent in the neutral red assay (EC50: >1.3 μg/ml) than Ribavirin (EC50: 12 μg/ml, visual assay; EC50: 9.9 μg/ml, neutral red assay). Additionally, the cyclopropylquinoline derivative IV showed strong growth inhibitory activity towards three major cancer cell lines (colon, breast, and leukemia) and moderate growth inhibition towards the other cancer cell lines screened. In summary, compounds V and VI demonstrated viral inhibition of human cytomegalovirus, whereas the cyclopropylquinoline derivative IV inhibited Rift Valley fever virus and Tacaribe virus and also displayed very good cytotoxicity against colon, breast, and leukemia cell lines in vitro.
Observer performance in semi-automated microbleed detection
NASA Astrophysics Data System (ADS)
Kuijf, Hugo J.; Brundel, Manon; de Bresser, Jeroen; Viergever, Max A.; Biessels, Geert Jan; Geerlings, Mirjam I.; Vincken, Koen L.
2013-03-01
Cerebral microbleeds are small bleedings in the human brain, detectable with MRI. Microbleeds are associated with vascular disease and dementia, and the number of studies involving microbleed detection is increasing rapidly. Visual rating is the current standard for detection, but it is a time-consuming process, especially on high-resolution 7.0 T MR images, has limited reproducibility, and is highly observer dependent. Recently, multiple techniques have been published for the semi-automated detection of microbleeds, attempting to overcome these problems. In the present study, a 7.0 T dual-echo gradient echo MR image was acquired in 18 participants with microbleeds from the SMART study. Two experienced observers identified 54 microbleeds in these participants, using a validated visual rating scale. The radial symmetry transform (RST) can be used for semi-automated detection of microbleeds in 7.0 T MR images. In the present study, the results of the RST were assessed by two observers, and 47 microbleeds were identified: 35 true positives and 12 extra positives (microbleeds that were missed during visual rating). Hence, combining visual rating with RST-assisted scoring, a total of 66 microbleeds could be identified in the 18 participants. The use of the RST increased the average sensitivity of observers from 59% to 69%. More importantly, inter-observer agreement (ICC and Dice's coefficient) increased from 0.85 and 0.64 to 0.98 and 0.96, respectively. Furthermore, the required rating time was reduced from 30 to 2 minutes per participant. By fine-tuning the RST, sensitivities of up to 90% can be achieved, at the cost of extra false positives.
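For intuition about the RST, here is a simplified, single-scale flavor of the radial-symmetry voting idea in Python (loosely after Loy and Zelinsky's fast radial symmetry transform; the radii, normalization, and toy image are assumptions, and the paper's exact variant may differ):

    import numpy as np

    def radial_symmetry(img, radii=(2, 3, 4)):
        # Each pixel casts a vote r steps against its intensity gradient, so
        # votes pile up at the centers of dark, radially symmetric blobs.
        gy, gx = np.gradient(img.astype(float))
        mag = np.hypot(gx, gy) + 1e-9
        ny, nx = img.shape
        yy, xx = np.mgrid[0:ny, 0:nx]
        out = np.zeros((ny, nx))
        for r in radii:
            votes = np.zeros((ny, nx))
            py = np.clip(np.round(yy - r * gy / mag).astype(int), 0, ny - 1)
            px = np.clip(np.round(xx - r * gx / mag).astype(int), 0, nx - 1)
            np.add.at(votes, (py, px), 1.0)
            out += votes / votes.max()
        return out / len(radii)

    img = np.ones((64, 64))
    img[30:34, 30:34] = 0.2  # toy dark blob standing in for a microbleed
    response = radial_symmetry(img)
    print(np.unravel_index(response.argmax(), response.shape))

Thresholding such a response map yields candidate locations that observers then accept or reject, which is the semi-automated workflow evaluated in the study above.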
ERIC Educational Resources Information Center
van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.
2017-01-01
Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…
Dogs respond appropriately to cues of humans' attentional focus.
Virányi, Zsófia; Topál, József; Gácsi, Márta; Miklósi, Adám; Csányi, Vilmos
2004-05-31
Dogs' ability to recognise cues of human visual attention was studied in different experiments. Study 1 was designed to test the dogs' responsiveness to their owner's tape-recorded verbal commands (Down!) while the Instructor (who was the owner of the dog) was facing either the dog or a human partner or none of them, or was visually separated from the dog. Results show that dogs were more ready to follow the command if the Instructor attended them during instruction compared to situations when the Instructor faced the human partner or was out of sight of the dog. Importantly, however, dogs showed intermediate performance when the Instructor was orienting into 'empty space' during the re-played verbal commands. This suggests that dogs are able to differentiate the focus of human attention. In Study 2 the same dogs were offered the possibility to beg for food from two unfamiliar humans whose visual attention (i.e. facing the dog or turning away) was systematically varied. The dogs' preference for choosing the attentive person shows that dogs are capable of using visual cues of attention to evaluate the human actors' responsiveness to solicit food-sharing. The dogs' ability to understand the communicatory nature of the situations is discussed in terms of their social cognitive skills and unique evolutionary history.
Violante, Inês R; Ribeiro, Maria J; Cunha, Gil; Bernardino, Inês; Duarte, João V; Ramos, Fabiana; Saraiva, Jorge; Silva, Eduardo; Castelo-Branco, Miguel
2012-01-01
Neurofibromatosis type 1 (NF1) is one of the most common single gene disorders affecting the human nervous system, with a high incidence of cognitive deficits, particularly visuospatial ones. Nevertheless, neurophysiological alterations in low-level visual processing that could be relevant to explain the cognitive phenotype are poorly understood. Here we used functional magnetic resonance imaging (fMRI) to study early cortical visual pathways in children and adults with NF1. We employed two distinct stimulus types differing in contrast and spatial and temporal frequencies to evoke relatively different activation of the magnocellular (M) and parvocellular (P) pathways. Hemodynamic responses were investigated in retinotopically-defined regions V1, V2 and V3 and then over the acquired cortical volume. Relative to matched control subjects, patients with NF1 showed deficient activation of the low-level visual cortex to both stimulus types. Importantly, this finding was observed for children and adults with NF1, indicating that low-level visual processing deficits do not ameliorate with age. Moreover, only during M-biased stimulation did patients with NF1 fail to deactivate, or even activate, anterior and posterior midline regions of the default mode network. The observation that the magnocellular visual pathway is impaired in NF1 in early visual processing and is specifically associated with a deficient deactivation of the default mode network may provide a neural explanation for the high-order cognitive deficits present in NF1, particularly visuospatial and attentional ones. A link between magnocellular and default mode network processing may generalize to neuropsychiatric disorders where such deficits have been separately identified.
Overview of Human-Centric Space Situational Awareness (SSA) Science and Technology (S&T)
NASA Astrophysics Data System (ADS)
Ianni, J.; Aleva, D.; Ellis, S.
2012-09-01
A number of organizations within government, industry, and academia are researching ways to help humans understand and react to events in space. The problem is both helped and complicated by the fact that there are numerous data sources that need to be planned (i.e., tasked), collected, processed, analyzed, and disseminated. A large part of the research is in support of the Joint Space Operations Center (JSpOC), National Air and Space Intelligence Center (NASIC), and similar organizations. Much recent research has specifically targeted the JSpOC Mission System (JMS), which has provided a unifying software architecture. This paper first outlines areas of science and technology (S&T) related to human-centric space situational awareness (SSA) and space command and control (C2), including: 1. Object visualization, especially of data fused from disparate sources, as well as satellite catalog visualizations that convey the physical relationships between space objects. 2. Data visualization to improve data trend analysis, as in visual analytics and interactive visualization, e.g., satellite anomaly trends over time, space weather visualization, and dynamic visualizations. 3. Workflow support: human-computer interfaces that encapsulate multiple computer services (i.e., algorithms, programs, applications) into a single workflow. 4. Command and control, e.g., tools that support course of action (COA) development and selection, tasking for satellites and sensors, etc. 5. Collaboration: improving individuals' or teams' ability to work with others, e.g., video teleconferencing, shared virtual spaces, file sharing, virtual white-boards, chat, and knowledge search. 6. Hardware/facilities, e.g., optimal layouts for operations centers, ergonomic workstations, immersive displays, interaction technologies, and mobile computing. Secondly, we provide a survey of organizations working in these areas and suggest where more attention may be needed. Although no detailed master plan exists for human-centric SSA and C2, we see little redundancy among the groups supporting SSA human factors at this point.
Kawchuk, Gregory N; Hartvigsen, Jan; Edgecombe, Tiffany; Prasad, Narasimha; van Dieen, Jaap H
2016-03-11
Structural health monitoring (SHM) is an engineering technique used to identify mechanical abnormalities not readily apparent through other means. Recently, SHM has been adapted for use in biological systems, but its invasive nature limits its clinical application. As such, the purpose of this project was to determine if a non-invasive form of SHM could identify structural alterations in the spines of living human subjects. Lumbar spines of 10 twin pairs were visualized by magnetic resonance imaging then assessed by a blinded radiologist to determine whether twin pairs were structurally concordant or discordant. Vibration was then applied to each subject's spine and the resulting response recorded from sensors overlying lumbar spinous processes. The peak frequency, area under the curve and the root mean square were computed from the frequency response function of each sensor. Statistical analysis demonstrated that in twins whose structural appearance was discordant, peak frequency was significantly different between twin pairs while in concordant twins, no outcomes were significantly different. From these results, we conclude that structural changes within the spine can alter its vibration response. As such, further investigation of SHM to identify spinal abnormalities in larger human populations is warranted.
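As a hedged sketch of the outcome measures named above (peak frequency, area under the curve, and root mean square of a frequency response function), the following estimates an FRF between an excitation and a sensor signal and computes the three summaries. The signals and sampling rate are synthetic stand-ins, not the study's recordings.

```python
# Estimate an H1 frequency response function and summarize it with the
# three metrics reported above. All signal parameters are assumptions.
import numpy as np
from scipy import signal

fs = 1000.0                                   # Hz, assumed sampling rate
excitation = np.random.randn(10 * int(fs))    # stand-in for applied vibration
response = np.roll(excitation, 3) + 0.1 * np.random.randn(excitation.size)

f, Pxy = signal.csd(excitation, response, fs=fs, nperseg=1024)
_, Pxx = signal.welch(excitation, fs=fs, nperseg=1024)
frf = np.abs(Pxy / Pxx)                       # magnitude of H1 FRF estimate

peak_frequency = f[np.argmax(frf)]            # frequency of largest response
area_under_curve = np.trapz(frf, f)           # integral of the FRF magnitude
rms = np.sqrt(np.mean(frf ** 2))              # root mean square of the FRF
print(peak_frequency, area_under_curve, rms)
```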
Spatial attention increases high-frequency gamma synchronisation in human medial visual cortex.
Koelewijn, Loes; Rich, Anina N; Muthukumaraswamy, Suresh D; Singh, Krish D
2013-10-01
Visual information processing involves the integration of stimulus and goal-driven information, requiring neuronal communication. Gamma synchronisation is linked to neuronal communication, and is known to be modulated in visual cortex both by stimulus properties and voluntarily-directed attention. Stimulus-driven modulations of gamma activity are particularly associated with early visual areas such as V1, whereas attentional effects are generally localised to higher visual areas such as V4. The absence of a gamma increase in early visual cortex is at odds with robust attentional enhancements found with other measures of neuronal activity in this area. Here we used magnetoencephalography (MEG) to explore the effect of spatial attention on gamma activity in human early visual cortex using a highly effective gamma-inducing stimulus and strong attentional manipulation. In separate blocks, subjects tracked either a parafoveal grating patch that induced gamma activity in contralateral medial visual cortex, or a small line at fixation, effectively attending away from the gamma-inducing grating. Both items were always present, but rotated unpredictably and independently of each other. The rotating grating induced gamma synchronisation in medial visual cortex at 30-70 Hz, and in lateral visual cortex at 60-90 Hz, regardless of whether it was attended. Directing spatial attention to the grating increased gamma synchronisation in medial visual cortex, but only at 60-90 Hz. These results suggest that the generally found increase in gamma activity by spatial attention can be localised to early visual cortex in humans, and that stimulus and goal-driven modulations may be mediated at different frequencies within the gamma range. Copyright © 2013 Elsevier Inc. All rights reserved.
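For illustration only, band-limited gamma power of the kind contrasted above (30-70 Hz versus 60-90 Hz) can be estimated from a single time series via a Welch power spectrum; the synthetic signal and parameters below are assumptions, not the study's MEG data.

```python
# Estimate power in the two gamma sub-bands from a toy 65 Hz rhythm.
import numpy as np
from scipy import signal

fs = 600.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 65 * t) + np.random.randn(t.size)  # toy gamma rhythm

f, psd = signal.welch(x, fs=fs, nperseg=2048)

def band_power(f, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    m = (f >= lo) & (f <= hi)
    return np.trapz(psd[m], f[m])

print("30-70 Hz power:", band_power(f, psd, 30, 70))
print("60-90 Hz power:", band_power(f, psd, 60, 90))
```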
Occipital White Matter Tracts in Human and Macaque
Takemura, Hiromasa; Pestilli, Franco; Weiner, Kevin S.; Landi, Sofia M.; Sliwa, Julia; Ye, Frank Q.; Barnett, Michael A.; Leopold, David A.; Freiwald, Winrich A.; Logothetis, Nikos K.; Wandell, Brian A.
2017-01-01
We compare several major white-matter tracts in human and macaque occipital lobe using diffusion magnetic resonance imaging. The comparison suggests similarities but also significant differences in the tracts. There are several apparently homologous tracts in the 2 species, including the vertical occipital fasciculus (VOF), optic radiation, forceps major, and inferior longitudinal fasciculus (ILF). There is one large human tract, the inferior fronto-occipital fasciculus, with no corresponding fasciculus in macaque. We could identify the macaque VOF (mVOF), which has been little studied. Its position is consistent with classical invasive anatomical studies by Wernicke. VOF homology is supported by similarity of the endpoints in V3A and ventral V4 across species. The mVOF fibers intertwine with the dorsal segment of the ILF, but the human VOF appears to be lateral to the ILF. These similarities and differences between the occipital lobe tracts will be useful in establishing which circuitry in the macaque can serve as an accurate model for human visual cortex. PMID:28369290
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
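A minimal sketch of a linear-amplifier-model fit to a threshold-versus-noise function follows. Under the LAM, squared contrast threshold grows linearly with external noise variance, c_t^2 = (sigma_ext^2 + N_int)/eta, so training-induced recovery of the kind reported above would appear as a reduced internal-noise estimate N_int. The data points below are invented for illustration.

```python
# Fit the LAM to a toy threshold-versus-noise (TvN) function.
import numpy as np
from scipy.optimize import curve_fit

sigma_ext = np.array([0.0, 0.02, 0.04, 0.08, 0.16])      # external noise levels
thresholds = np.array([0.05, 0.055, 0.07, 0.11, 0.21])   # contrast thresholds

def lam(sigma, n_int, eta):
    """LAM: threshold = sqrt((external variance + internal noise) / efficiency)."""
    return np.sqrt((sigma ** 2 + n_int) / eta)

(n_int, eta), _ = curve_fit(lam, sigma_ext, thresholds, p0=[1e-3, 1.0])
print(f"internal noise variance: {n_int:.4g}, efficiency: {eta:.3g}")
```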
Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields.
Stevens, Andrew H; Butkiewicz, Thomas; Ware, Colin
2017-01-01
Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations.
Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto
2016-01-01
We constructed a four-dimensional human model that visualizes the structure of a whole human body, including its inner structures, in real time, allowing us to analyze human dynamic changes in the temporal, spatial, and quantitative domains. To verify whether our model generated changes according to real human body dynamics, we measured a participant's skin expansion and compared it to that of the model under the same body movement. We also made a contribution to the field of orthopedics, devising a display method that enables the observer to more easily observe the changes made in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
DaVIE: Database for the Visualization and Integration of Epigenetic data
Fejes, Anthony P.; Jones, Meaghan J.; Kobor, Michael S.
2014-01-01
One of the challenges in the analysis of large data sets, particularly in a population-based setting, is the ability to perform comparisons across projects. This has to be done in such a way that the integrity of each individual project is maintained, while ensuring that the data are comparable across projects. These issues are beginning to be observed in human DNA methylation studies, as the Illumina 450k platform and next-generation sequencing-based assays grow in popularity and decrease in price. This increase in productivity is enabling new insights into epigenetics, but also requires the development of pipelines and software capable of handling the large volumes of data. The specific problems inherent in creating a platform for the storage, comparison, integration, and visualization of DNA methylation data include data storage, algorithm efficiency, and the ability to interpret the results to derive biological meaning from them. Databases provide a ready-made solution to these issues, but as yet no tools exist that leverage these advantages while providing an intuitive user interface for interpreting results in a genomic context. We have addressed this void by integrating a database to store DNA methylation data with a web interface to query and visualize the database and a set of libraries for more complex analysis. The resulting platform is called DaVIE: Database for the Visualization and Integration of Epigenetics data. DaVIE can use data culled from a variety of sources, and the web interface includes the ability to group samples by sub-type, compare multiple projects, and visualize genomic features in relation to sites of interest. We have used DaVIE to identify patterns of DNA methylation in specific projects and across different projects, identify outlier samples, and cross-check differentially methylated CpG sites identified in specific projects across large numbers of samples. A demonstration server has been set up using GEO data at http://echelon.cmmt.ubc.ca/dbaccess/, with login “guest” and password “guest.” Groups may download and install their own version of the server following the instructions on the project's wiki. PMID:25278960
Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah
2016-03-01
The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual, and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that no single visual format is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences, depending on the exact message to be communicated. We have arrived at recommendations for the use of visual displays for benefit-risk communication. These recommendations concern the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider assessing these to be a key task in creating any visual. Next, we propose specific visual formats of interest to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.
Effectiveness of Video Demonstration over Conventional Methods in Teaching Osteology in Anatomy.
Viswasom, Angela A; Jobby, Abraham
2017-02-01
Technology and its applications are among the most rapidly changing things in the world, and so it is in the field of medical education. This study evaluated whether conventional methods can stand the test of technology: a comparative study of the traditional method of teaching osteology in human anatomy against an innovative visually aided method. The study was conducted on 94 students admitted to the MBBS 2014-2015 batch of Travancore Medical College. The students were divided into two academically validated groups and were taught using conventional and video demonstration techniques in a systematic manner, after which post-evaluation tests were conducted. Analysis of the mark pattern revealed that the group taught using the traditional method scored better than the group taught with the visually aided method. Feedback analysis showed that the students were able to identify bony features better, with clear visualisation and a three-dimensional view, when taught using the video demonstration method. The students identified the visually aided method as the more interesting one for learning, which helped them apply the knowledge gained. In most of the questions asked, the two methods of teaching were found to be comparable on the same scale. As the study ends, we find that no new technique can substitute for time-tested techniques of teaching and learning; the ideal method would incorporate newer multimedia techniques into traditional classes.
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters, and it is often difficult to surmise which input parameters have the greatest impact on the model's prediction, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
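A hedged example of the two-dimensional mapping described above: project a high-dimensional input space down to two dimensions (here with PCA, the simplest such technique) so the dataset can be plotted and inspected for input variables that drive the output. The data are synthetic stand-ins.

```python
# Reduce a 20-dimensional dataset to 2D and color points by outcome.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # 500 samples, 20 input parameters
y = (X[:, 0] + 0.5 * X[:, 3] > 0)        # outcome driven by two of the inputs

X2 = PCA(n_components=2).fit_transform(X)  # map to two dimensions
plt.scatter(X2[:, 0], X2[:, 1], c=y, s=10)
plt.xlabel("PC 1"); plt.ylabel("PC 2")
plt.show()                                # clusters hint at influential inputs
```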
Yan, Shuai; Cui, Sishan; Ke, Kun; Zhao, Bixing; Liu, Xiaolong; Yue, Shuhua; Wang, Ping
2018-06-05
Lipid metabolism is dysregulated in human cancers. Analytical tools that can identify and quantitatively map metabolites in unprocessed human tissues with submicrometer resolution are highly desired. Here, we implemented analytical hyperspectral stimulated Raman scattering microscopy to map lipid metabolites in situ in normal and cancerous liver tissues from 24 patients. In contrast to the conventional wisdom that unsaturated lipid accumulation enhances tumor cell survival and proliferation, we unexpectedly visualized a substantial amount of saturated fat accumulated in cancerous liver tissues, which was not seen in the majority of their adjacent normal tissues. Further analysis by mass spectrometry confirmed significantly elevated levels of glyceryl tripalmitate specifically in cancerous liver. These findings suggest that the aberrantly accumulated saturated fat has great potential to be a metabolic biomarker for liver cancer.
Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu
2018-05-08
Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as like hearing a soundtrack of the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.
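The multiple logistic regression reported above can be sketched as follows; the predictor names match the abstract, but the data are simulated, so the coefficients are illustrative only.

```python
# Toy multiple logistic regression: predict auditory hallucinations from
# sex, hearing impairment, and co-occurring symptoms. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 124                                   # cohort size borrowed from the abstract
X = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "hearing_impairment": rng.integers(0, 2, n),
    "visual_hallucinations": rng.integers(0, 2, n),
    "depression": rng.integers(0, 2, n),
})
# Simulate an outcome loosely shaped like the reported associations.
logit_p = (-1.5 + 0.8 * X["female"] + 1.2 * X["hearing_impairment"]
           + 1.5 * X["visual_hallucinations"] + 0.7 * X["depression"])
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

model = sm.Logit(y, sm.add_constant(X)).fit()
print(model.summary())
print("odds ratios:\n", np.exp(model.params))
```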
Visual Analytics of Surveillance Data on Foodborne Vibriosis, United States, 1973–2010
Sims, Jennifer N.; Isokpehi, Raphael D.; Cooper, Gabrielle A.; Bass, Michael P.; Brown, Shyretha D.; St John, Alison L.; Gulig, Paul A.; Cohly, Hari H.P.
2011-01-01
Foodborne illnesses caused by microbial and chemical contaminants in food are a substantial health burden worldwide. In 2007, human vibriosis (non-cholera Vibrio infections) became a notifiable disease in the United States. In addition, Vibrio species are among the 31 major known pathogens transmitted through food in the United States. Diverse surveillance systems for foodborne pathogens also track outbreaks, illnesses, hospitalization and deaths due to non-cholera vibrios. Considering the recognition of vibriosis as a notifiable disease in the United States and the availability of diverse surveillance systems, there is a need for the development of easily deployed visualization and analysis approaches that can combine diverse data sources in an interactive manner. Current efforts to address this need are still limited. Visual analytics is an iterative process conducted via visual interfaces that involves collecting information, data preprocessing, knowledge representation, interaction, and decision making. We have utilized public domain outbreak and surveillance data sources covering 1973 to 2010, as well as visual analytics software to demonstrate integrated and interactive visualizations of data on foodborne outbreaks and surveillance of Vibrio species. Through the data visualization, we were able to identify unique patterns and/or novel relationships within and across datasets regarding (i) causative agent; (ii) foodborne outbreaks and illness per state; (iii) location of infection; (iv) vehicle (food) of infection; (v) anatomical site of isolation of Vibrio species; (vi) patients and complications of vibriosis; (vii) incidence of laboratory-confirmed vibriosis and V. parahaemolyticus outbreaks. The additional use of emerging visual analytics approaches for interaction with data on vibriosis, including non-foodborne related disease, can guide disease control and prevention as well as ongoing outbreak investigations. PMID:22174586
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
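A toy version of the image-to-sound mapping described above, assuming one common design: image columns are played left to right in time, each row is assigned an audio frequency, and pixel brightness controls the amplitude of that row's tone. All parameters (frequency range, durations) are illustrative assumptions, not the VISOR's actual mapping.

```python
# Map a 2D brightness image to an audio signal as a function of
# frequency (rows) and time (columns).
import numpy as np

def image_to_audio(img, fs=22050, col_dur=0.05, f_lo=200.0, f_hi=4000.0):
    img = img.astype(float) / max(img.max(), 1e-9)     # normalize brightness
    n_rows, n_cols = img.shape
    freqs = np.geomspace(f_hi, f_lo, n_rows)           # top rows -> high pitch
    t = np.arange(int(col_dur * fs)) / fs
    tones = np.sin(2 * np.pi * freqs[:, None] * t)     # one tone per row
    cols = [img[:, c] @ tones for c in range(n_cols)]  # brightness-weighted mix
    audio = np.concatenate(cols)                       # play columns in time
    return audio / np.abs(audio).max()

audio = image_to_audio(np.random.rand(64, 64))          # stand-in "image"
```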
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
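A minimal sketch of a spatiotemporal CNN of the general kind discussed above, mapping a short video clip to action-class scores with 3D convolutions over (time, height, width). The architecture is an assumption for illustration, not the paper's model.

```python
# Tiny 3D-convolutional action classifier over video clips.
import torch
import torch.nn as nn

class TinyActionNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # downsample time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clips):                     # clips: (batch, 3, T, H, W)
        return self.classifier(self.features(clips).flatten(1))

clips = torch.randn(2, 3, 16, 64, 64)             # two 16-frame RGB clips
print(TinyActionNet()(clips).shape)               # -> torch.Size([2, 10])
```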
SBCDDB: Sleeping Beauty Cancer Driver Database for gene discovery in mouse models of human cancers
Mann, Michael B
2018-01-01
Large-scale oncogenomic studies have identified few frequently mutated cancer drivers and hundreds of infrequently mutated drivers. Defining the biological context for rare driving events is fundamentally important to increasing our understanding of the druggable pathways in cancer. Sleeping Beauty (SB) insertional mutagenesis is a powerful gene discovery tool used to model human cancers in mice. Our lab and others have published a number of studies that identify cancer drivers from these models using various statistical and computational approaches. Here, we have integrated SB data from primary tumor models into an analysis and reporting framework, the Sleeping Beauty Cancer Driver DataBase (SBCDDB, http://sbcddb.moffitt.org), which identifies drivers in individual tumors or tumor populations. Unique to this effort, the SBCDDB utilizes a single, scalable, statistical analysis method that enables data to be grouped by different biological properties. This allows for SB drivers to be evaluated (and re-evaluated) under different contexts. The SBCDDB provides visual representations highlighting the spatial attributes of transposon mutagenesis and couples this functionality with analysis of gene sets, enabling users to interrogate relationships between drivers. The SBCDDB is a powerful resource for comparative oncogenomic analyses with human cancer genomics datasets for driver prioritization. PMID:29059366
Reduction of Complexity: An Aspect of Network Visualization
2006-12-01
The aim of this research is to identify strategies for the visualization of network information. A distinction can be made between visual communication and visual exploration (MacEachern 1994). Visual communication deals with how to visualize the results of different kinds of analysis, i.e., visualization in the case…
Salient sounds activate human visual cortex automatically.
McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A
2013-05-22
Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.
NASA Astrophysics Data System (ADS)
Qin, Jia; An, Lin; Wang, Ruikang
2011-03-01
Adequate functioning of the peripheral microvasculature in human skin is necessary to maintain optimal tissue perfusion and preserve normal hemodynamic function. A growing body of evidence suggests that vascular abnormalities may be directly related to several dermatologic diseases, such as psoriasis, port-wine stain, and skin cancer. New in vivo imaging modalities to aid volumetric microvascular blood perfusion imaging are therefore highly desirable. To address this need, we demonstrate the capability of ultra-high sensitive optical microangiography (UHS-OMAG) to allow blood flow visualization and quantification of vascular densities of lesional psoriasis areas in human subjects in vivo. The microcirculation networks of lesional and non-lesional skin were obtained after post-processing the data sets captured by the system. With our image resolution (~20 μm), we could compare these two types of microcirculation networks both qualitatively and quantitatively. The B-scan (lateral, or x-direction) cross-section images, en-face (x-y plane) images, and volumetric in vivo perfusion maps of lesional and non-lesional skin areas were obtained using UHS-OMAG. Characteristic perfusion map features were identified that distinguished lesional from non-lesional skin areas. A statistically significant difference between the vascular densities of lesional and non-lesional skin areas was also found using a histogram-based analysis. UHS-OMAG has the potential to differentiate normal from abnormal human skin microcirculation non-invasively, with high speed and sensitivity. The presented data demonstrate the great potential of UHS-OMAG for detecting and diagnosing skin diseases such as psoriasis in human subjects.
Sensitive periods for the functional specialization of the neural system for human face processing.
Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide
2013-10-15
The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.
Are New Image Quality Figures of Merit Needed for Flat Panel Displays?
1998-06-01
The American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS 100-1988) adopted the MTFA as the standard…
Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.
Xiong, Chunshui; Huang, Lei; Liu, Changping
2014-01-01
Most existing vision-based methods for gaze tracking need a tedious calibration process in which subjects are required to fixate on one or several specific points in space. However, it is hard for subjects to cooperate with this process, especially children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To our knowledge, this is the first time vision-based gaze tracking has been applied to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, Gaussian mixture models (GMM) are employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for measuring visual acuity in human infants.
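The classification step described above can be sketched by fitting one Gaussian mixture per labeled gaze behavior offline and classifying a new PCCR feature vector by likelihood; the two-dimensional features and cluster locations below are synthetic assumptions, not the paper's actual feature space.

```python
# Per-class GMMs for gaze-behavior classification on toy PCCR features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic PCCR feature vectors for two labeled gaze behaviors.
toward_stimulus = rng.normal([-1.0, 0.0], 0.2, size=(200, 2))
away_from_stimulus = rng.normal([1.0, 0.0], 0.2, size=(200, 2))

# One mixture per labeled behavior, fit offline on healthy-eye data.
gmm_toward = GaussianMixture(n_components=2, random_state=0).fit(toward_stimulus)
gmm_away = GaussianMixture(n_components=2, random_state=0).fit(away_from_stimulus)

sample = np.array([[0.9, -0.1]])           # new PCCR feature to classify
label = "toward" if gmm_toward.score(sample) > gmm_away.score(sample) else "away"
print(label)                               # higher likelihood wins -> "away"
```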
Fox, Christopher J; Barton, Jason J S
2007-01-05
The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.
Timing of target discrimination in human frontal eye fields.
O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent
2004-01-01
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
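The d' sensitivity measure reported above is computed from hit and false-alarm rates over the four trial types the abstract lists; a small example with invented trial counts:

```python
# Signal-detection sensitivity (d') from trial counts.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction keeps rates away from exactly 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```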
Infrared imaging of the crime scene: possibilities and pitfalls.
Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G
2013-09-01
All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Goldston, M. Jenice; Nichols, Sharon
2009-04-01
This study situated in a Southern resegregated Black middle school involved four Black teachers and two White science educators' use of photonarratives to envision culturally relevant science pedagogy. Two questions guided the study: (1) What community referents are important for conceptualizing culturally relevant practices in Black science classrooms? and (2) How do teachers' photonarratives serve to open conversations and notions of culturally relevant science practices? The research methodologically drew upon memory-work, Black feminism, critical theory, visual methodology, and narrative inquiry as “portraiture.” Issues of positionality and identity proved to be central to this work, as three luminaries portray Black teachers' insights about supports and barriers to teaching and learning science. The community referents identified were associated with church and its oral traditions, inequities of the market place in meeting their basic human needs, and community spaces.
Discovering Network Structure Beyond Communities
NASA Astrophysics Data System (ADS)
Nishikawa, Takashi; Motter, Adilson E.
2011-11-01
To understand the formation, evolution, and function of complex systems, it is crucial to understand the internal organization of their interaction networks. Partly due to the impossibility of visualizing large complex networks, resolving network structure remains a challenging problem. Here we overcome this difficulty by combining the visual pattern recognition ability of humans with the high processing speed of computers to develop an exploratory method for discovering groups of nodes characterized by common network properties, including but not limited to communities of densely connected nodes. Without any prior information about the nature of the groups, the method simultaneously identifies the number of groups, the group assignment, and the properties that define these groups. The results of applying our method to real networks suggest the possibility that most group structures lurk undiscovered in the fast-growing inventory of social, biological, and technological networks of scientific interest.
CardioTF, a database of deconstructing transcriptional circuits in the heart system
Zhen, Yisong
2016-01-01
Background: Information on cardiovascular gene transcription is fragmented and far behind the present requirements of the systems biology field. To create a comprehensive source of data for cardiovascular gene regulation and to facilitate a deeper understanding of genomic data, the CardioTF database was constructed. The purpose of this database is to collate information on cardiovascular transcription factors (TFs), position weight matrices (PWMs), and enhancer sequences discovered using the ChIP-seq method. Methods: The Naïve-Bayes algorithm was used to classify literature and identify all PubMed abstracts on cardiovascular development. The natural language learning tool GNAT was then used to identify corresponding gene names embedded within these abstracts. Local Perl scripts were used to integrate and dump data from public databases into the MariaDB management system (MySQL). In-house R scripts were written to analyze and visualize the results. Results: Known cardiovascular TFs from humans and human homologs from fly, Ciona, zebrafish, frog, chicken, and mouse were identified and deposited in the database. PWMs from Jaspar, hPDI, and UniPROBE databases were deposited in the database and can be retrieved using their corresponding TF names. Gene enhancer regions from various sources of ChIP-seq data were deposited into the database and were able to be visualized by graphical output. Besides biocuration, mouse homologs of the 81 core cardiac TFs were selected using a Naïve-Bayes approach and then by intersecting four independent data sources: RNA profiling, expert annotation, PubMed abstracts and phenotype. Discussion: The CardioTF database can be used as a portal to construct transcriptional network of cardiac development. Availability and Implementation: Database URL: http://www.cardiosignal.org/database/cardiotf.html. PMID:27635320
Tiwari, Saumya; Reddy, Vijaya B.; Bhargava, Rohit; Raman, Jaishankar
2015-01-01
Rejection is a common problem after cardiac transplants leading to significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need of stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real-time in situ imaging systems, which can assist interventionists and surgeons actively during procedures. PMID:25932912
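As an illustrative stand-in for the pipeline described above, the following trains a Gaussian naive Bayes classifier on per-pixel spectral features and reads the detection probability at a 10% false-alarm rate off an ROC curve; the spectra are random surrogates, not FT-IR measurements.

```python
# Naive Bayes classification of per-pixel spectra, scored with an ROC curve.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n, n_bands = 2000, 40                     # pixels x spectral bands (assumed)
X = rng.normal(size=(n, n_bands))
y = rng.integers(0, 2, n)
X[y == 1] += 0.8                          # class-dependent spectral shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = GaussianNB().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, scores)
print("P(detection) at 10% false alarms:", tpr[fpr <= 0.10].max())
```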
Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis
2014-07-01
Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.
A Visual Analytic for Improving Human Terrain Understanding
2013-06-01
…and allow human interpretation. [Figure: HDPT component overview: PostgreSQL database, Apache Tomcat web server, global graph web application.]
NASA Technical Reports Server (NTRS)
Otto, C. A.; Norsk, P.; Shelhamer, M. J.; Davis, J. R.
2015-01-01
The Visual Impairment Intracranial Pressure (VIIP) syndrome is currently NASA's number one human space flight risk. The syndrome, which is related to microgravity exposure, manifests with changes in visual acuity (hyperopic shifts, scotomas) and changes in eye structure (optic disc edema, choroidal folds, cotton wool spots, globe flattening, and distended optic nerve sheaths). In some cases, elevated cerebrospinal fluid pressure has been documented postflight, reflecting increased intracranial pressure (ICP). While the eye appears to be the main affected end organ of this syndrome, the ocular effects are thought to be related to the effect of cephalad fluid shift on the vascular system and the central nervous system. The leading hypotheses for the development of VIIP involve microgravity-induced head-ward fluid shifts along with a loss of gravity-assisted drainage of venous blood from the brain, both leading to cephalic congestion and increased ICP. Although not all crewmembers have manifested clinical signs or symptoms of the VIIP syndrome, it is assumed that all astronauts exposed to microgravity have some degree of ICP elevation in-flight. Prolonged elevations of ICP can cause long-term reduced visual acuity and loss of peripheral visual fields, and have been reported to cause mild cognitive impairment in the analog terrestrial population of Idiopathic Intracranial Hypertension (IIH). These potentially irreversible health consequences underscore the importance of identifying the factors that lead to this syndrome and mitigating them.
Two-color mixing for classifying agricultural products for safety and quality
NASA Astrophysics Data System (ADS)
Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin; Chan, Diane E.
2006-02-01
We show that the chromaticness of the visual signal that results from the two-color mixing achieved through an optically enhanced binocular device is directly related to the band ratio of light intensity at the two selected wavebands. A technique that implements the band-ratio criterion in a visual device by using two-color mixing is presented here. The device will allow inspectors to identify targets visually in accordance with a two-wavelength band ratio. It is a method of inspection by human vision assisted by an optical device, which offers greater flexibility and greater cost savings than a multispectral machine vision system that implements the band-ratio criterion. With proper selection of the two narrow wavebands, discrimination by chromaticness that is directly related to the band ratio can work well. An example application of this technique is given for the inspection of chicken carcasses afflicted with various diseases. An optimal pair of wavelengths of 454 and 578 nm was selected to optimize differences in saturation and hue in CIE LUV color space among different types of targets. Another example application, for the detection of chilling injury in cucumbers, is also given; here the selected wavelength pair was 504 and 652 nm. The novel two-color mixing technique for visual inspection can be included in visual devices for various applications, ranging from target detection to food safety inspection.
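To make the band-ratio criterion concrete, here is a minimal Python sketch. The 454/578 nm wavelengths come from the abstract above; the per-pixel intensities and the decision threshold are invented for illustration (in the actual device the ratio manifests as a hue/saturation shift seen by the inspector, not as a number).

```python
import numpy as np

def band_ratio(i_band1: np.ndarray, i_band2: np.ndarray) -> np.ndarray:
    """Ratio of light intensity at two selected narrow wavebands."""
    return i_band1 / np.clip(i_band2, 1e-9, None)

# Toy per-pixel intensities at 454 nm and 578 nm (hypothetical values).
i_454 = np.array([0.42, 0.55, 0.18, 0.61])
i_578 = np.array([0.50, 0.52, 0.45, 0.30])

ratio = band_ratio(i_454, i_578)
THRESHOLD = 1.0          # hypothetical cutoff separating normal from diseased
flagged = ratio > THRESHOLD
print(ratio, flagged)
```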
Gage, Julia C; Rodriguez, Ana Cecilia; Schiffman, Mark; Adadevoh, Sydney; Larraondo, Manuel J Alvarez; Chumworathayi, Bandit; Lejarza, Sandra Vargas; Araya, Luis Villegas; Garcia, Francisco; Budihas, Scott R; Long, Rodney; Katki, Hormuzd A; Herrero, Rolando; Burk, Robert D; Jeronimo, Jose
2009-05-01
To estimate the efficacy of a visual triage of human papillomavirus (HPV)-positive women to either immediate cryotherapy or referral if not treatable (eg, invasive cancer, large precancers), we evaluated visual triage in HPV-positive women aged 25 to 55 years from the 10,000-woman Guanacaste Cohort Study (n = 552). Twelve Peruvian midwives and 5 international gynecologists assessed treatability by cryotherapy using digitized high-resolution cervical images taken at enrollment. The reference standard of treatability was determined by 2 lead gynecologists from the entire 7-year follow-up of the women. Women diagnosed with histologic cervical intraepithelial neoplasia grade 2 or worse or 5-year persistence of carcinogenic HPV infection were defined as needing treatment. Midwives and gynecologists judged 30.8% and 41.2% of women not treatable by cryotherapy, respectively (P < 0.01). Among 149 women needing treatment, midwives and gynecologists correctly identified 57.5% and 63.8% (P = 0.07 for difference) of 71 women judged not treatable by the lead gynecologists and 77.6% and 59.7% (P < 0.01 for difference) of 78 women judged treatable by cryotherapy. The proportion of women judged not treatable by a reviewer varied widely, ranging from 18.6% to 61.1%. Interrater agreement was poor, with mean pairwise overall agreement of 71.4% and 66.3% and kappas of 0.33 and 0.30 for midwives and gynecologists, respectively. In future "screen-and-treat" cervical cancer prevention programs using HPV testing and cryotherapy, practitioners will visually triage HPV-positive women. The suboptimal performance of visual triage suggests that screen-and-treat programs using cryotherapy might be insufficient for treating precancerous lesions. Improved, low-technology triage methods and/or improved safe and low-technology treatment options are needed.
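For readers unfamiliar with the agreement statistics quoted above, the following sketch computes pairwise overall agreement and Cohen's kappa for two hypothetical raters; the ratings are fabricated, and only the standard formulas are shown, not the study's data.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)                                  # observed agreement
    labels = np.union1d(a, b)
    p_e = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater1 = [1, 1, 0, 1, 0, 0, 1, 1]   # 1 = treatable by cryotherapy, 0 = not
rater2 = [1, 0, 0, 1, 0, 1, 1, 0]
print("overall agreement:", np.mean(np.array(rater1) == np.array(rater2)))
print("kappa:", cohens_kappa(rater1, rater2))
```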
A bibliometric and visual analysis of global geo-ontology research
NASA Astrophysics Data System (ADS)
Li, Lin; Liu, Yu; Zhu, Haihong; Ying, Shen; Luo, Qinyao; Luo, Heng; Kuai, Xi; Xia, Hui; Shen, Hang
2017-02-01
In this paper, the results of a bibliometric and visual analysis of geo-ontology research articles collected from the Web of Science (WOS) database between 1999 and 2014 are presented. The numbers of national institutions and published papers are visualized and a global research heat map is drawn, illustrating an overview of global geo-ontology research. In addition, we present a chord diagram of countries and perform a visual cluster analysis of a knowledge co-citation network of references, disclosing potential academic communities and identifying key points, main research areas, and future research trends. The International Journal of Geographical Information Science, Progress in Human Geography, and Computers & Geosciences are the most active journals. The USA makes the largest contributions to geo-ontology research by virtue of its highest numbers of independent and collaborative papers, and its dominance was also confirmed in the country chord diagram. The majority of institutions are in the USA, Western Europe, and Eastern Asia. Wuhan University, the University of Munster, and the Chinese Academy of Sciences are notable geo-ontology institutions. Keywords such as "Semantic Web," "GIS," and "space" have attracted a great deal of attention. "Semantic granularity in ontology-driven geographic information systems," "Ontologies in support of activities in geographical space," and "A translation approach to portable ontology specifications" have the highest cited centrality. Geographical space, computer-human interaction, and ontology cognition are the three main research areas of geo-ontology. The semantic mismatch between the producers and users of ontology data, as well as error propagation in interdisciplinary and cross-linguistic data reuse, needs to be solved. In addition, the development of geo-ontology modeling primitives based on OWL (Web Ontology Language) and methods to automatically rework data in the Semantic Web are needed. Furthermore, the topological relations between geographical entities still require further study.
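As a rough illustration of the co-citation analysis described above, the sketch below builds a reference co-citation network and ranks references by betweenness centrality (one common reading of "cited centrality"); the three-record bibliography and the reference IDs are fabricated, and the authors' actual toolchain is not reproduced.

```python
from itertools import combinations
import networkx as nx

# Each record lists the references cited by one WOS article (IDs are fabricated).
records = [
    {"RefA_1993", "RefB_2002", "RefC_2001"},
    {"RefA_1993", "RefB_2002", "RefD_1991"},
    {"RefB_2002", "RefC_2001", "RefD_1991"},
]

G = nx.Graph()
for refs in records:
    # Two references are co-cited when they appear in the same article's list.
    for u, v in combinations(sorted(refs), 2):
        w = G.get_edge_data(u, v, default={}).get("weight", 0)
        G.add_edge(u, v, weight=w + 1)

# Betweenness centrality as a proxy for the "cited centrality" of key references.
for ref, c in sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1]):
    print(ref, round(c, 3))
```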
Kim, K; Lee, S
2015-05-01
Diagnosis of skin conditions depends on the assessment of skin surface properties that are better represented by tactile properties, such as stiffness, roughness, and friction, than by visual information. For this reason, adding tactile feedback to existing vision-based diagnosis systems can help dermatologists diagnose skin diseases or disorders more accurately. The goal of our research was therefore to develop a tactile rendering system for skin examinations by dynamic touch. Our development consists of two stages: converting a single image to a 3D haptic surface and rendering the generated haptic surface in real-time. Conversion from single 2D images to 3D surfaces was implemented using human perception data collected in a psychophysical experiment that measured human visual and haptic sensibility to 3D skin surface changes. For the second stage, we utilized real skin biomechanical properties reported in prior studies. Our tactile rendering system is a standalone system that can be used with any single camera and haptic feedback device. We evaluated the performance of our system by conducting an identification experiment with three different skin images and five subjects. The participants had to identify one of the three skin surfaces using a haptic device (Falcon) only; no visual cue was provided during the experiment. The results indicate that our system renders discernible tactile feedback for different skin surfaces. Our system uses only a single skin image and automatically generates a 3D haptic surface based on human haptic perception. Realistic skin interactions can be provided in real-time for the purposes of skin diagnosis, simulation, or training. Our system can also be used for other applications such as virtual reality and cosmetic applications. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Using a System Identification Approach to Investigate Subtask Control during Human Locomotion
Logan, David; Kiemel, Tim; Jeka, John J.
2017-01-01
Here we apply a control-theoretic view of movement to the behavior of human locomotion, with the goal of using perturbations to learn about subtask control. Controlling one's speed and maintaining upright posture are two critical subtasks, or underlying functions, of human locomotion. How the nervous system simultaneously controls these two subtasks was investigated in this study. Continuous visual and mechanical perturbations were applied concurrently to subjects (n = 20) as probes to investigate these two subtasks during treadmill walking. A novel application of harmonic transfer function (HTF) analysis to human motor behavior was used, and these HTFs were converted to the time-domain representation of phase-dependent impulse response functions (ϕIRFs). These ϕIRFs were used to identify the mapping from perturbation inputs to kinematic and electromyographic (EMG) outputs throughout the phases of the gait cycle. Mechanical perturbations caused an initial, passive change in trunk orientation and, at some phases of stimulus presentation, a corrective trunk EMG and orientation response. Visual perturbations elicited a trunk EMG response prior to a trunk orientation response, which was subsequently followed by an anterior-posterior displacement response. This finding supports the notion that there is a temporal hierarchy of functional subtasks during locomotion in which the control of upper-body posture precedes other subtasks. Moreover, the novel analysis we apply has the potential to probe a broad range of rhythmic behaviors to better understand their neural control. PMID:28123365
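To give a flavor of the system-identification approach, here is a minimal least-squares sketch that estimates a time-invariant impulse response from a continuous perturbation input to an output signal; the paper's HTF/ϕIRF analysis additionally makes the response depend on gait-cycle phase, which this simplified stand-in omits. All signals and parameters below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T, L = 100, 60, 50             # sample rate (Hz), duration (s), IRF taps
u = rng.standard_normal(fs * T)    # white-noise perturbation input

# Synthetic "true" impulse response and noisy output it produces.
h_true = np.exp(-np.arange(L) / 10.0) * np.sin(np.arange(L) / 5.0)
y = np.convolve(u, h_true)[: len(u)] + 0.1 * rng.standard_normal(len(u))

# Build the convolution regressor matrix (shifted copies of u) and solve y ≈ X h.
X = np.column_stack(
    [np.concatenate([np.zeros(k), u[: len(u) - k]]) for k in range(L)]
)
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("relative IRF fit error:",
      np.linalg.norm(h_hat - h_true) / np.linalg.norm(h_true))
```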
Functional and structural mapping of human cerebral cortex: Solutions are in the surfaces
Van Essen, David C.; Drury, Heather A.; Joshi, Sarang; Miller, Michael I.
1998-01-01
The human cerebral cortex is notorious for the depth and irregularity of its convolutions and for its variability from one individual to the next. These complexities of cortical geography have been a chronic impediment to studies of functional specialization in the cortex. In this report, we discuss ways to compensate for the convolutions by using a combination of strategies whose common denominator involves explicit reconstructions of the cortical surface. Surface-based visualization involves reconstructing cortical surfaces and displaying them, along with associated experimental data, in various complementary formats (including three-dimensional native configurations, two-dimensional slices, extensively smoothed surfaces, ellipsoidal representations, and cortical flat maps). Generating these representations for the cortex of the Visible Man leads to a surface-based atlas that has important advantages over conventional stereotaxic atlases as a substrate for displaying and analyzing large amounts of experimental data. We illustrate this by showing the relationship between functionally specialized regions and topographically organized areas in human visual cortex. Surface-based warping allows data to be mapped from individual hemispheres to a surface-based atlas while respecting surface topology, improving registration of identifiable landmarks, and minimizing unwanted distortions. Surface-based warping also can aid in comparisons between species, which we illustrate by warping a macaque flat map to match the shape of a human flat map. Collectively, these approaches will allow more refined analyses of commonalities as well as individual differences in the functional organization of primate cerebral cortex. PMID:9448242
Splitting Attention across the Two Visual Fields in Visual Short-Term Memory
ERIC Educational Resources Information Center
Delvenne, Jean-Francois; Holt, Jessica L.
2012-01-01
Humans have the ability to attentionally select the most relevant visual information from their extrapersonal world and to retain it in a temporary buffer, known as visual short-term memory (VSTM). Research suggests that at least two non-contiguous items can be selected simultaneously when they are distributed across the two visual hemifields. In…
Visual Attention and Applications in Multimedia Technologies
Le Callet, Patrick; Niebur, Ernst
2013-01-01
Making technological advances in the field of human-machine interaction requires that the capabilities and limitations of the human perceptual system be taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention and their validation using ad hoc ground truth are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize multimedia delivery, retargeting and quality assessment of image and video, medical imaging, and applications of stereoscopic 3D images. PMID:24489403
Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada
2013-01-01
Head movement imposes additional burdens on the visual system: maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibulo-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism has been propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability, and indirect measures have variously suggested no, focal, or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
Yee, Susan H; Bradley, Patricia; Fisher, William S; Perreault, Sally D; Quackenboss, James; Johnson, Eric D; Bousquin, Justin; Murphy, Patricia A
2012-12-01
The U.S. Environmental Protection Agency has recently realigned its research enterprise around the concept of sustainability. Scientists from across multiple disciplines have a role to play in contributing the information, methods, and tools needed to more fully understand the long-term impacts of decisions on the social and economic sustainability of communities. Success will depend on a shift in thinking to integrate, organize, and prioritize research within a systems context. We used the Driving forces-Pressures-State-Impact-Response (DPSIR) framework as a basis for integrating social, cultural, and economic aspects of environmental and human health into a single framework. To make the framework broadly applicable to sustainability research planning, we provide a hierarchical system of DPSIR keywords and guidelines for use as a communication tool. The applicability of the integrated framework was first tested on a public health issue (asthma disparities) for purposes of discussion. We then applied the framework at a science planning meeting to identify opportunities for sustainable and healthy communities research. We conclude that an integrated systems framework has many potential roles in science planning, including identifying key issues, visualizing interactions within the system, identifying research gaps, organizing information, developing computational models, and identifying indicators.
Multilevel depth and image fusion for human activity detection.
Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng
2013-10-01
Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections, as sketched below. At the next level, interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database containing complex human-human, human-object, and human-surroundings interactions demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to previous methods.
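A minimal sketch of the depth-based filtering step, assuming a synthetic depth map and hand-picked plausibility thresholds; the paper's actual filters are not specified in the abstract, so this is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def depth_filter(boxes, depth_map, min_m=0.5, max_m=6.0, max_spread_m=1.5):
    """Keep detection boxes whose median depth is plausible and roughly compact."""
    kept = []
    for (x0, y0, x1, y1) in boxes:
        patch = depth_map[y0:y1, x0:x1]
        med = np.median(patch)
        spread = np.percentile(patch, 90) - np.percentile(patch, 10)
        if min_m <= med <= max_m and spread <= max_spread_m:
            kept.append((x0, y0, x1, y1))
    return kept

depth = np.full((240, 320), 8.0)            # synthetic Kinect-style depth map (m)
depth[50:150, 40:120] = 2.0                 # a person-shaped region at 2 m
boxes = [(40, 50, 120, 150),                # true detection on the person
         (200, 10, 300, 230)]               # false detection on far background
print(depth_filter(boxes, depth))           # only the first box survives
```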
Visual Processing of Object Velocity and Acceleration
1994-02-04
[Abstract unavailable; only cited-reference fragments were recovered.] "A failure of motion deblurring in the human visual system," Investigative Ophthalmology and Visual Science (Suppl.), 34, 1230; Watamaniuk, S.N.J. and... McKee, S.P., "Why is a trajectory more detectable in noise than correlated signal dots?" Investigative Ophthalmology and Visual Science (Suppl.), 34, 1364.
The Visual System of Zebrafish and its Use to Model Human Ocular Diseases
Gestri, Gaia; Link, Brian A; Neuhauss, Stephan CF
2011-01-01
Free swimming zebrafish larvae depend mainly on their sense of vision to evade predation and to catch prey. Hence there is strong selective pressure on the fast maturation of visual function and indeed the visual system already supports a number of visually-driven behaviors in the newly hatched larvae. The ability to exploit the genetic and embryonic accessibility of the zebrafish in combination with a behavioral assessment of visual system function has made the zebrafish a popular model to study vision and its diseases. Here, we review the anatomy, physiology and development of the zebrafish eye as the basis to relate the contributions of the zebrafish to our understanding of human ocular diseases. PMID:21595048
Complete scanpaths analysis toolbox.
Augustyniak, Piotr; Mikrut, Zbigniew
2006-01-01
This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on the physiology of human perception, and the results are applicable to various visual information-based man-machine interfaces, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infrared reflection-based eyetracker in calibration and scanpath analysis modes. Toolbox procedures are organized in three layers: the lower layer communicates with the eyetracker output file; the middle layer detects scanpath events on a physiological basis (see the sketch below); and the upper layer consists of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
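The scanpath-event layer might resemble the classic dispersion-threshold (I-DT) fixation detector sketched below; this is a generic Python stand-in, not the toolbox's Matlab code, and the thresholds and gaze data are illustrative.

```python
import numpy as np

def idt_fixations(x, y, t, max_disp=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection on gaze samples.

    Returns (start, end) sample-index pairs for detected fixations."""
    def disp(a, b):  # bounding-box dispersion of samples a..b inclusive
        return (x[a:b+1].max() - x[a:b+1].min()) + (y[a:b+1].max() - y[a:b+1].min())
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        while j < n - 1 and t[j] - t[i] < min_dur:   # grow to minimum duration
            j += 1
        if t[j] - t[i] < min_dur:
            break                                    # not enough samples left
        if disp(i, j) <= max_disp:
            while j < n - 1 and disp(i, j + 1) <= max_disp:
                j += 1                               # extend while compact
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1                                   # slide the window forward
    return fixations

rng = np.random.default_rng(1)
t = np.arange(200) / 100.0                           # 100 Hz gaze samples
x = np.concatenate([10 + rng.normal(0, 0.1, 100),    # fixation near x = 10
                    20 + rng.normal(0, 0.1, 100)])   # then a fixation near x = 20
y = 5 + rng.normal(0, 0.1, 200)
print(idt_fixations(x, y, t))                        # expect roughly two fixations
```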
Human Factors Evaluation of Advanced Electric Power Grid Visualization Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Dauenhauer, Peter M.; Wierks, Tamara G.
This report describes an initial human factors evaluation of four visualization tools (Graphical Contingency Analysis, Force Directed Graphs, Phasor State Estimator, and Mode Meter/Mode Shapes) developed by PNNL, and proposes test plans that may be implemented to evaluate their utility in scenario-based experiments.
The biodigital human: a web-based 3D platform for medical visualization and education.
Qualter, John; Sculli, Frank; Oliker, Aaron; Napier, Zachary; Lee, Sabrina; Garcia, Julio; Frenkel, Sally; Harnik, Victoria; Triola, Marc
2012-01-01
NYU School of Medicine's Division of Educational Informatics, in collaboration with BioDigital Systems LLC (New York, NY), has created a virtual human body dataset that is being used for visualization, education, and training, and is accessible over modern web browsers.
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. Yet color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated, and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
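The parts-based learning step can be illustrated with plain NMF, as in the sketch below. Note that scikit-learn offers no manifold-regularized NMF, so the graph regularization described above is omitted; the similarity index shown is a generic stand-in for the paper's PMCFE, and the patch data are fabricated.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
patches = rng.random((500, 64))            # flattened 8x8 color-channel patches

# Learn a nonnegative, parts-based dictionary (manifold regularization omitted).
model = NMF(n_components=16, init="nndsvda", max_iter=500, random_state=0)
model.fit(patches)

# Encode the left and right views (toy single-patch stand-ins) with the detector.
left = model.transform(rng.random((1, 64)))
right = model.transform(rng.random((1, 64)))

# FSIM-style similarity of nonnegative feature vectors, bounded in (0, 1].
c = 1e-6
fs = (2 * left * right + c) / (left ** 2 + right ** 2 + c)
print("feature similarity:", float(fs.mean()))
```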
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction-of-attention cues, such as body, gaze, head, and eye orientation, in several circumstances. Eye contact particularly seems to provide information on human readiness to communicate; when there is such an ostensive cue, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g., gaze alternation and sustained gaze) in dogs. In the current study, in order to get unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of the owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to distinct statuses: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in the human environment and face similar situations on a daily basis during their lives.
Fahmy, Gamal; Black, John; Panchanathan, Sethuraman
2006-06-01
Today's multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, no research has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
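The three spatial-frequency parameters can be made concrete with a plain 2-D FFT, as sketched below; this is not the paper's wavelet-domain phase-coherence measure, just an illustration of magnitude, phase, and orientation on a synthetic grating.

```python
import numpy as np

N = 64
yy, xx = np.mgrid[0:N, 0:N]
image = np.sin(2 * np.pi * (3 * xx + 5 * yy) / N)   # one oriented grating

F = np.fft.fftshift(np.fft.fft2(image))
fy, fx = np.mgrid[-(N // 2):N // 2, -(N // 2):N // 2]

magnitude = np.abs(F)                # 1) how much energy each frequency carries
phase = np.angle(F)                  # 2) where the structure sits spatially
orientation = np.arctan2(fy, fx)     # 3) direction of each frequency component

peak = np.unravel_index(np.argmax(magnitude), magnitude.shape)
print("dominant orientation (rad):", float(orientation[peak]),
      "phase (rad):", float(phase[peak]))
```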
The four-meter confrontation visual field test.
Kodsi, S R; Younge, B R
1992-01-01
The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test. We recommend use of this confrontation visual field test, in addition to the standard 0.5-m confrontation visual field test, on appropriately selected patients to obtain the most information possible by confrontation visual field tests. PMID:1494829
Toward statistical modeling of saccadic eye-movement and visual saliency.
Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming
2014-11-01
In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Besides simulating human saccadic behavior, we also demonstrated the superior effectiveness and robustness of our approach over the state of the art by carrying out extensive experiments on synthetic patterns and human eye fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
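A toy version of the SGC idea is sketched below, assuming whitened Laplace-distributed "patches" and single-component projection pursuit by gradient ascent on kurtosis; the paper's sequential extraction and image-specific details are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.laplace(size=(2000, 25))        # sparse toy "patches": super-Gaussian
X -= X.mean(axis=0)

# Whiten: project onto principal axes and equalize variances.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = (X @ Vt.T) / s * np.sqrt(len(X))

# Projection pursuit: gradient ascent on the kurtosis of the projection Z @ w.
w = rng.standard_normal(25)
w /= np.linalg.norm(w)
for _ in range(200):
    y = Z @ w
    grad = 4 * (Z * (y ** 3)[:, None]).mean(axis=0) - 12 * (y ** 2).mean() * w
    w += 0.01 * grad
    w /= np.linalg.norm(w)

y = Z @ w
saliency = np.abs(y)                    # SGC filter response per patch
print("excess kurtosis of component:", (y ** 4).mean() / (y ** 2).mean() ** 2 - 3)
print("most salient patch index:", int(np.argmax(saliency)))
```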
Bioelectronic nose and its application to smell visualization.
Ko, Hwi Jin; Park, Tai Hyun
2016-01-01
There have been many attempts to visualize smell using various techniques, in order to express smell objectively, because information obtained from the human sense of smell is very subjective. So far, well-trained experts such as perfumers, complex and large-scale equipment such as GC-MS, and electronic noses have played major roles in objectively detecting and recognizing odors. Recently, an optoelectronic nose was developed to achieve this purpose, but some limitations regarding sensitivity and the number of smells that can be visualized still persist. Since the elucidation of the olfactory mechanism, numerous studies have aimed at developing a sensing device that mimics the human olfactory system. Engineered olfactory cells were constructed to mimic the human olfactory system, and the use of engineered olfactory cells for smell visualization has been attempted with various methods such as calcium imaging, CRE reporter assays, BRET, and membrane potential assays; however, it is not easy to consistently control the condition of the cells, and it is impossible to detect low odorant concentrations. Recently, the bioelectronic nose was developed, and it has improved considerably along with advances in nanobiotechnology. The bioelectronic nose consists of two parts: a primary transducer and a secondary transducer. Biological materials as the primary transducer improve the selectivity of the sensor, and nanomaterials as the secondary transducer increase the sensitivity. In particular, bioelectronic noses using various nanomaterials combined with human olfactory receptors, or with nanovesicles derived from engineered olfactory cells, have the potential to detect almost all of the smells recognized by humans, because an engineered olfactory cell can in principle express any human olfactory receptor and thus mimic the human olfactory system. Therefore, the bioelectronic nose will be a potent tool for smell visualization, provided two technologies are completed. First, a multi-channel array-sensing system has to be applied to integrate all of the olfactory receptors into a single chip, mimicking the performance of the human nose. Second, a processing technique for the multi-channel system signals should be established to convert the signals into visual images (a toy sketch of such a conversion follows). With the use of this latest sensing technology, the realization of a proper smell-visualization technology is expected in the near future.
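As a toy sketch of that signal-to-image conversion, the code below projects a hypothetical 16-channel receptor response onto three principal axes rendered as an RGB swatch; the channel count, data, and PCA-to-RGB mapping are all assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
library = rng.random((50, 16))        # toy responses of 16 receptors to 50 odors

# PCA via SVD of the centered response library.
mu = library.mean(axis=0)
_, _, Vt = np.linalg.svd(library - mu, full_matrices=False)

def smell_to_rgb(response: np.ndarray) -> tuple:
    """Map one 16-channel response onto the first three principal axes as RGB."""
    p = (response - mu) @ Vt[:3].T
    p = (p - p.min()) / (np.ptp(p) + 1e-9)   # squash the three scores to [0, 1]
    return tuple(np.round(p, 2))

print(smell_to_rgb(rng.random(16)))          # one odor rendered as a color swatch
```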
Object form discontinuity facilitates displacement discrimination across saccades.
Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl
2010-06-01
Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.
Crowding by Invisible Flankers
Ho, Cristy; Cheung, Sing-Hang
2011-01-01
Background: Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding have shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and the flankers is necessary for crowding to occur. Methodology/Principal Findings: Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. Contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and flankers are invisible. Conclusions: These findings complement the suggested correlation between crowding and visual awareness. What is more, our results demonstrate that conscious awareness and attention are not prerequisites for crowding. PMID:22194919
The effect of early visual deprivation on the neural bases of multisensory processing.
Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte
2015-06-01
Developmental vision is deemed to be necessary for the maturation of multisensory cortical circuits. Thus far, this has only been investigated in animal studies, which have shown that congenital visual deprivation markedly reduces the capability of neurons to integrate cross-modal inputs. The present study investigated the effect of transient congenital visual deprivation on the neural mechanisms of multisensory processing in humans. We used functional magnetic resonance imaging to compare responses of visual and auditory cortical areas to visual, auditory and audio-visual stimulation in cataract-reversal patients and normally sighted controls. The results showed that cataract-reversal patients, unlike normally sighted controls, did not exhibit multisensory integration in auditory areas. Furthermore, cataract-reversal patients, but not normally sighted controls, exhibited lower visual cortical processing within visual cortex during audio-visual stimulation than during visual stimulation. These results indicate that congenital visual deprivation affects the capability of cortical areas to integrate cross-modal inputs in humans, possibly because visual processing is suppressed during cross-modal stimulation. Arguably, the lack of vision in the first months after birth may result in a reorganization of visual cortex, including the suppression of noisy visual input from the deprived retina in order to reduce interference during auditory processing. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Some features are based on simple components (i.e., local features, such as the orientation of line segments), whereas others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model succeeds in providing a unified account of a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which differed from the distractors (white circles) in color (e.g., a red circle target), local features (e.g., a white square target), a global feature (e.g., a white ring with a hole as the target), or their combinations (e.g., a red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was subject to interference from the local features of the hole (e.g., a white ring with a squared hole). These results suggest that the monkey ON visual system has a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide behavioral constraints for identifying the underlying neural substrates.