NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Learning abstract visual concepts via probabilistic program induction in a Language of Thought.
Overlan, Matthew C; Jacobs, Robert A; Piantadosi, Steven T
2017-11-01
The ability to learn abstract concepts is a powerful component of human cognition. It has been argued that variable binding is the key element enabling this ability, but the computational aspects of variable binding remain poorly understood. Here, we address this shortcoming by formalizing the Hierarchical Language of Thought (HLOT) model of rule learning. Given a set of data items, the model uses Bayesian inference to infer a probability distribution over stochastic programs that implement variable binding. Because the model makes use of symbolic variables as well as Bayesian inference and programs with stochastic primitives, it combines many of the advantages of both symbolic and statistical approaches to cognitive modeling. To evaluate the model, we conducted an experiment in which human subjects viewed training items and then judged which test items belong to the same concept as the training items. We found that the HLOT model provides a close match to human generalization patterns, significantly outperforming two variants of the Generalized Context Model, one variant based on string similarity and the other based on visual similarity using features from a deep convolutional neural network. Additional results suggest that variable binding happens automatically, implying that binding operations do not add complexity to people's hypothesized rules. Overall, this work demonstrates that a cognitive model combining symbolic variables with Bayesian inference and stochastic program primitives provides a new perspective for understanding people's patterns of generalization. Copyright © 2017 Elsevier B.V. All rights reserved.
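To make the idea of Bayesian program induction concrete, here is a minimal, self-contained sketch (not the HLOT grammar or its stochastic primitives): a handful of invented string-generating rules stand in for programs, a simplicity prior favors shorter rules, a size-principle likelihood scores consistency with training items, and generalization to a test item averages over the posterior.

```python
import math
from itertools import product

# Hypothetical rule space: each "program" generates a finite set of strings.
# These three rules are invented stand-ins, not the HLOT grammar.
ALPHABET = "ab"

def rule_repeat_a(max_len=4):
    return {"a" * n for n in range(1, max_len + 1)}

def rule_a_then_b(max_len=4):
    return {"a" * n + "b" for n in range(1, max_len)}

def rule_any_string(max_len=3):
    return {"".join(chars) for length in range(1, max_len + 1)
            for chars in product(ALPHABET, repeat=length)}

HYPOTHESES = {  # name -> (set of strings the rule generates, description-length proxy)
    "repeat_a": (rule_repeat_a(), 1),
    "a_then_b": (rule_a_then_b(), 2),
    "any_string": (rule_any_string(), 3),
}

def posterior(training_items):
    """Posterior over rules: simplicity prior times size-principle likelihood."""
    scores = {}
    for name, (extension, length) in HYPOTHESES.items():
        prior = math.exp(-length)                      # prefer shorter programs
        consistent = all(item in extension for item in training_items)
        likelihood = (1.0 / len(extension)) ** len(training_items) if consistent else 0.0
        scores[name] = prior * likelihood
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def p_same_concept(test_item, post):
    """Probability that a test item belongs to the learned concept."""
    return sum(p for name, p in post.items() if test_item in HYPOTHESES[name][0])

post = posterior(["a", "aa", "aaa"])
print(post)
print(p_same_concept("aaaa", post), p_same_concept("ab", post))
```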
Action starring narratives and events: Structure and inference in visual narrative comprehension
Cohn, Neil; Wittenberg, Eva
2015-01-01
Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence. PMID:26709362
Modeling the Perception of Audiovisual Distance: Bayesian Causal Inference and Other Models
2016-01-01
Studies of audiovisual perception of distance are rare. Here, visual and auditory cue interactions in distance are tested against several multisensory models, including a modified causal inference model. This causal inference model includes predictions of the estimate distributions. In our study, the audiovisual perception of distance was overall better explained by Bayesian causal inference than by other traditional models, such as sensory dominance, mandatory integration, and no interaction. Causal inference resolved with probability matching yielded the best fit to the data. Finally, we propose that sensory weights can also be estimated from causal inference. The analysis of the sensory weights allows us to obtain windows within which there is an interaction between the audiovisual stimuli. We find that the visual stimulus always contributes by more than 80% to the perception of visual distance. The visual stimulus also contributes by more than 50% to the perception of auditory distance, but only within a mobile window of interaction, which ranges from 1 to 4 m. PMID:27959919
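For readers unfamiliar with Bayesian causal inference over two cues, the sketch below implements the standard Gaussian formulation (common-cause versus independent-cause likelihoods, a posterior over the causal structure, and a probability-matching decision). The noise parameters, prior, and common-cause probability are placeholders, not the values fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder parameters (not the fitted values from the study).
sig_v, sig_a = 0.3, 1.0     # visual / auditory distance noise (m)
mu_p, sig_p = 2.0, 2.0      # prior over distance (m)
p_common = 0.5              # prior probability of a common cause

def likelihood_common(xv, xa):
    var = sig_v**2 * sig_a**2 + sig_v**2 * sig_p**2 + sig_a**2 * sig_p**2
    num = ((xv - xa)**2 * sig_p**2 + (xv - mu_p)**2 * sig_a**2
           + (xa - mu_p)**2 * sig_v**2)
    return np.exp(-0.5 * num / var) / (2 * np.pi * np.sqrt(var))

def likelihood_separate(xv, xa):
    vv, va = sig_v**2 + sig_p**2, sig_a**2 + sig_p**2
    return (np.exp(-0.5 * ((xv - mu_p)**2 / vv + (xa - mu_p)**2 / va))
            / (2 * np.pi * np.sqrt(vv * va)))

def posterior_common(xv, xa):
    l1, l2 = likelihood_common(xv, xa), likelihood_separate(xv, xa)
    return p_common * l1 / (p_common * l1 + (1 - p_common) * l2)

def estimate_auditory_distance(xv, xa):
    """Causal inference resolved with probability matching."""
    p_c1 = posterior_common(xv, xa)
    fused = ((xv / sig_v**2 + xa / sig_a**2 + mu_p / sig_p**2)
             / (1 / sig_v**2 + 1 / sig_a**2 + 1 / sig_p**2))
    audio_only = ((xa / sig_a**2 + mu_p / sig_p**2)
                  / (1 / sig_a**2 + 1 / sig_p**2))
    return fused if rng.random() < p_c1 else audio_only

print(posterior_common(2.0, 2.5), estimate_auditory_distance(2.0, 2.5))  # nearby cues: mostly fused
print(posterior_common(1.0, 4.0), estimate_auditory_distance(1.0, 4.0))  # discrepant cues: often segregated
```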
Vlaic, Sebastian; Hoffmann, Bianca; Kupfer, Peter; Weber, Michael; Dräger, Andreas
2013-09-01
GRN2SBML automatically encodes gene regulatory networks derived from several inference tools in the Systems Biology Markup Language (SBML). Providing a graphical user interface, the networks can be annotated via the simple object access protocol (SOAP)-based application programming interface of BioMart Central Portal and the Minimal Information Required In the Annotation of Models (MIRIAM) Registry. Additionally, we provide an R-package, which processes the output of supported inference algorithms and automatically passes all required parameters to GRN2SBML. Therefore, GRN2SBML closes a gap in the processing pipeline between the inference of gene regulatory networks and their subsequent analysis, visualization and storage. GRN2SBML is freely available under the GNU Public License version 3 and can be downloaded from http://www.hki-jena.de/index.php/0/2/490. General information on GRN2SBML, examples and tutorials are available at the tool's web page.
Gagnier, Kristin Michod; Shipley, Thomas F
2016-01-01
Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.
Inferential reasoning by exclusion in great apes, lesser apes, and spider monkeys.
Hill, Andrew; Collier-Baker, Emma; Suddendorf, Thomas
2011-02-01
Using the cups task, in which subjects are presented with limited visual or auditory information that can be used to deduce the location of a hidden reward, Call (2004) found prima facie evidence of inferential reasoning by exclusion in several great ape species. One bonobo (Pan paniscus) and two gorillas (Gorilla gorilla) appeared to make such inferences in both the visual and auditory domains. However, common chimpanzees (Pan troglodytes) were successful only in the visual domain, and Bornean orangutans (Pongo pygmaeus) in neither. The present research built on this paradigm, and Experiment 1 yielded prima facie evidence of inference by exclusion in both domains for two common chimpanzees, and in the visual domain for two Sumatran orangutans (Pongo abelii). Experiments 2 and 3 demonstrated that two specific associative learning explanations could not readily account for these results. Because an important focus of the program of research was to assess the cognitive capacities of lesser apes (family Hylobatidae), we modified Call's original procedures to better suit their attentional and dispositional characteristics. In Experiment 1, testing was also attempted with three gibbon genera (Symphalangus, Nomascus, Hylobates), but none of the subjects completed the standard task. Further testing of three siamangs (Symphalangus syndactylus) and a spider monkey (Ateles geoffroyi) using a faster method yielded prima facie evidence of inferential reasoning by exclusion in the visual domain among the siamangs (Experiment 4).
Hurley, Daniel; Araki, Hiromitsu; Tamada, Yoshinori; Dunmore, Ben; Sanders, Deborah; Humphreys, Sally; Affara, Muna; Imoto, Seiya; Yasuda, Kaori; Tomiyasu, Yuki; Tashiro, Kosuke; Savoie, Christopher; Cho, Vicky; Smith, Stephen; Kuhara, Satoru; Miyano, Satoru; Charnock-Jones, D. Stephen; Crampin, Edmund J.; Print, Cristin G.
2012-01-01
Gene regulatory networks inferred from RNA abundance data have generated significant interest, but despite this, gene network approaches are used infrequently and often require input from bioinformaticians. We have assembled a suite of tools for analysing regulatory networks, and we illustrate their use with microarray datasets generated in human endothelial cells. We infer a range of regulatory networks, and based on this analysis discuss the strengths and limitations of network inference from RNA abundance data. We welcome contact from researchers interested in using our inference and visualization tools to answer biological questions. PMID:22121215
Magnotti, John F; Beauchamp, Michael S
2017-02-01
Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
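The published CIMS model operates in a continuous representational space; as a rough, discrete illustration of the same causal-inference logic, the sketch below uses invented syllable confusion matrices to show why an auditory "ba" paired with a visual "ga" is pulled toward a fused "da" percept while the reversed pairing is not. The matrices and the common-cause prior are placeholders, not fitted values.

```python
import numpy as np

SYLLABLES = ["ba", "da", "ga"]

# Illustrative confusion likelihoods, P[e, s] = P(evidence e | spoken syllable s).
# These numbers are invented for the sketch, not taken from the paper.
P_AUD = np.array([[0.75, 0.20, 0.05],
                  [0.20, 0.70, 0.25],
                  [0.05, 0.10, 0.70]])
P_VIS = np.array([[0.90, 0.05, 0.05],
                  [0.05, 0.50, 0.45],
                  [0.05, 0.45, 0.50]])
P_COMMON = 0.9  # prior probability that voice and mouth come from one talker

def perceive(aud_evidence, vis_evidence):
    a, v = SYLLABLES.index(aud_evidence), SYLLABLES.index(vis_evidence)
    prior = np.ones(len(SYLLABLES)) / len(SYLLABLES)

    joint_c1 = prior * P_AUD[a] * P_VIS[v]          # C = 1: a single talker
    like_c1 = joint_c1.sum()
    like_c2 = (prior * P_AUD[a]).sum() * (prior * P_VIS[v]).sum()  # C = 2: two sources

    p_c1 = P_COMMON * like_c1 / (P_COMMON * like_c1 + (1 - P_COMMON) * like_c2)

    fused = joint_c1 / like_c1                       # integrate the two cues
    aud_only = prior * P_AUD[a] / (prior * P_AUD[a]).sum()  # report the voice alone
    percept = p_c1 * fused + (1 - p_c1) * aud_only   # average over causal structures
    return dict(zip(SYLLABLES, np.round(percept, 3))), round(p_c1, 3)

print(perceive("ba", "ga"))   # McGurk pair AbaVga: percept leans toward "da"
print(perceive("ga", "ba"))   # reversed pair AgaVba: little pull toward "da"
```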
Inferential functioning in visually impaired children.
Puche-Navarro, Rebeca; Millán, Rafael
2007-01-01
The current study explores the inferential abilities of visually impaired children in a task presented in two formats, manipulative and verbal. The results showed that in the group of visually impaired children, just as with children with normal sight, there was a wide range of inference types. It was found that the visually impaired children performed slightly better in the use of inductive and relational inferences in the verbal format, while in the manipulative format children with normal sight performed better. These results suggest that in the inferential functioning of young children, and especially visually impaired children, the format of the task influences performance more than the child's visual ability.
Garcia-Retamero, Rocio; Hoffrage, Ulrich
2013-04-01
Doctors and patients have difficulty inferring the predictive value of a medical test from information about the prevalence of a disease and the sensitivity and false-positive rate of the test. Previous research has established that communicating such information in a format the human mind is adapted to-namely natural frequencies-as compared to probabilities, boosts accuracy of diagnostic inferences. In a study, we investigated to what extent these inferences can be improved-beyond the effect of natural frequencies-by providing visual aids. Participants were 81 doctors and 81 patients who made diagnostic inferences about three medical tests on the basis of information about prevalence of a disease, and the sensitivity and false-positive rate of the tests. Half of the participants received the information in natural frequencies, while the other half received the information in probabilities. Half of the participants only received numerical information, while the other half additionally received a visual aid representing the numerical information. In addition, participants completed a numeracy scale. Our study showed three important findings: (1) doctors and patients made more accurate inferences when information was communicated in natural frequencies as compared to probabilities; (2) visual aids boosted accuracy even when the information was provided in natural frequencies; and (3) doctors were more accurate in their diagnostic inferences than patients, though differences in accuracy disappeared when differences in numerical skills were controlled for. Our findings have important implications for medical practice as they suggest suitable ways to communicate quantitative medical data. Copyright © 2013 Elsevier Ltd. All rights reserved.
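A worked example helps show why the natural-frequency format is easier: the positive predictive value computed via Bayes' rule from probabilities equals the simple count-based ratio obtained from natural frequencies. The numbers below are a common teaching example, not the values used with the study participants.

```python
# Illustrative numbers only (a common mammography teaching example, not the
# values shown to the study participants): prevalence 1%, sensitivity 80%,
# false-positive rate 9.6%.
prevalence, sensitivity, false_pos_rate = 0.01, 0.80, 0.096

# Probability format: Bayes' rule for the positive predictive value (PPV).
ppv_prob = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_pos_rate * (1 - prevalence))

# Natural-frequency format: imagine 1000 patients and just count.
n = 1000
sick = round(n * prevalence)                        # 10 people have the disease
sick_pos = round(sick * sensitivity)                # 8 of them test positive
healthy_pos = round((n - sick) * false_pos_rate)    # 95 healthy people also test positive
ppv_freq = sick_pos / (sick_pos + healthy_pos)      # 8 / 103

print(round(ppv_prob, 3), round(ppv_freq, 3))       # both roughly 0.08
```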
Visual shape perception as Bayesian inference of 3D object-centered shape representations.
Erdogan, Goker; Jacobs, Robert A
2017-11-01
Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Virtue, Sandra; Schutzenhofer, Michael; Tomkins, Blaine
2017-07-01
Although a left hemisphere advantage is usually evident during language processing, the right hemisphere is highly involved during the processing of weakly constrained inferences. However, currently little is known about how the emotional valence of environmental stimuli influences the hemispheric processing of these inferences. In the current study, participants read texts promoting either strongly or weakly constrained predictive inferences and performed a lexical decision task to inference-related targets presented to the left visual field-right hemisphere or the right visual field-left hemisphere. While reading these texts, participants either listened to dissonant music (i.e., the music condition) or did not listen to music (i.e., the no music condition). In the no music condition, the left hemisphere showed an advantage for strongly constrained inferences compared to weakly constrained inferences, whereas the right hemisphere showed high facilitation for both strongly and weakly constrained inferences. In the music condition, both hemispheres showed greater facilitation for strongly constrained inferences than for weakly constrained inferences. These results suggest that negatively valenced stimuli (such as dissonant music) selectively influence the right hemisphere's processing of weakly constrained inferences during reading.
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
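As a rough illustration of the two quantities manipulated in these experiments, the sketch below computes Moran's I on a regular grid (a stand-in for the paper's irregular geographic units) and generates a crudely autocorrelated null surface by smoothing white noise; the paper's line-up nulls control spatial structure more carefully.

```python
import numpy as np

rng = np.random.default_rng(1)

def rook_pairs(rows, cols):
    """Index pairs of rook-contiguous cells on a regular grid."""
    def index(r, c):
        return r * cols + c
    pairs = []
    for r in range(rows):
        for c in range(cols):
            if r + 1 < rows:
                pairs.append((index(r, c), index(r + 1, c)))
            if c + 1 < cols:
                pairs.append((index(r, c), index(r, c + 1)))
    return pairs

def morans_i(values, pairs):
    """Moran's I with symmetric binary weights built from the pair list."""
    x = values - values.mean()
    numerator = 2 * sum(x[i] * x[j] for i, j in pairs)  # count each pair both ways
    weight_sum = 2 * len(pairs)
    return (len(x) / weight_sum) * numerator / (x @ x)

def smoothed_null(rows, cols, passes=2):
    """A crude spatially autocorrelated null: repeatedly average rook neighbours."""
    field = rng.normal(size=(rows, cols))
    for _ in range(passes):
        field = (field
                 + np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0)
                 + np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 5.0
    return field.ravel()

rows, cols = 20, 20
pairs = rook_pairs(rows, cols)
print("complete spatial randomness:", round(morans_i(rng.normal(size=rows * cols), pairs), 3))
print("autocorrelated null:        ", round(morans_i(smoothed_null(rows, cols), pairs), 3))
```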
A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreland, Kenneth; Geveci, Berk
2014-11-01
The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive number of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today's distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive number of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
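The project's actual implementation is in C++; purely as a schematic of the worklet idea (a stateless per-element operation invoked directly, rather than a filter waiting on a pipeline request), consider the following Python stand-in.

```python
from concurrent.futures import ThreadPoolExecutor
import math

# A "worklet" in the sense described above: a stateless operation applied to one
# element of a data field. Python threads are only a stand-in for the massive
# number of lightweight threads discussed in the abstract.
def magnitude_worklet(vector):
    return math.sqrt(sum(component * component for component in vector))

def dispatch(worklet, field, workers=8):
    """Invoke the worklet independently on every element of a data field."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(worklet, field))

velocity_field = [(1.0, 2.0, 2.0), (0.0, 3.0, 4.0), (6.0, 8.0, 0.0)]
print(dispatch(magnitude_worklet, velocity_field))  # [3.0, 5.0, 10.0]
```

A pipeline filter, by contrast, would hold references to input and output data objects and run only when a downstream request propagates up the pipeline.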
Inferring the direction of implied motion depends on visual awareness
Faivre, Nathan; Koch, Christof
2014-01-01
Visual awareness of an event, object, or scene is, by essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951
ERIC Educational Resources Information Center
Hegarty, Mary; Canham, Matt S.; Fabrikant, Sara I.
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of…
ERIC Educational Resources Information Center
Edmonds, Caroline J.; Pring, Linda
2006-01-01
The two experiments reported here investigated the ability of sighted children and children with visual impairment to comprehend text and, in particular, to draw inferences both while reading and while listening. Children were assigned into "comprehension skill" groups, depending on the degree to which their reading comprehension skill was in line…
The visual system’s internal model of the world
Lee, Tai Sing
2015-01-01
The Bayesian paradigm has provided a useful conceptual theory for understanding perceptual computation in the brain. While the detailed neural mechanisms of Bayesian inference are not fully understood, recent computational and neurophysiological work has illuminated the underlying computational principles and representational architecture. The fundamental insights are that the visual system is organized as a modular hierarchy to encode an internal model of the world, and that perception is realized by statistical inference based on this internal model. In this paper, I will discuss and analyze the varieties of representational schemes of these internal models and how they might be used to perform learning and inference. I will argue for a unified theoretical framework for relating the internal models to the observed neural phenomena and mechanisms in the visual cortex. PMID:26566294
Visual recognition and inference using dynamic overcomplete sparse learning.
Murray, Joseph F; Kreutz-Delgado, Kenneth
2007-09-01
We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado, et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
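The paper's simplified world model and dictionary-learning stage are not reproduced here; the sketch below shows only the generic ingredient they build on, inferring a sparse code for an input under an overcomplete dictionary, using plain ISTA with invented sizes and penalties.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """Infer a sparse code s minimizing ||x - D s||^2 + lam * ||s||_1 via ISTA.

    D: (n_pixels, n_atoms) overcomplete dictionary with n_atoms > n_pixels.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2             # 1 / Lipschitz constant
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - x)
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return s

n_pixels, n_atoms = 16, 64                             # 4x overcomplete
D = rng.normal(size=(n_pixels, n_atoms))
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
true_code = np.zeros(n_atoms)
true_code[[3, 17, 42]] = [1.5, -2.0, 1.0]              # a 3-sparse "image"
x = D @ true_code + 0.01 * rng.normal(size=n_pixels)

s_hat = ista_sparse_code(D, x, lam=0.05)
print("nonzero coefficients:", np.flatnonzero(np.abs(s_hat) > 0.1))
```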
Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Dean N.; Silva, Claudio
2013-09-30
For the past three years, a large analysis and visualization effort—funded by the Department of Energy’s Office of Biological and Environmental Research (BER), the National Aeronautics and Space Administration (NASA), and the National Oceanic and Atmospheric Administration (NOAA)—has brought together a wide variety of industry-standard scientific computing libraries and applications to create Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) to serve the global climate simulation and observational research communities. To support interactive analysis and visualization, all components connect through a provenance application programming interface to capture meaningful history and workflow. Components can be loosely coupled into the framework for fast integration or tightly coupled for greater system functionality and communication with other components. The overarching goal of UV-CDAT is to provide a new paradigm for access to and analysis of massive, distributed scientific data collections by leveraging distributed data architectures located throughout the world. The UV-CDAT framework addresses challenges in analysis and visualization and incorporates new opportunities, including parallelism for better efficiency, higher speed, and more accurate scientific inferences. Today, it provides more than 600 users access to more analysis and visualization products than any other single source.
Muscatello, Christopher M.; Domier, Calvin W.; Hu, Xing; ...
2014-08-13
Here, quasi-optical imaging at sub-THz frequencies has had a major impact on fusion plasma diagnostics. Mm-wave imaging reflectometry utilizes microwaves to actively probe fusion plasmas, inferring the local properties of electron density fluctuations. Electron cyclotron emission imaging is a multichannel radiometer that passively measures the spontaneous emission of microwaves from the plasma to infer local properties of electron temperature fluctuations. These imaging diagnostics work together to diagnose the characteristics of turbulence. Important quantities such as amplitude and wavenumber of coherent fluctuations, correlation lengths and decorrelation times of turbulence, and poloidal flow velocity of the plasma are readily inferred.
Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul
2009-01-01
Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km² area surrounding Seattle.
Causal Inference for Spatial Constancy across Saccades
Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E.; Medendorp, W. Pieter
2016-01-01
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements and shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using a SSD task, we test how participants localize the presaccadic position of the fixation target, the saccade target or a peripheral non-foveated target that was displaced parallel or orthogonal during a horizontal saccade, and subsequently viewed for three different durations. Results showed different localization errors of the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
2016-06-01
…theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Subject terms: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.
ERIC Educational Resources Information Center
Ford, Janet A.; Milosky, Linda M.
2003-01-01
Kindergarten children with language impairment (LI) and age-matched controls were asked to label facial expressions depicting various emotions and then to infer emotional reactions from stories presented either verbally, visually, or combined. Results suggest that inference errors made by children with LI during early stages of social processing…
CAD system for automatic analysis of CT perfusion maps
NASA Astrophysics Data System (ADS)
Hachaj, T.; Ogiela, M. R.
2011-03-01
In this article, the authors present novel algorithms developed for the computer-assisted diagnosis (CAD) system for analysis of dynamic brain perfusion computed tomography (CT) maps: cerebral blood flow (CBF) and cerebral blood volume (CBV). Those methods perform both quantitative analysis [detection, measurement, and description with a brain anatomy atlas (AA) of potential asymmetries/lesions] and qualitative analysis (semantic interpretation of visualized symptoms). The semantic interpretation (decision about the type of lesion: ischemic/hemorrhagic, and whether the brain tissue is at risk of infarction or not) of visualized symptoms is done by so-called cognitive inference processes allowing for reasoning on the character of pathological regions based on specialist image knowledge. The whole system is implemented on the .NET platform (C# programming language) and can be used on any standard PC with the .NET Framework installed.
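The authors' asymmetry-detection and semantic-interpretation algorithms are not spelled out in the abstract; the sketch below shows only a generic left-right perfusion asymmetry check of the kind such quantitative analysis typically starts from, with an arbitrary threshold and no registration or atlas lookup.

```python
import numpy as np

def perfusion_asymmetry(cbf, rel_threshold=0.3):
    """Flag voxels whose CBF is much lower than the mirrored hemisphere.

    cbf: 2D perfusion map (rows x cols), midline assumed at the central column.
    Returns a boolean mask of candidate low-flow voxels.
    """
    mirrored = cbf[:, ::-1]                       # left-right flip across midline
    mean_pair = (cbf + mirrored) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        rel_diff = np.where(mean_pair > 0, (mirrored - cbf) / mean_pair, 0.0)
    return rel_diff > rel_threshold               # this side lower than its mirror

# Toy map: uniform flow of 50 ml/100 g/min with a low-flow patch on one side.
cbf = np.full((8, 8), 50.0)
cbf[2:5, 1:3] = 20.0
print(np.argwhere(perfusion_asymmetry(cbf)))
```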
Cohn, Neil; Kutas, Marta
2015-01-01
Inference has long been emphasized in the comprehension of verbal and visual narratives. Here, we measured event-related brain potentials to visual sequences designed to elicit inferential processing. In Impoverished sequences, an expressionless “onlooker” watches an undepicted event (e.g., person throws a ball for a dog, then watches the dog chase it) just prior to a surprising finale (e.g., someone else returns the ball), which should lead to an inference (i.e., the different person retrieved the ball). Implied sequences alter this narrative structure by adding visual cues to the critical panel such as a surprised facial expression to the onlooker implying they saw an unexpected, albeit undepicted, event. In contrast, Expected sequences show a predictable, but then confounded, event (i.e., dog retrieves ball, then different person returns it), and Explicit sequences depict the unexpected event (i.e., different person retrieves then returns ball). At the critical penultimate panel, sequences representing depicted events (Explicit, Expected) elicited a larger posterior positivity (P600) than the relatively passive events of an onlooker (Impoverished, Implied), though Implied sequences were slightly more positive than Impoverished sequences. At the subsequent and final panel, a posterior positivity (P600) was greater to images in Impoverished sequences than those in Explicit and Implied sequences, which did not differ. In addition, both sequence types requiring inference (Implied, Impoverished) elicited a larger frontal negativity than those explicitly depicting events (Expected, Explicit). These results show that neural processing differs for visual narratives omitting events versus those depicting events, and that the presence of subtle visual cues can modulate such effects presumably by altering narrative structure. PMID:26320706
HaploForge: a comprehensive pedigree drawing and haplotype visualization web application.
Tekman, Mehmet; Medlar, Alan; Mozere, Monika; Kleta, Robert; Stanescu, Horia
2017-12-15
Haplotype reconstruction is an important tool for understanding the aetiology of human disease. Haplotyping infers the most likely phase of observed genotypes conditional on constraints imposed by the genotypes of other pedigree members. The results of haplotype reconstruction, when visualized appropriately, show which alleles are identical by descent despite the presence of untyped individuals. When used in concert with linkage analysis, haplotyping can help delineate a locus of interest and provide a succinct explanation for the transmission of the trait locus. Unfortunately, the design choices made by existing haplotype visualization programs do not scale to large numbers of markers. Indeed, following haplotypes from generation to generation requires excessive scrolling back and forth. In addition, the most widely used program for haplotype visualization produces inconsistent recombination artefacts for the X chromosome. To resolve these issues, we developed HaploForge, a novel web application for haplotype visualization and pedigree drawing. HaploForge takes advantage of HTML5 to be fast, portable and avoid the need for local installation. It can accurately visualize autosomal and X-linked haplotypes from both outbred and consanguineous pedigrees. Haplotypes are coloured based on identity by descent using a novel A* search algorithm and we provide a flexible viewing mode to aid visual inspection. HaploForge can currently process haplotype reconstruction output from Allegro, GeneHunter, Merlin and Simwalk. HaploForge is licensed under GPLv3 and is hosted and maintained via GitHub. https://github.com/mtekman/haploforge. r.kleta@ucl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.
Coco, Moreno I; Keller, Frank; Malcolm, George L
2016-11-01
The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but pose a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.
Ancestral gene reconstruction and synthesis of ancient rhodopsins in the laboratory.
Chang, Belinda S W
2003-08-01
Laboratory synthesis of ancestral proteins offers an intriguing opportunity to study the past directly. The development of Bayesian methods to infer ancestral sequences, combined with advances in models of molecular evolution, and synthetic gene technology make this an increasingly promising approach in evolutionary studies of molecular function. Visual pigments form the first step in the biochemical cascade of events in the retina in all animals known to possess visual capabilities. In vertebrates, the necessity of spanning a dynamic range of light intensities of many orders of magnitude has given rise to two different types of photoreceptors, rods specialized for dim-light conditions, and cones for daylight and color vision. These photoreceptors contain different types of visual pigment genes. Reviewed here are methods of inferring ancestral sequences, chemical synthesis of artificial ancestral genes in the laboratory, and applications to the evolution of vertebrate visual systems and the experimental recreation of an archosaur rod visual pigment. The ancestral archosaurs gave rise to several notable lineages of diapsid reptiles, including the birds and the dinosaurs, and would have existed over 200 MYA. What little is known of their physiology comes from fossil remains, and inference based on the biology of their living descendants. Despite its age, an ancestral archosaur pigment was successfully recreated in the lab, and showed interesting properties of its wavelength sensitivity that may have implications for the visual capabilities of the ancestral archosaurs in dim light.
Blom, Mozes P K
2015-08-05
Recently developed molecular methods enable geneticists to target and sequence thousands of orthologous loci and infer evolutionary relationships across the tree of life. Large numbers of genetic markers benefit species tree inference but visual inspection of alignment quality, as traditionally conducted, is challenging with thousands of loci. Furthermore, due to the impracticality of repeated visual inspection with alternative filtering criteria, the potential consequences of using datasets with different degrees of missing data remain only nominally explored in most empirical phylogenomic studies. In this short communication, I describe a flexible high-throughput pipeline designed to assess alignment quality and filter exonic sequence data for subsequent inference. The stringency criteria for alignment quality and missing data can be adapted based on the expected level of sequence divergence. Each alignment is automatically evaluated based on the stringency criteria specified, significantly reducing the number of alignments that require visual inspection. By developing a rapid method for alignment filtering and quality assessment, the consistency of phylogenetic estimation based on exonic sequence alignments can be further explored across distinct inference methods, while accounting for different degrees of missing data.
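The author's pipeline itself is not reproduced here; the sketch below illustrates the general shape of such alignment filtering, applying example stringency criteria (taxon count, equal aligned lengths, proportion of missing characters) to FASTA alignments in a hypothetical folder. The paths and thresholds are placeholders.

```python
import glob
import shutil

def read_fasta(path):
    """Minimal FASTA reader returning {name: sequence}."""
    seqs, name = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0] if line[1:].strip() else "unnamed"
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line.upper())
    return {k: "".join(v) for k, v in seqs.items()}

def passes_filters(seqs, max_missing=0.30, min_taxa=4, min_length=150):
    """Example stringency criteria; the thresholds are arbitrary."""
    if len(seqs) < min_taxa:
        return False
    lengths = {len(s) for s in seqs.values()}
    if len(lengths) != 1 or lengths.pop() < min_length:   # must be aligned
        return False
    missing = sum(s.count("-") + s.count("N") + s.count("?")
                  for s in seqs.values())
    total = sum(len(s) for s in seqs.values())
    return missing / total <= max_missing

kept = 0
for path in glob.glob("alignments/*.fasta"):        # hypothetical input folder
    if passes_filters(read_fasta(path)):
        shutil.copy(path, "filtered/")              # hypothetical output folder (assumed to exist)
        kept += 1
print("alignments retained for inference:", kept)
```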
Perceptual learning as improved probabilistic inference in early sensory areas.
Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre
2011-05-01
Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
Neural Correlates of Bridging Inferences and Coherence Processing
ERIC Educational Resources Information Center
Kim, Sung-il; Yoon, Misun; Kim, Wonsik; Lee, Sunyoung; Kang, Eunjoo
2012-01-01
We explored the neural correlates of bridging inferences and coherence processing during story comprehension using Positron Emission Tomography (PET). Ten healthy right-handed volunteers were visually presented three types of stories (Strong Coherence, Weak Coherence, and Control), each consisting of three sentences. The causal connectedness among…
Process Mining for Individualized Behavior Modeling Using Wireless Tracking in Nursing Homes
Fernández-Llatas, Carlos; Benedi, José-Miguel; García-Gómez, Juan M.; Traver, Vicente
2013-01-01
The analysis of human behavior patterns is increasingly used for several research fields. The individualized modeling of behavior using classical techniques requires too much time and resources to be effective. A possible solution would be the use of pattern recognition techniques to automatically infer models to allow experts to understand individual behavior. However, traditional pattern recognition algorithms infer models that are not readily understood by human experts. This limits the capacity to benefit from the inferred models. Process mining technologies can infer models as workflows, specifically designed to be understood by experts, enabling them to detect specific behavior patterns in users. In this paper, the eMotiva process mining algorithms are presented. These algorithms filter, infer and visualize workflows. The workflows are inferred from the samples produced by an indoor location system that stores the location of a resident in a nursing home. The visualization tool is able to compare and highlight behavior patterns in order to facilitate expert understanding of human behavior. This tool was tested with nine real users that were monitored for a 25-week period. The results achieved suggest that the behavior of users is continuously evolving and changing and that this change can be measured, allowing for behavioral change detection. PMID:24225907
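The eMotiva algorithms themselves are not shown in the abstract; the sketch below demonstrates only the simplest process-mining building block they relate to, counting directly-follows transitions from a hypothetical room-level location log, from which a workflow-style graph can be rendered.

```python
from collections import Counter, defaultdict

# Hypothetical location log for one resident: (timestamp in minutes, room).
log = [(0, "bedroom"), (12, "bathroom"), (25, "bedroom"), (430, "dining_room"),
       (470, "lounge"), (650, "dining_room"), (700, "bedroom")]

def directly_follows(events):
    """Count room-to-room transitions (the simplest process-mining primitive).

    Real process-mining algorithms, including workflow-inference tools like the
    one described above, build richer models, e.g. with parallelism and loops.
    """
    transitions = Counter()
    for (_, a), (_, b) in zip(events, events[1:]):
        if a != b:
            transitions[(a, b)] += 1
    return transitions

def render(transitions):
    out = defaultdict(list)
    for (a, b), n in sorted(transitions.items()):
        out[a].append(f"{b} (x{n})")
    for a, targets in out.items():
        print(f"{a} -> " + ", ".join(targets))

render(directly_follows(log))
```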
IMGui-A Desktop GUI Application for Isolation with Migration Analyses.
Knoblauch, Jared; Sethuraman, Arun; Hey, Jody
2017-02-01
The Isolation with Migration (IM) programs (e.g., IMa2) have been utilized extensively by evolutionary biologists for model-based inference of demographic parameters including effective population sizes, migration rates, and divergence times. Here, we describe a graphical user interface for the latest IM program. IMGui provides a comprehensive set of tools for performing demographic analyses, tracking progress of runs, and visualizing results. Developed using Node.js and the Electron framework, IMGui is an application that runs on any desktop operating system, and is available for download at https://github.com/jaredgk/IMgui-electron-packages. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
GWVis: A Tool for Comparative Ground-Water Data Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Best, Daniel M.; Lewis, Robert R.
2010-11-01
The Ground-Water Visualization application (GWVis) presents ground-water data visually in order to educate the public on ground-water issues. It is also intended for presentations to government and other funding agencies. Current three dimensional models of ground-water are overly complex, while the two dimensional representations (i.e., on paper) are neither comprehensive nor engaging. At present, GWVis operates on water head elevation data over a given time span, together with a matching (fixed) underlying geography. Two elevation scenarios are compared with each other, typically a control data set (actual field data) and a simulation. Scenario comparison can be animated for the time span provided. We developed GWVis using the Python programming language, associated libraries, and pyOpenGL extension packages to improve performance and control of attributes of the model (such as color, positioning, scale, and interpolation). GWVis bridges the gap between two dimensional and dynamic three dimensional research visualizations by providing an intuitive, interactive design that allows participants to view the model from different perspectives and to infer information about scenarios. By incorporating scientific data in an environment that can be easily understood, GWVis allows the information to be presented to a large audience base.
High-resolution eye tracking using V1 neuron activity
McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.
2014-01-01
Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783
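As a toy illustration of model-based eye-position inference (a 1-D stand-in, not the paper's multi-electrode recordings or fitted nonlinear stimulus models), the sketch below simulates Poisson responses from a bank of Gaussian receptive fields and recovers the eye offset by maximizing the population log-likelihood over candidate shifts. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

positions = np.linspace(-2.0, 2.0, 401)                 # visual field (deg)
rf_centers = np.linspace(-1.5, 1.5, 25)                 # model V1 population
rf_width = 0.2
dx = positions[1] - positions[0]

def predicted_rates(eye_shift):
    """Expected spike counts if the retinal image is displaced by eye_shift."""
    bar = np.exp(-0.5 * ((positions - eye_shift) / 0.3) ** 2)   # bright bar stimulus
    rf = np.exp(-0.5 * ((positions[None, :] - rf_centers[:, None]) / rf_width) ** 2)
    return 1.0 + 20.0 * (rf @ bar) * dx                 # baseline + stimulus-driven rate

true_shift = 0.12                                       # unknown fixational offset (deg)
n_bins = 20
spikes = rng.poisson(predicted_rates(true_shift),
                     size=(n_bins, rf_centers.size)).sum(axis=0)

# Grid search for the eye position that maximizes the Poisson log-likelihood.
candidates = np.linspace(-0.5, 0.5, 201)
loglik = [np.sum(spikes * np.log(n_bins * predicted_rates(s))
                 - n_bins * predicted_rates(s)) for s in candidates]
print("inferred eye shift (deg):", candidates[int(np.argmax(loglik))])
```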
Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.
Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús
2008-10-01
Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the maximum specificity set cover (MSSC) approach, and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and be simple to use. An example of inference in a signaling network of myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website. http://cytoprophet.cse.nd.edu.
ERIC Educational Resources Information Center
Vlacholia, Maria; Vosniadou, Stella; Roussos, Petros; Salta, Katerina; Kazi, Smaragda; Sigalas, Michael; Tzougraki, Chryssa
2017-01-01
We present two studies that investigated the adoption of visual/spatial and analytic strategies by individuals at different levels of expertise in the area of organic chemistry, using the Visual Analytic Chemistry Task (VACT). The VACT allows the direct detection of analytic strategy use without drawing inferences about underlying mental…
ERIC Educational Resources Information Center
Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.
2008-01-01
Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…
Malle, Bertram F; Holbrook, Jess
2012-04-01
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences 1 at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether 4 major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences-from intentionality and desire to belief to personality-that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.
A Simple Model-Based Approach to Inferring and Visualizing Cancer Mutation Signatures
Shiraishi, Yuichi; Tremmel, Georg; Miyano, Satoru; Stephens, Matthew
2015-01-01
Recent advances in sequencing technologies have enabled the production of massive amounts of data on somatic mutations from cancer genomes. These data have led to the detection of characteristic patterns of somatic mutations or “mutation signatures” at an unprecedented resolution, with the potential for new insights into the causes and mechanisms of tumorigenesis. Here we present new methods for modelling, identifying and visualizing such mutation signatures. Our methods greatly simplify mutation signature models compared with existing approaches, reducing the number of parameters by orders of magnitude even while increasing the contextual factors (e.g. the number of flanking bases) that are accounted for. This improves both sensitivity and robustness of inferred signatures. We also provide a new intuitive way to visualize the signatures, analogous to the use of sequence logos to visualize transcription factor binding sites. We illustrate our new method on somatic mutation data from urothelial carcinoma of the upper urinary tract, and a larger dataset from 30 diverse cancer types. The results illustrate several important features of our methods, including the ability of our new visualization tool to clearly highlight the key features of each signature, the improved robustness of signature inferences from small sample sizes, and more detailed inference of signature characteristics such as strand biases and sequence context effects at the base two positions 5′ to the mutated site. The overall framework of our work is based on probabilistic models that are closely connected with “mixed-membership models” which are widely used in population genetic admixture analysis, and in machine learning for document clustering. We argue that recognizing these relationships should help improve understanding of mutation signature extraction problems, and suggests ways to further improve the statistical methods. Our methods are implemented in an R package pmsignature (https://github.com/friend1ws/pmsignature) and a web application available at https://friend1ws.shinyapps.io/pmsignature_shiny/. PMID:26630308
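pmsignature's mixed-membership model is not reproduced here; as a generic illustration of factorizing a mutation catalogue into signatures and exposures, the sketch below runs a plain multiplicative-update NMF on simulated 96-context counts. All sizes and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated mutation catalogue: 50 tumours x 96 trinucleotide mutation contexts.
# (pmsignature models context features with a mixed-membership model; plain NMF
# is used here only to illustrate the general idea of signature extraction.)
n_tumours, n_contexts, n_signatures = 50, 96, 3
true_sigs = rng.dirichlet(np.full(n_contexts, 0.1), size=n_signatures)
exposures = rng.gamma(2.0, 50.0, size=(n_tumours, n_signatures))
counts = rng.poisson(exposures @ true_sigs)

def nmf(V, k, n_iter=500, eps=1e-9):
    """Multiplicative-update NMF: V is approximated by W @ H, all entries non-negative."""
    n, m = V.shape
    W, H = rng.random((n, k)) + eps, rng.random((k, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    H /= H.sum(axis=1, keepdims=True)           # rows of H are signatures
    return W, H

W, H = nmf(counts.astype(float), n_signatures)
# Compare recovered signatures to the simulated ones via cosine similarity.
cos = (H @ true_sigs.T) / (np.linalg.norm(H, axis=1)[:, None]
                           * np.linalg.norm(true_sigs, axis=1)[None, :])
print(np.round(cos.max(axis=1), 2))             # each near 1.0 if recovery worked
```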
Infant Visual Expectations: Advances and Issues.
ERIC Educational Resources Information Center
Haith, Marshall M.; Wass, Tara S.; Adler, Scott A.
1997-01-01
Speculates on underlying processes for the reaction time variance and age differences in anticipation latency using the Visual Expectation Paradigm. Discusses the dichotomization of reactive and anticipatory behavior, limitations of longitudinal designs, drawbacks in using standard procedures and materials, and inferences that can be made…
Filling in the Gaps: Memory Implications for Inferring Missing Content in Graphic Narratives
ERIC Educational Resources Information Center
Magliano, Joseph P.; Kopp, Kristopher; Higgs, Karyn; Rapp, David N.
2017-01-01
Visual narratives, including graphic novels, illustrated instructions, and picture books, convey event sequences constituting a plot but cannot depict all events that make up the plot. Viewers must generate inferences that fill the gaps between explicitly shown images. This study explored the inferential products and memory implications of…
Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E
2016-01-01
Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI participated in a repeated-measures, between-groups design. Participants were asked to match images to sentences that either conveyed explicit (i.e., main action or background) or inferential (i.e., physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI when extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.
Erdogan, Goker; Yildirim, Ilker; Jacobs, Robert A.
2015-01-01
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models—that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model’s percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects’ ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception. PMID:26554704
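The three ingredients listed above (a modality-independent representation, sensory-specific forward models, and an inference algorithm that inverts them) can be miniaturized to a single latent shape parameter. The grid-based sketch below is only a toy analogue with invented forward models and noise levels, not the probabilistic-grammar model evaluated in the paper; it merely illustrates why inverting consistent forward models yields modality-invariant estimates.

    import numpy as np

    rng = np.random.default_rng(4)
    widths = np.linspace(0.5, 3.0, 251)   # candidate values of a latent, modality-independent width

    # Toy forward models: how a given width would appear to vision and to touch.
    def visual_forward(w):
        return 2.0 * w                    # e.g. projected extent at a fixed viewing distance

    def haptic_forward(w):
        return w + 0.1                    # e.g. grip aperture when grasping

    true_width = 1.8
    visual_obs = visual_forward(true_width) + rng.normal(0, 0.2)
    haptic_obs = haptic_forward(true_width) + rng.normal(0, 0.1)

    def posterior(obs, forward, noise_sd):
        likelihood = np.exp(-0.5 * ((obs - forward(widths)) / noise_sd) ** 2)
        return likelihood / likelihood.sum()

    p_vision = posterior(visual_obs, visual_forward, 0.2)
    p_touch = posterior(haptic_obs, haptic_forward, 0.1)
    p_both = p_vision * p_touch / np.sum(p_vision * p_touch)

    for p in (p_vision, p_touch, p_both):
        print(round(widths[np.argmax(p)], 2))   # the three estimates agree on roughly the same width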
Color inference in visual communication: the meaning of colors in recycling.
Schloss, Karen B; Lessard, Laurent; Walmsley, Charlotte S; Foley, Kathleen
2018-01-01
People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object pairings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.
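The difference between the local and global assignment hypotheses can be made concrete with a toy optimization: local assignment lets each object independently grab its most associated color, while global assignment seeks the one-to-one mapping that maximizes total association strength. In the sketch below the association values are invented and SciPy's Hungarian solver simply stands in for whatever optimization one might use; it is not the authors' analysis code.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Rows: objects (paper, glass, trash); columns: bin colors (white, green, black).
    # Entries are hypothetical color-object association strengths in [0, 1].
    assoc = np.array([
        [0.9, 0.4, 0.2],   # paper
        [0.8, 0.7, 0.1],   # glass
        [0.5, 0.3, 0.6],   # trash
    ])

    # Local assignment: each object independently takes its most associated color.
    local = assoc.argmax(axis=1)                 # paper and glass both want 'white'

    # Global assignment: one-to-one mapping maximizing the summed association strength.
    rows, cols = linear_sum_assignment(-assoc)   # negate because the solver minimizes

    print("local :", local)                      # [0 0 2]
    print("global:", cols)                       # [0 1 2] -> glass accepts its weaker 'green' match

Note how the global solution assigns glass to a less strongly associated color, which is the kind of behavior the abstract reports participants producing.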
What are they up to? The role of sensory evidence and prior knowledge in action understanding.
Chambon, Valerian; Domenech, Philippe; Pacherie, Elisabeth; Koechlin, Etienne; Baraduc, Pierre; Farrer, Chlöé
2011-02-18
Explaining or predicting the behaviour of our conspecifics requires the ability to infer the intentions that motivate it. Such inferences are assumed to rely on two types of information: (1) the sensory information conveyed by movement kinematics and (2) the observer's prior expectations--acquired from past experience or derived from prior knowledge. However, the respective contribution of these two sources of information is still controversial. This controversy stems in part from the fact that "intention" is an umbrella term that may embrace various sub-types each being assigned different scopes and targets. We hypothesized that variations in the scope and target of intentions may account for variations in the contribution of visual kinematics and prior knowledge to the intention inference process. To test this hypothesis, we conducted four behavioural experiments in which participants were instructed to identify different types of intention: basic intentions (i.e. simple goal of a motor act), superordinate intentions (i.e. general goal of a sequence of motor acts), or social intentions (i.e. intentions accomplished in a context of reciprocal interaction). For each of the above-mentioned intentions, we varied (1) the amount of visual information available from the action scene and (2) participant's prior expectations concerning the intention that was more likely to be accomplished. First, we showed that intentional judgments depend on a consistent interaction between visual information and participant's prior expectations. Moreover, we demonstrated that this interaction varied according to the type of intention to be inferred, with participant's priors rather than perceptual evidence exerting a greater effect on the inference of social and superordinate intentions. The results are discussed by appealing to the specific properties of each type of intention considered and further interpreted in the light of a hierarchical model of action representation.
Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G
2010-01-01
This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.
Right Hemisphere Dominance in Visual Statistical Learning
ERIC Educational Resources Information Center
Roser, Matthew E.; Fiser, Jozsef; Aslin, Richard N.; Gazzaniga, Michael S.
2011-01-01
Several studies report a right hemisphere advantage for visuospatial integration and a left hemisphere advantage for inferring conceptual knowledge from patterns of covariation. The present study examined hemispheric asymmetry in the implicit learning of new visual feature combinations. A split-brain patient and normal control participants viewed…
NASA Technical Reports Server (NTRS)
1972-01-01
The growth of common as well as emerging visual display technologies is surveyed. The major inference is that contemporary society is rapidly growing ever more reliant on visual displays for a variety of purposes. Because of its unique mission requirements, the National Aeronautics and Space Administration has contributed in an important and specific way to the growth of visual display technology. These contributions are characterized by the use of computer-driven visual displays to provide an enormous amount of information concisely, rapidly and accurately.
Enhancing the Teaching and Learning of Mathematical Visual Images
ERIC Educational Resources Information Center
Quinnell, Lorna
2014-01-01
The importance of mathematical visual images is indicated by the introductory paragraph in the Statistics and Probability content strand of the Australian Curriculum, which draws attention to the importance of learners developing skills to analyse and draw inferences from data and "represent, summarise and interpret data and undertake…
Representing and Inferring Visual Perceptual Skills in Dermatological Image Understanding
ERIC Educational Resources Information Center
Li, Rui
2013-01-01
Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. Eliciting and representing their visual strategies and some aspects of domain knowledge will benefit a wide range of studies and applications. For example, image understanding may be…
Convex Clustering: An Attractive Alternative to Hierarchical Clustering
Chen, Gary K.; Chi, Eric C.; Ranola, John Michael O.; Lange, Kenneth
2015-01-01
The primary goal in cluster analysis is to discover natural groupings of objects. The field of cluster analysis is crowded with diverse methods that make special assumptions about data and address different scientific aims. Despite its shortcomings in accuracy, hierarchical clustering is the dominant clustering method in bioinformatics. Biologists find the trees constructed by hierarchical clustering visually appealing and in tune with their evolutionary perspective. Hierarchical clustering operates on multiple scales simultaneously. This is essential, for instance, in transcriptome data, where one may be interested in making qualitative inferences about how lower-order relationships like gene modules lead to higher-order relationships like pathways or biological processes. The recently developed method of convex clustering preserves the visual appeal of hierarchical clustering while ameliorating its propensity to make false inferences in the presence of outliers and noise. The solution paths generated by convex clustering reveal relationships between clusters that are hidden by static methods such as k-means clustering. The current paper derives and tests a novel proximal distance algorithm for minimizing the objective function of convex clustering. The algorithm separates parameters, accommodates missing data, and supports prior information on relationships. Our program CONVEXCLUSTER incorporating the algorithm is implemented on ATI and nVidia graphics processing units (GPUs) for maximal speed. Several biological examples illustrate the strengths of convex clustering and the ability of the proximal distance algorithm to handle high-dimensional problems. CONVEXCLUSTER can be freely downloaded from the UCLA Human Genetics web site at http://www.genetics.ucla.edu/software/ PMID:25965340
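The convex clustering objective referred to above has a compact form: a quadratic fidelity term plus a fusion penalty on pairwise centroid differences. The sketch below solves a tiny instance with a generic convex solver; it is not the proximal distance algorithm or the GPU-based CONVEXCLUSTER code described in the abstract, and the data, uniform pair weights and penalty value are invented.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])  # two loose groups
    n, p = X.shape
    gamma = 0.1   # fusion penalty strength

    U = cp.Variable((n, p))   # one centroid per data point
    fidelity = 0.5 * cp.sum_squares(X - U)
    fusion = sum(cp.norm(U[i, :] - U[j, :], 2) for i in range(n) for j in range(i + 1, n))
    cp.Problem(cp.Minimize(fidelity + gamma * fusion)).solve()

    # Centroids within each group are pulled toward one another; sweeping gamma upward
    # fuses them progressively, which is what traces out the solution path.
    print(np.round(U.value, 2))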
Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics
Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.
2011-01-01
Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
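The bias pattern reported here is what standard Bayesian fusion predicts when a noisy orientation measurement is combined with a prior peaked at the cardinals. The grid-based sketch below is a generic illustration, not the observer model estimated in the paper; the prior shape and the noise levels are invented.

    import numpy as np

    theta = np.linspace(0, np.pi, 360, endpoint=False)   # orientation grid (radians)

    # Hypothetical prior peaked at the cardinal orientations (0 and pi/2).
    prior = 1.0 + 2.0 * np.cos(2 * theta) ** 2
    prior /= prior.sum()

    def posterior_mean(true_theta, noise_sd):
        d = np.angle(np.exp(2j * (theta - true_theta))) / 2       # circular distance on orientations
        likelihood = np.exp(-0.5 * (d / noise_sd) ** 2)
        post = likelihood * prior
        post /= post.sum()
        return np.angle(np.sum(post * np.exp(2j * theta))) / 2 % np.pi   # circular posterior mean

    oblique = np.deg2rad(70)
    print(np.rad2deg(posterior_mean(oblique, noise_sd=np.deg2rad(3))))    # close to 70 degrees
    print(np.rad2deg(posterior_mean(oblique, noise_sd=np.deg2rad(20))))   # biased toward the 90-degree cardinal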
Modeling the Round Earth through Diagrams
NASA Astrophysics Data System (ADS)
Padalkar, Shamin; Ramadas, Jayashree
Earlier studies have found that students, including adults, have problems understanding the scientifically accepted model of the Sun-Earth-Moon system and explaining day-to-day astronomical phenomena based on it. We have been examining such problems in the context of recent research on visual-spatial reasoning. Working with middle school students in India, we have developed a pedagogical sequence to build the mental model of the Earth and tried it in three schools for socially and educationally disadvantaged students. This pedagogy was developed on the basis of (1) a reading of current research in imagery and visual-spatial reasoning and (2) students' difficulties identified during the course of pretests and interviews. Visual-spatial tools such as concrete (physical) models, gestures, and diagrams are used extensively in the teaching sequence. The building of a mental model is continually integrated with drawing inferences to understand and explain everyday phenomena. The focus of this article is inferences drawn with diagrams.
The BioCyc collection of microbial genomes and metabolic pathways.
Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi
2017-08-17
BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Benard, Julie; Giurfa, Martin
2004-01-01
We asked whether honeybees, "Apis mellifera," could solve a transitive inference problem. Individual free-flying bees were conditioned with four overlapping premise pairs of five visual patterns in a multiple discrimination task (A+ vs. B-, B+ vs. C-, C+ vs. D-, D+ vs. E-, where + and - indicate sucrose reward or absence of it,…
Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.
Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna
2016-09-01
Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features much different parameters: People often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals-estimated using two reinforcement learning models-tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
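The trial-by-trial learning signals mentioned above are prediction errors of the kind produced by simple delta-rule updates. The sketch below is a generic Rescorla-Wagner-style caricature of learning which cue is reliable for a given target, not either of the two reinforcement learning models fitted in the paper; the cue reliabilities and the learning rate are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.2                           # learning rate
    w = {"visual": 0.5, "verbal": 0.5}    # learned reliability of each cue for one target

    # For this hypothetical target, the visual cue matches the felt emotion 90% of
    # the time and the verbal cue only 30% of the time.
    for trial in range(60):
        cues_correct = {"visual": rng.random() < 0.9, "verbal": rng.random() < 0.3}
        for cue, correct in cues_correct.items():
            outcome = 1.0 if correct else 0.0
            w[cue] += alpha * (outcome - w[cue])   # prediction-error (delta-rule) update

    print(w)   # the visual weight drifts toward ~0.9, the verbal weight toward ~0.3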
Specificity and timescales of cortical adaptation as inferences about natural movie statistics.
Snow, Michoel; Coen-Cagli, Ruben; Schwartz, Odelia
2016-10-01
Adaptation is a phenomenological umbrella term under which a variety of temporal contextual effects are grouped. Previous models have shown that some aspects of visual adaptation reflect optimal processing of dynamic visual inputs, suggesting that adaptation should be tuned to the properties of natural visual inputs. However, the link between natural dynamic inputs and adaptation is poorly understood. Here, we extend a previously developed Bayesian modeling framework for spatial contextual effects to the temporal domain. The model learns temporal statistical regularities of natural movies and links these statistics to adaptation in primary visual cortex via divisive normalization, a ubiquitous neural computation. In particular, the model divisively normalizes the present visual input by the past visual inputs only to the degree that these are inferred to be statistically dependent. We show that this flexible form of normalization reproduces classical findings on how brief adaptation affects neuronal selectivity. Furthermore, prior knowledge acquired by the Bayesian model from natural movies can be modified by prolonged exposure to novel visual stimuli. We show that this updating can explain classical results on contrast adaptation. We also simulate the recent finding that adaptation maintains population homeostasis, namely, a balanced level of activity across a population of neurons with different orientation preferences. Consistent with previous disparate observations, our work further clarifies the influence of stimulus-specific and neuronal-specific normalization signals in adaptation.
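The core computation, dividing the present input by past inputs only to the degree that they are inferred to be statistically dependent, can be caricatured in a few lines. This is a toy illustration rather than the fitted Bayesian model; the dependence weight is handed in directly instead of being inferred from movie statistics.

    import numpy as np

    def adapted_response(present_drive, past_drives, dependence, sigma=0.1):
        """Divisively normalize the present drive by past drive, gated by inferred dependence."""
        past_energy = np.mean(np.square(past_drives))
        return present_drive / (sigma + dependence * past_energy)

    drive = 1.0
    adapting_past = np.full(20, 1.0)      # prolonged exposure to a statistically dependent stimulus
    unrelated_past = np.full(20, 1.0)     # same energy, but inferred to be independent

    print(adapted_response(drive, adapting_past, dependence=1.0))   # strongly suppressed
    print(adapted_response(drive, unrelated_past, dependence=0.0))  # barely changed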
The extent of visual space inferred from perspective angles
Erkelens, Casper J.
2015-01-01
Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567
Ferguson, Heather J; Apperly, Ian; Ahmad, Jumana; Bindemann, Markus; Cane, James
2015-06-01
Interpreting other people's actions relies on an understanding of their current mental states (e.g. beliefs, desires and intentions). In this paper, we distinguish between listeners' ability to infer others' perspectives and their explicit use of this knowledge to predict subsequent actions. In a visual-world study, two groups of participants (passive observers vs. active participants) watched short videos, depicting transfer events, where one character ('Jane') either held a true or false belief about an object's location. We tracked participants' eye-movements around the final visual scene, time-locked to related auditory descriptions (e.g. "Jane will look for the chocolates in the container on the left".). Results showed that active participants had already inferred the character's belief in the 1s preview period prior to auditory onset, before it was possible to use this information to predict an outcome. Moreover, they used this inference to correctly anticipate reference to the object's initial location on false belief trials at the earliest possible point (i.e. from "Jane" onwards). In contrast, passive observers only showed evidence of a belief inference from the onset of "Jane", and did not show reliable use of this inference to predict Jane's behaviour on false belief trials until much later, when the location ("left/right") was auditorily available. These results show that active engagement in a task activates earlier inferences about others' perspectives, and drives immediate use of this information to anticipate others' actions, compared to passive observers, who are susceptible to influences from egocentric or reality biases. Finally, we review evidence that using other people's perspectives to predict their behaviour is more cognitively effortful than simply using one's own. Copyright © 2015 Elsevier B.V. All rights reserved.
Network portal: a database for storage, analysis and visualization of biological networks
Turkarslan, Serdar; Wurtmann, Elisabeth J.; Wu, Wei-Ju; Jiang, Ning; Bare, J. Christopher; Foley, Karen; Reiss, David J.; Novichkov, Pavel; Baliga, Nitin S.
2014-01-01
The ease of generating high-throughput data has enabled investigations into organismal complexity at the systems level through the inference of networks of interactions among the various cellular components (genes, RNAs, proteins and metabolites). The wider scientific community, however, currently has limited access to tools for network inference, visualization and analysis because these tasks often require advanced computational knowledge and expensive computing resources. We have designed the network portal (http://networks.systemsbiology.net) to serve as a modular database for the integration of user uploaded and public data, with inference algorithms and tools for the storage, visualization and analysis of biological networks. The portal is fully integrated into the Gaggle framework to seamlessly exchange data with desktop and web applications and to allow the user to create, save and modify workspaces, and it includes social networking capabilities for collaborative projects. While the current release of the database contains networks for 13 prokaryotic organisms from diverse phylogenetic clades (4678 co-regulated gene modules, 3466 regulators and 9291 cis-regulatory motifs), it will be rapidly populated with prokaryotic and eukaryotic organisms as relevant data become available in public repositories and through user input. The modular architecture, simple data formats and open API support community development of the portal. PMID:24271392
Modeling human pilot cue utilization with applications to simulator fidelity assessment.
Zeyada, Y; Hess, R A
2000-01-01
An analytical investigation to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator was undertaken. Data from a NASA Ames Research Center vertical motion simulator study of a simple, single-degree-of-freedom rotorcraft bob-up/down maneuver were employed in the investigation. The study was part of a larger research effort that has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system that included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle, and the motion system. With the exception of time delays that accrued in visual scene production in the simulator, visual scene effects were not included in this study. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity that occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots who participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to identify changes in simulator fidelity for the task examined.
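For readers unfamiliar with the fuzzy-inference half of the analysis, the sketch below shows the bare mechanics of a Mamdani-style fuzzy inference step (fuzzify, fire rules, aggregate, defuzzify). It is only a generic illustration; the input variable, membership functions and rules are invented and are not those used in the fidelity study.

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet at a and c and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    # Hypothetical input: mismatch between visual and motion cue onsets (arbitrary units).
    mismatch = 0.35
    small = tri(mismatch, -0.5, 0.0, 0.5)    # rule 1: if mismatch is SMALL then fidelity is HIGH
    large = tri(mismatch, 0.0, 1.0, 2.0)     # rule 2: if mismatch is LARGE then fidelity is LOW

    fidelity = np.linspace(0.0, 10.0, 101)   # output universe (arbitrary fidelity rating)
    high = tri(fidelity, 5.0, 10.0, 15.0)
    low = tri(fidelity, -5.0, 0.0, 5.0)

    aggregate = np.maximum(np.minimum(small, high), np.minimum(large, low))  # Mamdani min/max
    crisp_rating = (aggregate * fidelity).sum() / aggregate.sum()            # centroid defuzzification
    print(round(crisp_rating, 2))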
A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators
NASA Technical Reports Server (NTRS)
Zeyada, Y.; Hess, R. A.
1999-01-01
An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots that participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.
Visual guidance of mobile platforms
NASA Astrophysics Data System (ADS)
Blissett, Rodney J.
1993-12-01
Two systems are described and results presented demonstrating aspects of real-time visual guidance of autonomous mobile platforms. The first approach incorporates prior knowledge in the form of rigid geometrical models linking visual references within the environment. The second approach is based on a continuous synthesis of information extracted from image tokens to generate a coarse-grained world model, from which potential obstacles are inferred. The use of these techniques in workplace applications is discussed.
Explaining seeing? Disentangling qualia from perceptual organization.
Ibáñez, Agustin; Bekinschtein, Tristan
2010-09-01
Visual perception and integration seem to play an essential role in our conscious phenomenology. Relatively local neural processing of reentrant nature may explain several visual integration processes (feature binding or figure-ground segregation, object recognition, inference, competition), even without attention or cognitive control. Based on the above statements, should the neural signatures of visual integration (via reentrant process) be non-reportable phenomenological qualia? We argue that qualia are not required to understand this perceptual organization.
Cognitive and psychological science insights to improve climate change data visualization
NASA Astrophysics Data System (ADS)
Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.
2016-12-01
Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.
The Effects of Distance and Intervening Obstacles on Visual Inference in Blind and Sighted Children.
ERIC Educational Resources Information Center
Bigelow, Ann E.
1991-01-01
Blind and visually impaired children, and children with normal sight, were asked whether an observer could see a toy from varying distances under conditions in which obstacles did or did not intervene between the toy and the observer. Blind children took longer than other children to master the task. (BC)
Audio-Visual Speaker Diarization Based on Spatiotemporal Bayesian Fusion.
Gebru, Israel D; Ba, Sileye; Li, Xiaofei; Horaud, Radu
2018-05-01
Speaker diarization consists of assigning speech signals to people engaged in a dialogue. An audio-visual spatiotemporal diarization model is proposed. The model is well suited for challenging scenarios that consist of several participants engaged in multi-party interaction while they move around and turn their heads towards the other participants rather than facing the cameras and the microphones. Multiple-person visual tracking is combined with multiple speech-source localization in order to tackle the speech-to-person association problem. The latter is solved within a novel audio-visual fusion method on the following grounds: binaural spectral features are first extracted from a microphone pair, then a supervised audio-visual alignment technique maps these features onto an image, and finally a semi-supervised clustering method assigns binaural spectral features to visible persons. The main advantage of this method over previous work is that it processes in a principled way speech signals uttered simultaneously by multiple persons. The diarization itself is cast into a latent-variable temporal graphical model that infers speaker identities and speech turns, based on the output of an audio-visual association process, executed at each time slice, and on the dynamics of the diarization variable itself. The proposed formulation yields an efficient exact inference procedure. A novel dataset, that contains audio-visual training data as well as a number of scenarios involving several participants engaged in formal and informal dialogue, is introduced. The proposed method is thoroughly tested and benchmarked with respect to several state-of-the art diarization algorithms.
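Because the temporal model is chain-structured, exact filtering over speech turns is cheap once per-frame audio-visual association scores are available. The sketch below is a generic forward-algorithm analogue, not the authors' model; the evidence values and the transition stickiness are invented.

    import numpy as np

    n_speakers, n_frames = 3, 6
    rng = np.random.default_rng(2)

    # Hypothetical per-frame evidence that each visible person is the active speaker,
    # e.g. from associating binaural spectral features with visual tracks.
    evidence = rng.dirichlet(np.ones(n_speakers), size=n_frames)

    stay = 0.9   # speech turns tend to persist from frame to frame
    transition = np.full((n_speakers, n_speakers), (1 - stay) / (n_speakers - 1))
    np.fill_diagonal(transition, stay)

    alpha = evidence[0] / evidence[0].sum()        # forward filtering (uniform prior absorbed)
    for t in range(1, n_frames):
        alpha = evidence[t] * (transition.T @ alpha)
        alpha /= alpha.sum()

    print("P(active speaker at the last frame):", np.round(alpha, 3))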
Hominoid visual brain structure volumes and the position of the lunate sulcus.
de Sousa, Alexandra A; Sherwood, Chet C; Mohlberg, Hartmut; Amunts, Katrin; Schleicher, Axel; MacLeod, Carol E; Hof, Patrick R; Frahm, Heiko; Zilles, Karl
2010-04-01
It has been argued that changes in the relative sizes of visual system structures predated an increase in brain size and provide evidence of brain reorganization in hominins. However, data about the volume and anatomical limits of visual brain structures in the extant taxa phylogenetically closest to humans-the apes-remain scarce, thus complicating tests of hypotheses about evolutionary changes. Here, we analyze new volumetric data for the primary visual cortex and the lateral geniculate nucleus to determine whether or not the human brain departs from allometrically-expected patterns of brain organization. Primary visual cortex volumes were compared to lunate sulcus position in apes to investigate whether or not inferences about brain reorganization made from fossil hominin endocasts are reliable in this context. In contrast to previous studies, in which all species were relatively poorly sampled, the current study attempted to evaluate the degree of intraspecific variability by including numerous hominoid individuals (particularly Pan troglodytes and Homo sapiens). In addition, we present and compare volumetric data from three new hominoid species-Pan paniscus, Pongo pygmaeus, and Symphalangus syndactylus. These new data demonstrate that hominoid visual brain structure volumes vary more than previously appreciated. In addition, humans have relatively reduced primary visual cortex and lateral geniculate nucleus volumes as compared to allometric predictions from other hominoids. These results suggest that inferences about the position of the lunate sulcus on fossil endocasts may provide information about brain organization. Copyright 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Brandstetter, Miriam; Sandmann, Angela; Florian, Christine
2017-06-01
In the classroom, scientific content is increasingly communicated through visual forms of representation. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging to students. Yet evidence-based knowledge about students' visual reading strategies during the process of understanding pictorial information is still lacking. Therefore, 42 students aged 14-15 were asked to think aloud while trying to understand visual representations of the blood circulatory system and the patellar reflex. A category system was developed differentiating 16 categories of cognitive activities. A Principal Component Analysis revealed two underlying patterns of activities that can be interpreted as visual reading strategies: (1) inferences dominated by use of a problem-solving schema; (2) inferences dominated by recall of prior content knowledge. Each pattern consists of a specific set of cognitive activities that reflect selection, organisation and integration of pictorial information as well as different levels of expertise. The results give detailed insights into the cognitive activities of students who were required to understand the pictorial information of complex organ systems. They provide an evidence-based foundation for deriving instructional aids that can promote students' pictorial-information-based learning at different levels of expertise.
Young children's recall and reconstruction of audio and audiovisual narratives.
Gibbons, J; Anderson, D R; Smith, R; Field, D E; Fischer, C
1986-08-01
It has been claimed that the visual component of audiovisual media dominates young children's cognitive processing. This experiment examines the effects of input modality while controlling the complexity of the visual and auditory content and while varying the comprehension task (recall vs. reconstruction). 4- and 7-year-olds were presented brief stories through either audio or audiovisual media. The audio version consisted of narrated character actions and character utterances. The narrated actions were matched to the utterances on the basis of length and propositional complexity. The audiovisual version depicted the actions visually by means of stop animation instead of by auditory narrative statements. The character utterances were the same in both versions. Audiovisual input produced superior performance on explicit information in the 4-year-olds and produced more inferences at both ages. Because performance on utterances was superior in the audiovisual condition as compared to the audio condition, there was no evidence that visual input inhibits processing of auditory information. Actions were more likely to be produced by the younger children than utterances, regardless of input medium, indicating that prior findings of visual dominance may have been due to the salience of narrative action. Reconstruction, as compared to recall, produced superior depiction of actions at both ages as well as more constrained relevant inferences and narrative conventions.
Learning and Inductive Inference
1982-07-01
a set of graph grammars to describe visual scenes. Other researchers have applied graph grammars to the pattern recognition of handwritten characters… (table-of-contents fragments: 1. Issues; 2. Mostow's operationalizer; Learning from examples: 1. Issues; 2. Learning in control and pattern recognition)… articles on rote learning and advice-taking. Kenneth Clarkson contributed the article on grammatical inference…
Visual pigments of marine carnivores: pinnipeds, polar bear, and sea otter.
Levenson, David H; Ponganis, Paul J; Crognale, Michael A; Deegan, Jess F; Dizon, Andy; Jacobs, Gerald H
2006-08-01
Rod and cone visual pigments of 11 marine carnivores were evaluated. Rod, middle/long-wavelength sensitive (M/L) cone, and short-wavelength sensitive (S) cone opsin (if present) sequences were obtained from retinal mRNA. Spectral sensitivity was inferred through evaluation of known spectral tuning residues. The rod pigments of all but one of the pinnipeds were similar to those of the sea otter, polar bear, and most other terrestrial carnivores with spectral peak sensitivities (lambda(max)) of 499 or 501 nm. Similarly, the M/L cone pigments of the pinnipeds, polar bear, and otter had inferred lambda(max) of 545 to 560 nm. Only the rod opsin sequence of the elephant seal had sensitivity characteristic of adaptation for vision in the marine environment, with an inferred lambda(max) of 487 nm. No evidence of S cones was found for any of the pinnipeds. The polar bear and otter had S cones with inferred lambda(max) of approximately 440 nm. Flicker-photometric ERG was additionally used to examine the in situ sensitivities of three species of pinniped. Despite the use of conditions previously shown to evoke cone responses in other mammals, no cone responses could be elicited from any of these pinnipeds. Rod photoreceptor responses for all three species were as predicted by the genetic data.
Improved probabilistic inference as a general learning mechanism with action video games.
Green, C Shawn; Pouget, Alexandre; Bavelier, Daphne
2010-09-14
Action video game play benefits performance in an array of sensory, perceptual, and attentional tasks that go well beyond the specifics of game play [1-9]. That a training regimen may induce improvements in so many different skills is notable because the majority of studies on training-induced learning report improvements on the trained task but limited transfer to other, even closely related, tasks ([10], but see also [11-13]). Here we ask whether improved probabilistic inference may explain such broad transfer. By using a visual perceptual decision making task [14, 15], the present study shows for the first time that action video game experience does indeed improve probabilistic inference. A neural model of this task [16] establishes how changing a single parameter, namely the strength of the connections between the neural layer providing the momentary evidence and the layer integrating the evidence over time, captures improvements in action-gamers behavior. These results were established in a visual, but also in a novel auditory, task, indicating generalization across modalities. Thus, improved probabilistic inference provides a general mechanism for why action video game playing enhances performance in a wide variety of tasks. In addition, this mechanism may serve as a signature of training regimens that are likely to produce transfer of learning. Copyright © 2010 Elsevier Ltd. All rights reserved.
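The single-parameter explanation sketched in the abstract, stronger connections between the momentary-evidence layer and the integration layer, can be caricatured with a toy accumulator in which the connection gain scales the signal relative to downstream noise. This is not the neural model cited in the paper; the signal strength, noise level and sample counts are invented.

    import numpy as np

    rng = np.random.default_rng(3)

    def accuracy(gain, signal=0.1, n_samples=30, n_trials=5000):
        """Fraction of correct choices when gained momentary evidence plus unit noise is integrated."""
        momentary = gain * signal + rng.normal(0.0, 1.0, (n_trials, n_samples))
        return np.mean(momentary.sum(axis=1) > 0)

    print(accuracy(gain=1.0))   # baseline connection strength
    print(accuracy(gain=2.0))   # stronger connections -> more reliable evidence integration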
Difference to Inference: teaching logical and statistical reasoning through on-line interactivity.
Malloy, T E
2001-05-01
Difference to Inference is an on-line JAVA program that simulates theory testing and falsification through research design and data collection in a game format. The program, based on cognitive and epistemological principles, is designed to support learning of the thinking skills underlying deductive and inductive logic and statistical reasoning. Difference to Inference has database connectivity so that game scores can be counted as part of course grades.
ERIC Educational Resources Information Center
Chatterjee, Jharna; McCarrey, Michael
1989-01-01
Investigates the relationship between inferred sex role attitudes and women's participation in traditional versus nontraditional training programs. Examines the association between women's participation in a training program and their anticipation of difficulties in pursuing a nontraditional career. Examines performance differences by women with…
The Effect of Using Dynamic Mathematics Software: Cross Section and Visualization
ERIC Educational Resources Information Center
Kösa, Temel
2016-01-01
The main purpose of this study is to determine the effects of using dynamic mathematics software on pre-service mathematics teachers' ability to infer the shape of a cross section of a three-dimensional solid, as well as on their spatial visualization skills. The study employed a quasi-experimental design with a control group; the Purdue Spatial…
Phylo.io: Interactive Viewing and Comparison of Large Phylogenetic Trees on the Web.
Robinson, Oscar; Dylus, David; Dessimoz, Christophe
2016-08-01
Phylogenetic trees are pervasively used to depict evolutionary relationships. Increasingly, researchers need to visualize large trees and compare multiple large trees inferred for the same set of taxa (reflecting uncertainty in the tree inference or genuine discordance among the loci analyzed). Existing tree visualization tools are however not well suited to these tasks. In particular, side-by-side comparison of trees can prove challenging beyond a few dozen taxa. Here, we introduce Phylo.io, a web application to visualize and compare phylogenetic trees side-by-side. Its distinctive features are: highlighting of similarities and differences between two trees, automatic identification of the best matching rooting and leaf order, scalability to large trees, high usability, multiplatform support via standard HTML5 implementation, and possibility to store and share visualizations. The tool can be freely accessed at http://phylo.io and can easily be embedded in other web servers. The code for the associated JavaScript library is available at https://github.com/DessimozLab/phylo-io under an MIT open source license. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Hegarty, Mary; Canham, Matt S; Fabrikant, Sara I
2010-01-01
Three experiments examined how bottom-up and top-down processes interact when people view and make inferences from complex visual displays (weather maps). Bottom-up effects of display design were investigated by manipulating the relative visual salience of task-relevant and task-irrelevant information across different maps. Top-down effects of domain knowledge were investigated by examining performance and eye fixations before and after participants learned relevant meteorological principles. Map design and knowledge interacted such that salience had no effect on performance before participants learned the meteorological principles; however, after learning, participants were more accurate if they viewed maps that made task-relevant information more visually salient. Effects of display design on task performance were somewhat dissociated from effects of display design on eye fixations. The results support a model in which eye fixations are directed primarily by top-down factors (task and domain knowledge). They suggest that good display design facilitates performance not just by guiding where viewers look in a complex display but also by facilitating processing of the visual features that represent task-relevant information at a given display location. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
Integrated Approach to Reconstruction of Microbial Regulatory Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodionov, Dmitry A; Novichkov, Pavel S
2013-11-04
This project had the goal of developing an integrated bioinformatics platform for genome-scale inference and visualization of transcriptional regulatory networks (TRNs) in bacterial genomes. The work was done at the Sanford-Burnham Medical Research Institute (SBMRI, P.I. D.A. Rodionov) and Lawrence Berkeley National Laboratory (LBNL, co-P.I. P.S. Novichkov). The developed computational resources include: (1) the RegPredict web platform for TRN inference and regulon reconstruction in microbial genomes, and (2) the RegPrecise database for collection, visualization and comparative analysis of transcriptional regulons reconstructed by comparative genomics. These analytical resources were selected as key components in the DOE Systems Biology KnowledgeBase (SBKB). The high-quality data accumulated in RegPrecise will provide essential datasets of reference regulons in diverse microbes to enable automatic reconstruction of draft TRNs in newly sequenced genomes. We outline our progress toward the three aims of this grant proposal, which were: develop an integrated platform for genome-scale regulon reconstruction; infer regulatory annotations in several groups of bacteria and build reference collections of microbial regulons; and develop a KnowledgeBase on microbial transcriptional regulation.
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
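The timestamp adjustment mentioned near the end of this abstract can be illustrated with a small routine that shifts a child workstation's clock just enough that no message appears to arrive before it was sent. This is only a sketch of the constant-skew (zero-drift) idea, not AIMS' calibration code; the message times and minimum latency are invented.

    def skew_offset(messages, min_latency=0.0001):
        """messages: (send_time_on_parent, recv_time_on_child) pairs for parent-to-child traffic.

        Returns the constant offset to add to all child timestamps so that every
        receive occurs at least min_latency after its send.
        """
        worst = max(send + min_latency - recv for send, recv in messages)
        return max(0.0, worst)

    # One message appears to travel backwards in time (the child clock lags the parent clock).
    msgs = [(10.000, 9.998), (10.050, 10.049), (10.100, 10.103)]
    offset = skew_offset(msgs)
    print(offset)                                    # shift the child clock forward by ~0.0021
    print([(s, r + offset) for s, r in msgs])        # no receive now precedes its send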
Inferring Network Controls from Topology Using the Chomp Database
2015-12-03
AFRL-AFOSR-VA-TR-2016-0033, Inferring Network Controls from Topology Using the Chomp Database; John Harer, Duke University; Final Report, 12/03/2015; Grant FA9550-10-1-0436. ...area of Topological Data Analysis (TDA) and its application to dynamical systems. The role of this work in the Complex Networks program is based on...
Congruent and Opposite Neurons as Partners in Multisensory Integration and Segregation
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hao; Wong, K. Y. Michael; Wang, He; Wu, Si
Experiments have revealed that, in the brain areas where visual and vestibular cues are integrated to infer heading direction, two types of neurons exist in roughly equal numbers: congruent cells, which respond similarly to visual and vestibular cues, and opposite cells, which respond to them in opposing ways. Congruent neurons are known to be responsible for cue integration, but the computational role of opposite neurons remains largely unknown. We propose that opposite neurons may serve to encode the disparity information between cues that is necessary for multisensory segregation. We build a computational model composed of two reciprocally coupled modules, each consisting of groups of congruent and opposite neurons. Our model reproduces the characteristics of congruent and opposite neurons, and demonstrates that in each module, congruent and opposite neurons can jointly achieve optimal multisensory information integration and segregation. This study sheds light on how the brain implements optimal multisensory integration and segregation concurrently in a distributed manner. This work is supported by the Research Grants Council of Hong Kong (N _HKUST606/12, 605813, and 16322616) and National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).
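The following toy sketch illustrates the coding idea only (it is not the published network model): congruent-like units sum the two cue signals, supporting integration, while opposite-like units difference them, carrying the disparity needed for segregation. The Gaussian population tuning, gains, and cue headings are assumed for illustration.

```python
import numpy as np

# Toy illustration (not the published model) of congruent vs. opposite coding.
def population(theta_deg, preferred_deg, gain, width_deg=30.0):
    """Gaussian tuning curve over heading direction (wraparound ignored)."""
    return gain * np.exp(-0.5 * ((theta_deg - preferred_deg) / width_deg) ** 2)

preferred = np.linspace(-180, 180, 91)               # preferred headings (deg)
visual = population(10.0, preferred, gain=1.0)       # visual cue says ~10 deg
vestibular = population(-20.0, preferred, gain=0.8)  # vestibular cue says ~-20 deg

congruent = visual + vestibular  # integration-like readout (cues combined)
opposite = visual - vestibular   # disparity-like readout (cue-conflict signal)

print("integrated heading estimate:", preferred[np.argmax(congruent)])
print("peak disparity signal:", round(float(np.max(np.abs(opposite))), 3))
```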
Flow structures around a beetle in a tethered flight
NASA Astrophysics Data System (ADS)
Lee, Boogeon; Oh, Sehyeong; Park, Hyungmin; Choi, Haecheon
2017-11-01
In the present study, we use a smoke-wire technique in a wind-tunnel experiment to visualize the flow around a rhinoceros beetle in tethered flight. Measurements are made at five planes along the wing span while varying the body angle (the angle between the horizontal and the body axis) to investigate the influence of the stroke plane angle, which was observed to change with flight mode (hovering, forward flight, takeoff, and so on). Observing that a large attached leading-edge vortex is found only on the hindwing, we infer that most of the aerodynamic force is generated by the hindwings (flexible inner wings) rather than the elytra (hard outer wings). In addition, the beetle is observed to use unsteady lift-generating mechanisms such as clap-and-fling, wing-wing interaction and wake capture. Finally, we discuss the relation between the advance ratio and the Strouhal number by adjusting the free-stream velocity and the body angle (i.e., the angle of wake-induced flow). Supported by a Grant to Bio-Mimetic Robot Research Center Funded by Defense Acquisition Program Administration, and by ADD, Korea (UD130070ID).
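For orientation, the sketch below relates the two dimensionless numbers mentioned under one common set of definitions (St = fA/U with A the peak-to-peak wingtip excursion, and J = U/(fA), so the two are reciprocal under these conventions; other definitions exist). The numbers are invented for illustration and are not measurements from this experiment.

```python
# Hedged sketch: advance ratio vs. Strouhal number under assumed definitions.
def strouhal(freq_hz, amp_m, free_stream_m_s):
    """St = f * A / U, with A the peak-to-peak wingtip excursion."""
    return freq_hz * amp_m / free_stream_m_s

def advance_ratio(free_stream_m_s, freq_hz, amp_m):
    """J = U / (f * A); with these definitions J = 1 / St."""
    return free_stream_m_s / (freq_hz * amp_m)

f, A, U = 30.0, 0.09, 3.0  # Hz, m, m/s -- illustrative values only
print(strouhal(f, A, U), advance_ratio(U, f, A))  # 0.9 and ~1.11
```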
Embodied learning of a generative neural model for biological motion perception and inference
Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.
2015-01-01
Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215
Nawroth, Christian; von Borell, Eberhard
2015-05-01
Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should show a stronger aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder, the domestic pig. Subjects had to choose between two buckets, only one of which contained a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited locations, in either the visual or the auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to spontaneously infer the location of the reward. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the contents of the buckets: lifting either both buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results on pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.
System and method for creating expert systems
NASA Technical Reports Server (NTRS)
Hughes, Peter M. (Inventor); Luczak, Edward C. (Inventor)
1998-01-01
A system and method provide for the creation of a highly graphical expert system without the need for programming in code. An expert system is created by initially building a data interface and defining appropriate Mission, User-Defined, Inferred, and externally-generated GenSAA (EGG) data variables whose values will be updated and input into the expert system. Next, the rules of the expert system are created by building the conditions that must be satisfied and then the actions to be executed when the corresponding conditions are satisfied. Finally, an appropriate user interface is built, which can be highly graphical in nature and which can include message display and/or modification of the display characteristics of a graphical display object, to visually alert a user of the expert system to varying data values when the conditions of a created rule are satisfied. The data interface building, rule building, and user interface building are done in an efficient manner and without the need for programming in code.
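A hypothetical sketch (not GenSAA code) of the rule structure described above: each rule pairs a condition over named data variables with an action that is executed, for example to change a display, when the condition is satisfied. The variable name and threshold below are invented for illustration.

```python
# Hypothetical condition/action rule structure, in the spirit of the abstract.
class Rule:
    def __init__(self, name, condition, action):
        self.name = name            # e.g. "battery_low"
        self.condition = condition  # callable: variables dict -> bool
        self.action = action        # callable: run when the condition holds

def evaluate(rules, variables):
    """Fire the action of every rule whose condition holds for the current
    values of the (Mission / User-Defined / Inferred) data variables."""
    for rule in rules:
        if rule.condition(variables):
            rule.action(variables)

# Example: flag a (made-up) telemetry value when it drops below a threshold.
rules = [Rule("battery_low",
              condition=lambda v: v["bus_voltage"] < 24.0,
              action=lambda v: print("ALERT: color the voltage gauge red"))]
evaluate(rules, {"bus_voltage": 23.1})
```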
ERIC Educational Resources Information Center
Moores, Elisabeth; Cassim, Rizan; Talcott, Joel B.
2011-01-01
Difficulties in visual attention are increasingly being linked to dyslexia. To date, the majority of studies have inferred functionality of attention from response times to stimuli presented for an indefinite duration. However, in paradigms that use reaction times to investigate the ability to orient attention, a delayed reaction time could also…
Task Specificity and the Influence of Memory on Visual Search: Comment on Vo and Wolfe (2012)
ERIC Educational Resources Information Center
Hollingworth, Andrew
2012-01-01
Recent results from Vo and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a…
Development and usage of a false color display technique for presenting Seasat-A scatterometer data
NASA Technical Reports Server (NTRS)
Jackson, C. B.
1980-01-01
A computer-generated false color program that creates digital multicolor graphics to display geophysical surface parameters measured by the Seasat-A satellite scatterometer (SASS) is described. The data are incrementally scaled over the range of acceptable values, and each increment and its data points are assigned a color. The advantage of the false color display is that it visually distinguishes cool or weak data from hot or intense data by using a rainbow of colors. For example, with wind speeds, levels of yellow and red could be used to imply high winds while green and blue could imply calmer air. The SASS data are sorted into geographic regions, and the final false color images are projected onto various world maps with superimposed land/water boundaries.
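A minimal sketch of the binning-and-coloring step described above; the bin edges and palette are illustrative and are not the actual SASS display parameters.

```python
import bisect

# Assign each wind-speed increment a color running from "cool" to "hot".
BIN_EDGES = [5, 10, 15, 20, 25]  # m/s thresholds (made up for illustration)
PALETTE = ["blue", "green", "yellow", "orange", "red", "magenta"]

def false_color(wind_speed_m_s):
    """Return the display color for one scatterometer wind-speed sample."""
    return PALETTE[bisect.bisect_right(BIN_EDGES, wind_speed_m_s)]

print([false_color(w) for w in (3.0, 12.5, 27.0)])  # ['blue', 'yellow', 'magenta']
```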
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inference for a single cause (source) and for two causes (e.g., two senses such as vision and audition), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding and is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those obtained by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans; the training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than the proprioceptive one; (2) the means of the visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrease is significant for the proprioceptive error and non-significant for the visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to attend to two senses. The experimental results in this paper are in agreement with the results of the neural model simulation.
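For reference, the sketch below shows the standard reliability-weighted (Bayesian) fusion of two Gaussian cues, the textbook benchmark such recurrent models are typically compared against; the numbers are illustrative.

```python
# Minimal sketch of reliability-weighted fusion of two Gaussian cue estimates.
def fuse(mu_v, var_v, mu_p, var_p):
    """Combine a visual estimate (mu_v, var_v) with a proprioceptive estimate
    (mu_p, var_p); the more reliable cue receives the larger weight."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_p)
    mu = w_v * mu_v + (1.0 - w_v) * mu_p
    var = 1.0 / (1.0 / var_v + 1.0 / var_p)
    return mu, var

print(fuse(mu_v=10.0, var_v=1.0, mu_p=14.0, var_p=4.0))  # (10.8, 0.8)
```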
Graham, Susan A; San Juan, Valerie; Khu, Melanie
2017-05-01
When linguistic information alone does not clarify a speaker's intended meaning, skilled communicators can draw on a variety of cues to infer communicative intent. In this paper, we review research examining the developmental emergence of preschoolers' sensitivity to a communicative partner's perspective. We focus particularly on preschoolers' tendency to use cues both within the communicative context (i.e. a speaker's visual access to information) and within the speech signal itself (i.e. emotional prosody) to make on-line inferences about communicative intent. Our review demonstrates that preschoolers' ability to use visual and emotional cues of perspective to guide language interpretation is not uniform across tasks, is sometimes related to theory of mind and executive function skills, and, at certain points of development, is only revealed by implicit measures of language processing.
Neural correlates of species-typical illogical cognitive bias in human inference.
Ogawa, Akitoshi; Yamazaki, Yumiko; Ueno, Kenichi; Cheng, Kang; Iriki, Atsushi
2010-09-01
The ability to think logically is a hallmark of human intelligence, yet our innate inferential abilities are marked by implicit biases that often lead to illogical inference. For example, given AB ("if A then B"), people frequently but fallaciously infer the inverse, BA. This mode of inference, called symmetry, is logically invalid because, although it may be true, it is not necessarily true. Given pairs of conditional relations, such as AB and BC, humans reflexively perform two additional modes of inference: transitivity, whereby one (validly) infers AC; and equivalence, whereby one (invalidly) infers CA. In sharp contrast, nonhuman animals can handle transitivity but can rarely be made to acquire symmetry or equivalence. In the present study, human subjects performed logical and illogical inferences about the relations between abstract, visually presented figures while their brain activation was monitored with fMRI. The prefrontal, medial frontal, and intraparietal cortices were activated during all modes of inference. Additional activation in the precuneus and posterior parietal cortex was observed during transitivity and equivalence, which may reflect the need to retrieve the intermediate stimulus (B) from memory. Surprisingly, the patterns of brain activation in illogical and logical inference were very similar. We conclude that the observed inference-related fronto-parietal network is adapted for processing categorical, but not logical, structures of association among stimuli. Humans might prefer categorization over the memorization of logical structures in order to minimize the cognitive working memory load when processing large volumes of information.
Inferring causal molecular networks: empirical assessment through a community-based effort
Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach
2016-01-01
Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648
Inferring explicit weighted consensus networks to represent alternative evolutionary histories
2013-01-01
Background: The advent of molecular biology techniques and the constant increase in the availability of genetic material have triggered the development of many phylogenetic tree inference methods. However, several reticulate evolution processes, such as horizontal gene transfer and hybridization, have been shown to blur the species evolutionary history by causing discordance among phylogenies inferred from different genes. Methods: To tackle this problem, we describe a new method for inferring and representing alternative (reticulate) evolutionary histories of species as an explicit weighted consensus network, which can be constructed from a collection of gene trees with or without prior knowledge of the species phylogeny. Results: We provide a way of building a weighted phylogenetic network for each of the following reticulation mechanisms: diploid hybridization, intragenic recombination, and complete or partial horizontal gene transfer. We successfully tested our method on synthetic and real datasets to infer the above-mentioned evolutionary events, which may have influenced the evolution of many species. Conclusions: Our weighted consensus network inference method allows one to infer, visualize and statistically validate major conflicting signals induced by the mechanisms of reticulate evolution. The results provided by the new method can be used to represent the inferred conflicting signals by means of explicit and easy-to-interpret phylogenetic networks. PMID:24359207
Active inference and robot control: a case study
Nizard, Ange; Friston, Karl; Pezzulo, Giovanni
2016-01-01
Active inference is a general framework for perception and action that is gaining prominence in computational and systems neuroscience but is less known outside these fields. Here, we discuss a proof-of-principle implementation of the active inference scheme for the control of the 7-DoF arm of a (simulated) PR2 robot. By manipulating visual and proprioceptive noise levels, we show under which conditions robot control under the active inference scheme is accurate. Besides accurate control, our analysis of the internal system dynamics (e.g., the dynamics of the hidden states inferred during inference) sheds light on key aspects of the framework, such as the quintessentially multimodal nature of control and the differential roles of proprioception and vision. In the discussion, we consider the potential importance of being able to implement active inference in robots. In particular, we briefly review the opportunities for modelling psychophysiological phenomena such as sensory attenuation and related failures of gain control, of the sort seen in Parkinson's disease. We also consider the fundamental difference between active inference and optimal control formulations, showing that in the former the heavy lifting shifts from solving a dynamical inverse problem to creating deep forward or generative models with dynamics, whose attracting sets prescribe desired behaviours. PMID:27683002
Saul: Towards Declarative Learning Based Programming
Kordjamshidi, Parisa; Roth, Dan; Wu, Hao
2015-01-01
We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction. PMID:26635465
MAVEN-SA: Model-Based Automated Visualization for Enhanced Situation Awareness
2005-11-01
...methods. But historically, as arts evolve, these how-to methods become systematized and codified (e.g., the development and refinement of color theory)... schema (as necessary); (3) draw inferences from new knowledge to support the decision-making process... Visual language theory suggests that humans process... informed by theories of learning. Over the years, many types of software have been developed to support student learning. The various types of...
Automated Box-Cox Transformations for Improved Visual Encoding.
Maciejewski, Ross; Pattath, Avin; Ko, Sungahn; Hafen, Ryan; Cleveland, William S; Ebert, David S
2013-01-01
The concept of preconditioning data (utilizing a power transformation as an initial step) for analysis and visualization is well established within the statistical community and is employed as part of statistical modeling and analysis. Such transformations condition the data to various inherent assumptions of statistical inference procedures, as well as making the data more symmetric and easier to visualize and interpret. In this paper, we explore the use of the Box-Cox family of power transformations to semiautomatically adjust visual parameters. We focus on time-series scaling, axis transformations, and color binning for choropleth maps. We illustrate the usage of this transformation through various examples, and discuss the value and some issues in semiautomatically using these transformations for more effective data visualization.
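A minimal sketch of the preconditioning step described above, assuming a strictly positive, right-skewed series and using SciPy's Box-Cox estimator; it illustrates the transform generically rather than reproducing the paper's visual-encoding pipeline.

```python
import numpy as np
from scipy import stats

# Estimate a Box-Cox power transform for a skewed, positive-valued series
# before choosing axis scaling or color bins.
rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # strictly positive

transformed, lam = stats.boxcox(skewed)  # lam near 0 recovers a log transform
print(f"estimated lambda = {lam:.2f}")
print("skewness before/after:",
      round(float(stats.skew(skewed)), 2), round(float(stats.skew(transformed)), 2))
```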
The Limits of Shape Recognition following Late Emergence from Blindness.
McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud
2015-09-21
Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations
Friston, Karl J.; Kleinschmidt, Andreas
2010-01-01
Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se. PMID:20369004
Mixed Initiative Visual Analytics Using Task-Driven Recommendations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, Kristin A.; Cramer, Nicholas O.; Israel, David
2015-12-07
Visual data analysis is composed of a collection of cognitive actions and tasks to decompose, internalize, and recombine data to produce knowledge and insight. Visual analytic tools provide interactive visual interfaces to data to support tasks involved in discovery and sensemaking, including forming hypotheses, asking questions, and evaluating and organizing evidence. Myriad analytic models can be incorporated into visual analytic systems, at the cost of increasing complexity in the analytic discourse between user and system. Techniques exist to increase the usability of interacting with such analytic models, such as inferring data models from user interactions to steer the underlying models of the system via semantic interaction, shielding users from having to do so explicitly. Such approaches are often also referred to as mixed-initiative systems. Researchers studying the sensemaking process have called for development of tools that facilitate analytic sensemaking through a combination of human and automated activities. However, design guidelines do not exist for mixed-initiative visual analytic systems to support iterative sensemaking. In this paper, we present a candidate set of design guidelines and introduce the Active Data Environment (ADE) prototype, a spatial workspace supporting the analytic process via task recommendations invoked by inferences on user interactions within the workspace. ADE recommends data and relationships based on a task model, enabling users to co-reason with the system about their data in a single, spatial workspace. This paper provides an illustrative use case, a technical description of ADE, and a discussion of the strengths and limitations of the approach.
Bhaskar, Anand; Javanmard, Adel; Courtade, Thomas A; Tse, David
2017-03-15
Genetic variation in human populations is influenced by geographic ancestry due to spatial locality in historical mating and migration patterns. Spatial population structure in genetic datasets has been traditionally analyzed using either model-free algorithms, such as principal components analysis (PCA) and multidimensional scaling, or using explicit spatial probabilistic models of allele frequency evolution. We develop a general probabilistic model and an associated inference algorithm that unify the model-based and data-driven approaches to visualizing and inferring population structure. Our spatial inference algorithm can also be effectively applied to the problem of population stratification in genome-wide association studies (GWAS), where hidden population structure can create fictitious associations when population ancestry is correlated with both the genotype and the trait. Our algorithm Geographic Ancestry Positioning (GAP) relates local genetic distances between samples to their spatial distances, and can be used for visually discerning population structure as well as accurately inferring the spatial origin of individuals on a two-dimensional continuum. On both simulated and several real datasets from diverse human populations, GAP exhibits substantially lower error in reconstructing spatial ancestry coordinates compared to PCA. We also develop an association test that uses the ancestry coordinates inferred by GAP to accurately account for ancestry-induced correlations in GWAS. Based on simulations and analysis of a dataset of 10 metabolic traits measured in a Northern Finland cohort, which is known to exhibit significant population structure, we find that our method has superior power to current approaches. Our software is available at https://github.com/anand-bhaskar/gap . abhaskar@stanford.edu or ajavanma@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Inferring causal molecular networks: empirical assessment through a community-based effort.
Hill, Steven M; Heiser, Laura M; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K; Carlin, Daniel E; Zhang, Yang; Sokolov, Artem; Paull, Evan O; Wong, Chris K; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V; Favorov, Alexander V; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W; Long, Byron L; Noren, David P; Bisberg, Alexander J; Mills, Gordon B; Gray, Joe W; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A; Fertig, Elana J; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M; Spellman, Paul T; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach
2016-04-01
It remains unclear whether causal, rather than merely correlational, relationships in molecular networks can be inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge, which focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective, and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess inferred molecular networks in a causal sense.
Unlocking Proteomic Heterogeneity in Complex Diseases through Visual Analytics
Bhavnani, Suresh K.; Dang, Bryant; Bellala, Gowtham; Divekar, Rohit; Visweswaran, Shyam; Brasier, Allan; Kurosky, Alex
2015-01-01
Despite years of preclinical development, biological interventions designed to treat complex diseases like asthma often fail in phase III clinical trials. These failures suggest that current methods to analyze biomedical data might be missing critical aspects of biological complexity such as the assumption that cases and controls come from homogeneous distributions. Here we discuss why and how methods from the rapidly evolving field of visual analytics can help translational teams (consisting of biologists, clinicians, and bioinformaticians) to address the challenge of modeling and inferring heterogeneity in the proteomic and phenotypic profiles of patients with complex diseases. Because a primary goal of visual analytics is to amplify the cognitive capacities of humans for detecting patterns in complex data, we begin with an overview of the cognitive foundations for the field of visual analytics. Next, we organize the primary ways in which a specific form of visual analytics called networks has been used to model and infer biological mechanisms, which helps to identify the properties of networks that are particularly useful for the discovery and analysis of proteomic heterogeneity in complex diseases. We describe one such approach called subject-protein networks, and demonstrate its application on two proteomic datasets. This demonstration provides insights to help translational teams overcome theoretical, practical, and pedagogical hurdles for the widespread use of subject-protein networks for analyzing molecular heterogeneities, with the translational goal of designing biomarker-based clinical trials, and accelerating the development of personalized approaches to medicine. PMID:25684269
Edens; McCormick
2000-10-01
This study investigates the influences of print advertisements on the affective and cognitive responses of adolescents. Junior and senior high school males (n = 111) and females (n = 84) were randomly assigned to either a low- or high-elaboration condition to process primarily visual and primarily verbal print advertisements. The students then responded to questions measuring three dependent variables-memory of specific facts, inference, and emotional response. Three-way ANOVA results indicated that predominantly visual advertisements elicited memory of more facts, more inferencing, and more intense emotional responses than predominantly verbal ads. In addition, females remembered more facts, made more inferences, reported stronger emotional responses, and detected the explicit claim of the ad more frequently than males. Finally, students in the high-elaboration condition remembered more details than students in the low-elaboration condition. The results are discussed in terms of implications for advertising media literacy. Copyright 2000 Academic Press.
Econophysical visualization of Adam Smith’s invisible hand
NASA Astrophysics Data System (ADS)
Cohen, Morrel H.; Eliazar, Iddo I.
2013-02-01
Consider a complex system whose macrostate is statistically observable, but whose operating mechanism is an unknown black box. In this paper we address the problem of inferring, from the system’s macrostate statistics, the system’s intrinsic force yielding the observed statistics. The inference is established via two diametrically opposite approaches which result in the very same intrinsic force: a top-down approach based on the notion of entropy, and a bottom-up approach based on the notion of Langevin dynamics. The general results established are applied to the problem of visualizing the intrinsic socioeconomic force (Adam Smith’s invisible hand) shaping the distribution of wealth in human societies. Our analysis yields quantitative econophysical representations of figurative socioeconomic forces, quantitative definitions of “poor” and “rich”, and a quantitative characterization of the “poor-get-poorer” and the “rich-get-richer” phenomena.
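A hedged numerical sketch of the bottom-up (Langevin) reading: for an overdamped process dx = F(x) dt + sqrt(2D) dW, the stationary density satisfies F(x) = D d/dx ln p(x), so an intrinsic force can be read off an observed distribution. The synthetic wealth data and the value of D below are assumed for illustration only.

```python
import numpy as np

D = 1.0
rng = np.random.default_rng(1)
wealth = rng.exponential(scale=2.0, size=200_000)  # stand-in observed macrostate

# Estimate the density, then the force F(x) = D * d/dx ln p(x).
hist, edges = np.histogram(wealth, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
log_p = np.log(np.clip(hist, 1e-12, None))
force = D * np.gradient(log_p, centers)

print("inferred force near the median wealth:",
      round(float(np.interp(np.median(wealth), centers, force)), 3))
# For an exponential density with scale 2 the force is approximately -D/2.
```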
Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.
Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro
2018-03-01
In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented real (AR) objects can control human interpretation and reasoning about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.
Keefe, Bruce D; Wincenciak, Joanna; Jellema, Tjeerd; Ward, James W; Barraclough, Nick E
2016-07-01
When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individuals in naturally unfolding social scenes at increasingly higher levels of action understanding. We presented scenes in which one actor picked up boxes (of varying number and weight), after which a second actor picked up a single box. Adaptation to the first actor's behavior systematically changed perception of the second actor. Aftereffects increased with the duration of the first actor's behavior, declined exponentially over time, and were independent of view direction. Inferences about the second actor's expectation of box weight were also distorted by adaptation to the first actor. Distortions in action recognition and actor expectations did not, however, extend across different actions, indicating that adaptation is not acting at an action-independent abstract level but rather at an action-dependent level. We conclude that although adaptation influences more complex inferences about belief states of individuals, this is likely to be a result of adaptation at an earlier action recognition stage rather than adaptation operating at a higher, more abstract level in mentalizing or simulation systems.
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Stan: A Probabilistic Programming Language for Bayesian Inference and Optimization
ERIC Educational Resources Information Center
Gelman, Andrew; Lee, Daniel; Guo, Jiqiang
2015-01-01
Stan is a free and open-source C++ program that performs Bayesian inference or optimization for arbitrary user-specified models and can be called from the command line, R, Python, Matlab, or Julia and has great promise for fitting large and complex statistical models in many areas of application. We discuss Stan from users' and developers'…
Geoscience in the Big Data Era: Are models obsolete?
NASA Astrophysics Data System (ADS)
Yuen, D. A.; Zheng, L.; Stark, P. B.; Morra, G.; Knepley, M.; Wang, X.
2016-12-01
In the last few decades, the velocity, volume, and variety of geophysical data have increased, while the development of the Internet and distributed computing has led to the emergence of "data science." Fitting and running numerical models, especially those based on PDEs, is the main consumer of flops in geoscience. Can large amounts of diverse data supplant modeling? Without the ability to conduct randomized, controlled experiments, causal inference requires understanding the physics. It is sometimes possible to predict well without understanding the system, if (1) the system is predictable, (2) data on "important" variables are available, and (3) the system changes slowly enough. And sometimes even a crude model can help the data "speak for themselves" much more clearly. For example, Shearer (1991) used a 1-dimensional velocity model to stack long-period seismograms, revealing upper mantle discontinuities. This was a "big data" approach: the main use of computing was in the data processing, rather than in modeling, yet the "signal" became clear. In contrast, modelers tend to use all available computing power to fit even more complex models, resulting in a cycle where uncertainty quantification (UQ) is never possible: even if realistic UQ required only 1,000 model evaluations, it would never be in reach. Making more reliable inferences requires better data analysis and statistics, not more complex models. Geoscientists need to learn new skills and tools: sound software engineering practices; open programming languages suitable for big data; parallel and distributed computing; data visualization; and basic nonparametric, computationally based statistical inference, such as permutation tests. They should work reproducibly, scripting all analyses and avoiding point-and-click tools.
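As a concrete example of the nonparametric, computationally based inference mentioned above, here is a minimal two-sample permutation test of a difference in means; the data are made up.

```python
import numpy as np

def permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means."""
    rng = np.random.default_rng(seed)
    observed = np.mean(a) - np.mean(b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                      # relabel groups at random
        diff = np.mean(pooled[:len(a)]) - np.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            count += 1
    return observed, (count + 1) / (n_perm + 1)  # add-one avoids p = 0

a = np.array([2.3, 2.9, 3.1, 2.7, 3.4])
b = np.array([2.0, 2.2, 2.6, 1.9, 2.4])
print(permutation_test(a, b))                    # (difference, p-value)
```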
Integrated Module and Gene-Specific Regulatory Inference Implicates Upstream Signaling Networks
Roy, Sushmita; Lagree, Stephen; Hou, Zhonggang; Thomson, James A.; Stewart, Ron; Gasch, Audrey P.
2013-01-01
Regulatory networks that control gene expression are important in diverse biological contexts including stress response and development. Each gene's regulatory program is determined by module-level regulation (e.g. co-regulation via the same signaling system), as well as gene-specific determinants that can fine-tune expression. We present a novel approach, Modular regulatory network learning with per gene information (MERLIN), that infers regulatory programs for individual genes while probabilistically constraining these programs to reveal module-level organization of regulatory networks. Using edge-, regulator- and module-based comparisons of simulated networks of known ground truth, we find MERLIN reconstructs regulatory programs of individual genes as well or better than existing approaches of network reconstruction, while additionally identifying modular organization of the regulatory networks. We use MERLIN to dissect global transcriptional behavior in two biological contexts: yeast stress response and human embryonic stem cell differentiation. Regulatory modules inferred by MERLIN capture co-regulatory relationships between signaling proteins and downstream transcription factors thereby revealing the upstream signaling systems controlling transcriptional responses. The inferred networks are enriched for regulators with genetic or physical interactions, supporting the inference, and identify modules of functionally related genes bound by the same transcriptional regulators. Our method combines the strengths of per-gene and per-module methods to reveal new insights into transcriptional regulation in stress and development. PMID:24146602
Visual Programming: A Programming Tool for Increasing Mathematics Achievement
ERIC Educational Resources Information Center
Swanier, Cheryl A.; Seals, Cheryl D.; Billionniere, Elodie V.
2009-01-01
This paper aims to address the need of increasing student achievement in mathematics using a visual programming language such as Scratch. This visual programming language facilitates creating an environment where students in K-12 education can develop mathematical simulations while learning a visual programming language at the same time.…
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Zhou, Xiaofan; Shen, Xing-Xing; Hittinger, Chris Todd
2018-01-01
The sizes of the data matrices assembled to resolve branches of the tree of life have increased dramatically, motivating the development of programs for fast, yet accurate, inference. For example, several different fast programs have been developed in the very popular maximum likelihood framework, including RAxML/ExaML, PhyML, IQ-TREE, and FastTree. Although these programs are widely used, a systematic evaluation and comparison of their performance using empirical genome-scale data matrices has so far been lacking. To address this question, we evaluated these four programs on 19 empirical phylogenomic data sets with hundreds to thousands of genes and up to 200 taxa with respect to likelihood maximization, tree topology, and computational speed. For single-gene tree inference, we found that the more exhaustive and slower strategies (ten searches per alignment) outperformed faster strategies (one tree search per alignment) using RAxML, PhyML, or IQ-TREE. Interestingly, single-gene trees inferred by the three programs yielded comparable coalescent-based species tree estimations. For concatenation-based species tree inference, IQ-TREE consistently achieved the best-observed likelihoods for all data sets, and RAxML/ExaML was a close second. In contrast, PhyML often failed to complete concatenation-based analyses, whereas FastTree was the fastest but generated lower likelihood values and more dissimilar tree topologies in both types of analyses. Finally, data matrix properties, such as the number of taxa and the strength of phylogenetic signal, sometimes substantially influenced the programs’ relative performance. Our results provide real-world gene and species tree phylogenetic inference benchmarks to inform the design and execution of large-scale phylogenomic data analyses. PMID:29177474
Multinomial Bayesian learning for modeling classical and nonclassical receptive field properties.
Hosoya, Haruo
2012-08-01
We study the interplay of Bayesian inference and natural image learning in a hierarchical vision system, in relation to the response properties of early visual cortex. We particularly focus on a Bayesian network with multinomial variables that can represent discrete feature spaces similar to hypercolumns combining minicolumns, enforce sparsity of activation to learn efficient representations, and explain divisive normalization. We demonstrate that maximal-likelihood learning using sampling-based Bayesian inference gives rise to classical receptive field properties similar to V1 simple cells and V2 cells, while inference performed on the trained network yields nonclassical context-dependent response properties such as cross-orientation suppression and filling in. Comparison with known physiological properties reveals some qualitative and quantitative similarities.
Effectiveness of Program Visualization: A Case Study with the ViLLE Tool
ERIC Educational Resources Information Center
Rajala, Teemu; Laakso, Mikko-Jussi; Kaila, Erkki; Salakoski, Tapio
2008-01-01
Program visualization is one of the various methods developed over the years to aid novices with their difficulties in learning to program. It consists of different graphical--often animated--and textual objects, visualizing the execution of programs. The aim of program visualization is to enhance students' understanding of different areas of…
Inferring Phylogenetic Networks Using PhyloNet.
Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay
2018-07-01
PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.
ShinyKGode: an interactive application for ODE parameter inference using gradient matching.
Wandy, Joe; Niu, Mu; Giurghita, Diana; Daly, Rónán; Rogers, Simon; Husmeier, Dirk
2018-07-01
Mathematical modelling based on ordinary differential equations (ODEs) is widely used to describe the dynamics of biological systems, particularly in systems and pathway biology. Often the kinetic parameters of these ODE systems are unknown and have to be inferred from the data. Approximate parameter inference methods based on gradient matching (which do not require performing computationally expensive numerical integration of the ODEs) have been getting popular in recent years, but many implementations are difficult to run without expert knowledge. Here, we introduce ShinyKGode, an interactive web application to perform fast parameter inference on ODEs using gradient matching. ShinyKGode can be used to infer ODE parameters on simulated and observed data using gradient matching. Users can easily load their own models in Systems Biology Markup Language format, and a set of pre-defined ODE benchmark models are provided in the application. Inferred parameters are visualized alongside diagnostic plots to assess convergence. The R package for ShinyKGode can be installed through the Comprehensive R Archive Network (CRAN). Installation instructions, as well as tutorial videos and source code are available at https://joewandy.github.io/shinyKGode. Supplementary data are available at Bioinformatics online.
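A generic illustration of gradient matching (not ShinyKGode's R implementation): smooth the observed trajectory, differentiate the smooth fit, and choose ODE parameters that make the right-hand side match that derivative, so no numerical integration of the ODE is needed. The logistic model, noise level, and smoothing factor below are assumed.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

def rhs(x, theta):
    """Logistic growth dx/dt = r * x * (1 - x / K)."""
    r, K = theta
    return r * x * (1.0 - x / K)

t = np.linspace(0, 10, 40)
rng = np.random.default_rng(2)
x_obs = 10.0 / (1.0 + 9.0 * np.exp(-0.8 * t)) + rng.normal(0, 0.05, t.size)

spline = UnivariateSpline(t, x_obs, s=0.1)      # data smoother
x_hat, dx_hat = spline(t), spline.derivative()(t)

def mismatch(theta):
    """Squared gap between the smoothed derivative and the ODE right-hand side."""
    return np.sum((dx_hat - rhs(x_hat, theta)) ** 2)

fit = minimize(mismatch, x0=[0.5, 5.0], method="Nelder-Mead")
print("estimated (r, K):", np.round(fit.x, 2))  # data were simulated with (0.8, 10)
```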
StrAuto: automation and parallelization of STRUCTURE analysis.
Chhatre, Vikram E; Emerson, Kevin J
2017-03-24
Population structure inference using the software STRUCTURE has become an integral part of population genetic studies covering a broad spectrum of taxa including humans. The ever-expanding size of genetic data sets poses computational challenges for this analysis. Although at least one tool currently implements parallel computing to reduce the computational overload of this analysis, it does not fully automate the use of replicate STRUCTURE analysis runs required for downstream inference of the optimal K. There is a pressing need for a tool that can deploy population structure analysis on high performance computing clusters. We present an updated version of the popular Python program StrAuto to streamline population structure analysis using parallel computing. StrAuto implements a pipeline that combines STRUCTURE analysis with the Evanno ΔK analysis and visualization of results using STRUCTURE HARVESTER. Using benchmarking tests, we demonstrate that StrAuto significantly reduces the computational time needed to perform iterative STRUCTURE analysis by distributing runs over two or more processors. StrAuto is the first tool to integrate STRUCTURE analysis with post-processing using a pipeline approach in addition to implementing parallel computation - a setup ideal for deployment on computing clusters. StrAuto is distributed under the GNU GPL (General Public License) and is available to download from http://strauto.popgen.org.
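The Evanno ΔK statistic that StrAuto automates downstream of the replicate STRUCTURE runs can be summarized in a few lines. The sketch below is not StrAuto's code; the replicate log-likelihood values are invented for illustration, and the formula follows Evanno et al.: ΔK is the absolute mean second difference of ln Pr(X|K) divided by the standard deviation across replicates at K.

```python
# A minimal sketch of the Evanno delta-K computation from replicate STRUCTURE
# log-likelihoods (hypothetical numbers, not real output).
import numpy as np

lnP = {
    2: [-4210.3, -4212.1, -4209.8],
    3: [-3950.6, -3948.9, -3951.2],
    4: [-3905.4, -3940.7, -3890.1],
    5: [-3899.8, -3910.5, -3902.3],
}

Ks = sorted(lnP)
mean = {k: np.mean(lnP[k]) for k in Ks}
sd = {k: np.std(lnP[k], ddof=1) for k in Ks}

# Delta K is defined only for interior values of K.
for k in Ks[1:-1]:
    second_diff = abs(mean[k + 1] - 2 * mean[k] + mean[k - 1])
    print(f"K={k}: deltaK = {second_diff / sd[k]:.1f}")
```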
A Logical Framework for Service Migration Based Survivability
2016-06-24
platforms; Service Migration Strategy Fuzzy Inference System Knowledge Base Fuzzy rules representing domain expert knowledge about implications of...service migration strategy. Our approach uses expert knowledge as linguistic reasoning rules and takes service programs damage assessment, service...programs complexity, and available network capability as input. The fuzzy inference system includes four components as shown in Figure 5: (1) a knowledge
Inference for Continuous-Time Probabilistic Programming
2017-12-01
Parzen window density estimator to jointly model the inter-camera travel time intervals, locations of exit/entrances, and velocities of objects … asked to travel across the scene multiple times. Even in such a scenario they formed groups and made social interactions …
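As a rough illustration of the Parzen-window modelling mentioned in the excerpt above, the sketch below estimates a density over inter-camera travel times with Gaussian kernels. The sample times and bandwidth are invented placeholders, not values from the report.

```python
# A minimal Parzen-window (kernel) density estimate for inter-camera travel
# times; the data and bandwidth are illustrative only.
import numpy as np

def parzen_density(samples, query, bandwidth=2.0):
    """Gaussian-kernel density estimate evaluated at the query points."""
    samples = np.asarray(samples, dtype=float)[:, None]
    query = np.asarray(query, dtype=float)[None, :]
    kernels = np.exp(-0.5 * ((query - samples) / bandwidth) ** 2)
    return kernels.sum(axis=0) / (samples.shape[0] * bandwidth * np.sqrt(2 * np.pi))

travel_times = [12.1, 14.0, 13.3, 30.5, 29.8, 12.9]   # seconds between two cameras
grid = np.linspace(0, 45, 200)
density = parzen_density(travel_times, grid)
print("most likely travel time:", round(grid[np.argmax(density)], 1), "s")
```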
Sandoval-Castellanos, Edson; Palkopoulou, Eleftheria; Dalén, Love
2014-01-01
Inference of population demographic history has vastly improved in recent years due to a number of technological and theoretical advances including the use of ancient DNA. Approximate Bayesian computation (ABC) stands among the most promising methods due to its simple theoretical fundament and exceptional flexibility. However, limited availability of user-friendly programs that perform ABC analysis renders it difficult to implement, and hence programming skills are frequently required. In addition, there is limited availability of programs able to deal with heterochronous data. Here we present the software BaySICS: Bayesian Statistical Inference of Coalescent Simulations. BaySICS provides an integrated and user-friendly platform that performs ABC analyses by means of coalescent simulations from DNA sequence data. It estimates historical demographic population parameters and performs hypothesis testing by means of Bayes factors obtained from model comparisons. Although providing specific features that improve inference from datasets with heterochronous data, BaySICS also has several capabilities making it a suitable tool for analysing contemporary genetic datasets. Those capabilities include joint analysis of independent tables, a graphical interface and the implementation of Markov-chain Monte Carlo without likelihoods.
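The ABC machinery that BaySICS wraps in a graphical interface can be illustrated with a minimal rejection-sampling sketch: draw parameters from the prior, simulate data, and keep the draws whose summary statistic falls close to the observed summary. The toy simulator, prior, tolerance, and observed value below are invented stand-ins, not BaySICS's coalescent engine.

```python
# A minimal sketch of rejection-based approximate Bayesian computation (ABC).
# The "simulator" is a placeholder, not a coalescent simulation.
import numpy as np

rng = np.random.default_rng(1)

def simulate_summary(theta, n=50):
    # Placeholder simulator: a sample whose dispersion scales with theta;
    # the summary statistic is the sample variance.
    return np.var(rng.normal(0.0, np.sqrt(theta), size=n))

observed_summary = 4.2                                # summary computed from real data
prior = lambda size: rng.uniform(0.1, 20.0, size)     # uniform prior on theta

draws = prior(20000)
summaries = np.array([simulate_summary(th) for th in draws])
accepted = draws[np.abs(summaries - observed_summary) < 0.3]

print(f"accepted {accepted.size} draws; posterior mean ~ {accepted.mean():.2f}")
```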
Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan
2018-04-11
Variability in neuronal response latency has typically been considered to be caused by random noise. Previous studies of single cells and large neuronal populations have shown that the temporal variability tends to increase along the visual pathway. Inspired by these previous studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would have larger variability in the response latency. To test this hypothesis, we used magnetoencephalographic data collected when subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in the response latency compared to the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after the stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine sulcus to the fusiform sulcus was more reliably detected from the sizes of the response variability than from the instants of the maximal response peaks. With two areas in the ventral visual pathway, we show that the variability in response latency across brain areas can be used to infer the sequence of cortical activity.
Finding Waldo: Learning about Users from their Interactions.
Brown, Eli T; Ottley, Alvitta; Zhao, Helen; Quan Lin; Souvenir, Richard; Endert, Alex; Chang, Remco
2014-12-01
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user's interactions with a system reflect a large amount of the user's reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user's task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task, and apply well-known machine learning algorithms to three encodings of the users' interaction data. We achieve, depending on algorithm and encoding, between 62% and 83% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user's personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time: in one case 95% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
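A minimal sketch of the general recipe described above (not the authors' encodings or models): represent each user's interaction log as a feature vector and train an off-the-shelf classifier to predict fast versus slow completion. The features, synthetic data, and choice of a random forest are illustrative assumptions.

```python
# A minimal sketch: classify users as fast/slow from interaction-derived
# features. Features and data are invented placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# One row per user: e.g., counts of clicks, hovers, zooms, and mean dwell time.
X = rng.poisson(lam=[20, 50, 5, 8], size=(40, 4)).astype(float)
y = (X[:, 0] + rng.normal(0, 3, 40) > 20).astype(int)   # 1 = "fast", 0 = "slow"

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(2))
```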
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which, among other tasks, involves the automatic identification of visual classes present in an art work. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing the similarities in terms of appearance and manual annotation. One of the novelties of our methodology is the proposed formulation that is a principled way of combining these two similarities in a single graph. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results compared with the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. This experiment is run on a database containing 988 artistic images (with 49 visual classification problems divided into a multiclass problem with 27 classes and 48 binary problems), where we show the inference and training running times, and quantitative comparisons with respect to several retrieval and annotation performance measures.
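For intuition about propagating annotations over a combined appearance-plus-annotation similarity graph, the sketch below runs standard (non-inverted) label propagation on a toy four-image graph. It does not reproduce the paper's inverted formulation or its weighting; the similarity values and labels are invented.

```python
# A minimal sketch of Zhou-style label propagation on a combined similarity
# graph; the graph and labels are toy values, not the paper's data.
import numpy as np

def label_propagation(W, Y, alpha=0.9, iters=100):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y with symmetric normalization."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# Toy graph: 4 images, similarity = 0.5*appearance + 0.5*annotation overlap.
appearance = np.array([[0, .9, .1, 0], [.9, 0, .2, .1], [.1, .2, 0, .8], [0, .1, .8, 0]])
annotation = np.array([[0, .7, 0, .1], [.7, 0, .1, 0], [0, .1, 0, .9], [.1, 0, .9, 0]])
W = 0.5 * appearance + 0.5 * annotation

Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]])   # two labelled images, two classes
print(label_propagation(W, Y).argmax(axis=1))     # predicted class per image
```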
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Tucker, Deanne (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM codes for Computational Fluid Dynamics on a network of Sparcstations, including (a) NAS Parallel benchmarks CG and MG (White, Alund and Sunderam 1993); (b) a multi-partitioning algorithm for NAS Parallel Benchmark SP (Wijngaart 1993); and (c) an overset grid flowsolver (Smith 1993). These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We will describe the architecture, operation and application of AIMS. The AIMS toolkit contains (a) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (b) Monitor, a library of run-time trace-collection routines; (c) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (d) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (a) the impact of long message latencies; (b) the impact of multiprogramming overheads and associated load imbalance; (c) cache and virtual-memory effects; and (d) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (a) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (b) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
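The constant-skew compensation described above can be illustrated with a small sketch: shift a child's timestamps by a single offset chosen so that no parent-child message appears to arrive before it was sent. This is only the general idea, not AIMS' calibration code; the message timestamps are invented.

```python
# A minimal sketch of constant clock-skew compensation between a parent and a
# spawned child process. Each tuple is (send time on sender's clock,
# receive time on receiver's clock); all values are hypothetical.
parent_to_child = [(10.0, 9.2), (25.0, 24.6), (40.0, 39.9)]   # child clock runs behind
child_to_parent = [(12.0, 13.5), (30.0, 31.2)]

# The child clock must be shifted forward by at least send - recv for every
# parent->child message, and by at most recv - send for every child->parent one.
low = max(s - r for s, r in parent_to_child)
high = min(r - s for s, r in child_to_parent)
offset = (low + high) / 2 if low <= high else low   # fall back to the lower bound

adjusted = [(s, round(r + offset, 2)) for s, r in parent_to_child]
print("applied offset:", round(offset, 2), "adjusted arrivals:", adjusted)
```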
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time-series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify SOZ from the causal connectivity inferred using model-based and data-driven DI estimators respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by a neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
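As a rough illustration of the quantity being estimated, the sketch below computes a plug-in estimate of the directed-information rate between two binary series under a first-order Markov assumption (essentially lag-1 transfer entropy). The paper's model-based and data-driven estimators are more general and provably convergent; the simulated signals here are illustrative only.

```python
# A minimal plug-in estimate of a directed-information rate from X to Y under
# a first-order Markov assumption; data are simulated for illustration.
import numpy as np
from collections import Counter

def di_rate(x, y):
    """Estimate H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1}) in bits/sample."""
    triples = Counter(zip(y[:-1], x[:-1], y[1:]))
    pairs = Counter(zip(y[:-1], x[:-1]))
    ypairs = Counter(zip(y[:-1], y[1:]))
    ymarg = Counter(y[:-1])
    n = len(y) - 1
    h_y_given_ypast = -sum(c / n * np.log2((c / n) / (ymarg[k[0]] / n))
                           for k, c in ypairs.items())
    h_y_given_both = -sum(c / n * np.log2((c / n) / (pairs[(k[0], k[1])] / n))
                          for k, c in triples.items())
    return h_y_given_ypast - h_y_given_both

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 5000)
flips = rng.integers(0, 2, 5000, dtype=x.dtype) * (rng.random(5000) < 0.1)
y = np.roll(x, 1) ^ flips          # y_t is a noisy copy of x_{t-1}
print("DI(X -> Y) ~", round(di_rate(x, y), 3), "bits/sample")
print("DI(Y -> X) ~", round(di_rate(y, x), 3), "bits/sample")
```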
Inferring difficulty: Flexibility in the real-time processing of disfluency
Heller, Daphna; Arnold, Jennifer E.; Klein, Natalie M.; Tanenhaus, Michael K.
2015-01-01
Upon hearing a disfluent referring expression, listeners expect the speaker to refer to an object that is previously-unmentioned, an object that does not have a straightforward label, or an object that requires a longer description. Two visual-world eye-tracking experiments examined whether listeners directly associate disfluency with these properties of objects, or whether disfluency attribution is more flexible and involves situation-specific inferences. Since in natural situations reference to objects that do not have a straightforward label or that require a longer description is correlated with both production difficulty and with disfluency, we used a mini artificial lexicon to dissociate difficulty from these properties, building on the fact that recently-learned names take longer to produce than existing words in one’s mental lexicon. The results demonstrate that disfluency attribution involves situation-specific inferences; we propose that in new situations listeners spontaneously infer what may cause production difficulty. However, the results show that these situation-specific inferences are limited in scope: listeners assessed difficulty relative to their own experience with the artificial names, and did not adapt to the assumed knowledge of the speaker. PMID:26677642
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
The Role of Working Memory in the Probabilistic Inference of Future Sensory Events.
Cashdollar, Nathan; Ruhnau, Philipp; Weisz, Nathan; Hasson, Uri
2017-05-01
The ability to represent the emerging regularity of sensory information from the external environment has been thought to allow one to probabilistically infer future sensory occurrences and thus optimize behavior. However, the underlying neural implementation of this process is still not comprehensively understood. Through a convergence of behavioral and neurophysiological evidence, we establish that the probabilistic inference of future events is critically linked to people's ability to maintain the recent past in working memory. Magnetoencephalography recordings demonstrated that when visual stimuli occurring over an extended time series had a greater statistical regularity, individuals with higher working-memory capacity (WMC) displayed enhanced slow-wave neural oscillations in the θ frequency band (4-8 Hz.) prior to, but not during stimulus appearance. This prestimulus neural activity was specifically linked to contexts where information could be anticipated and influenced the preferential sensory processing for this visual information after its appearance. A separate behavioral study demonstrated that this process intrinsically emerges during continuous perception and underpins a realistic advantage for efficient behavioral responses. In this way, WMC optimizes the anticipation of higher level semantic concepts expected to occur in the near future. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Learning Visualization Strategies: A qualitative investigation
NASA Astrophysics Data System (ADS)
Halpern, Daniel; Oh, Kyong Eun; Tremaine, Marilyn; Chiang, James; Bemis, Karen; Silver, Deborah
2015-12-01
The following study investigates the range of strategies individuals develop to infer and interpret cross-sections of three-dimensional objects. We focus on the identification of mental representations and problem-solving processes made by 11 individuals with the goal of building training applications that integrate the strategies developed by the participants in our study. Our results suggest that although spatial transformation and perspective-taking techniques are useful for visualizing cross-section problems, these visual processes are augmented by analytical thinking. Further, our study shows that participants employ general analytic strategies for extended periods which evolve through practice into a set of progressively more expert strategies. Theoretical implications are discussed and five main findings are recommended for integration into the design of education software that facilitates visual learning and comprehension.
Chen, Yi-Nan; Lin, Chin-Kai; Wei, Ta-Sen; Liu, Chi-Hsin; Wuang, Yee-Pay
2013-12-01
This study compared the effectiveness of three approaches to improving visual perception among preschool children 4-6 years old with developmental delays: multimedia visual perceptual group training, multimedia visual perceptual individual training, and paper visual perceptual group training. A control group received no special training. This study employed a pretest-posttest control group of true experimental design. A total of 64 children 4-6 years old with developmental delays were randomized into four groups: (1) multimedia visual perceptual group training (15 subjects); (2) multimedia visual perceptual individual training group (15 subjects); (3) paper visual perceptual group training (19 subjects); and (4) a control group (15 subjects) with no visual perceptual training. Forty-minute training sessions were conducted once a week for 14 weeks. The Test of Visual Perception Skills, third edition, was used to evaluate the effectiveness of the intervention. Paired-samples t-test showed significant differences pre- and post-test among the three groups, but no significant difference was found between the pre-test and post-test scores among the control group. ANOVA results showed significant differences in improvement levels among the four study groups. Scheffe post hoc test results showed significant differences between: group 1 and group 2; group 1 and group 3; group 1 and the control group; and group 2 and the control group. No significant differences were reported between group 2 and group 3, and group 3 and the control group. The results showed all three therapeutic programs produced significant differences between pretest and posttest scores. The training effect of the multimedia visual perceptual group program and the individual program was greater than the developmental effect. Both the multimedia visual perceptual group training program and the multimedia visual perceptual individual training program produced significant effects on visual perception. The multimedia visual perceptual group training program was more effective for improving visual perception than was the multimedia visual perceptual individual training program. The multimedia visual perceptual group training program was more effective than was the paper visual perceptual group training program. Copyright © 2013 Elsevier Ltd. All rights reserved.
Classification-based reasoning
NASA Technical Reports Server (NTRS)
Gomez, Fernando; Segami, Carlos
1991-01-01
A representation formalism for N-ary relations, quantification, and definition of concepts is described. Three types of conditions are associated with the concepts: (1) necessary and sufficient properties, (2) contingent properties, and (3) necessary properties. Also explained is how complex chains of inferences can be accomplished by representing existentially quantified sentences, and concepts denoted by restrictive relative clauses as classification hierarchies. The representation structures that make possible the inferences are explained first, followed by the reasoning algorithms that draw the inferences from the knowledge structures. All the ideas explained have been implemented and are part of the information retrieval component of a program called Snowy. An appendix contains a brief session with the program.
32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 32, National Defense (2010-07-01), Section 813.1: Purpose of the visual information documentation (VIDOC) program. Department of Defense (Continued), Department of the Air Force, Sales and Services, Visual Information Documentation Program.
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
Exploiting visual search theory to infer social interactions
NASA Astrophysics Data System (ADS)
Rota, Paolo; Dang-Nguyen, Duc-Tien; Conci, Nicola; Sebe, Nicu
2013-03-01
In this paper we propose a new method to infer human social interactions using typical techniques adopted in the literature for visual search and information retrieval. The main piece of information we use to discriminate among different types of interactions is provided by proxemics cues acquired by a tracker, and used to distinguish between intentional and casual interactions. The proxemics information has been acquired through the analysis of two different metrics: on the one hand we observe the current distance between subjects, and on the other hand we measure the O-space synergy between subjects. The obtained values are taken at every time step over a temporal sliding window, and processed in the Discrete Fourier Transform (DFT) domain. The features are eventually merged into a unique array, and clustered using the K-means algorithm. The clusters are reorganized using a second larger temporal window into a Bag Of Words framework, so as to build the feature vector that will feed the SVM classifier.
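A compressed sketch of that pipeline follows, with invented signals, window length, vocabulary size, and labels (the authors' settings are not reproduced here): sliding-window DFT magnitudes of the two proxemics cues are clustered with K-means into a small vocabulary, each encounter becomes a bag-of-words histogram, and a linear SVM separates intentional from casual interactions.

```python
# A minimal sketch of the DFT + K-means + bag-of-words + SVM pipeline; all
# signals, parameters and labels are illustrative placeholders.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def window_features(distance, synergy, win=16):
    """DFT-magnitude features of each sliding window of the two proxemics cues."""
    feats = []
    for s in range(0, len(distance) - win, win // 2):
        d, o = distance[s:s + win], synergy[s:s + win]
        feats.append(np.concatenate([np.abs(np.fft.rfft(d)), np.abs(np.fft.rfft(o))]))
    return np.array(feats)

# Two hypothetical encounters: intentional (small, stable distance) vs. casual (drifting apart).
t = np.arange(256)
intentional = window_features(1.0 + 0.1 * np.sin(t / 7.0), 0.8 + 0.05 * rng.standard_normal(256))
casual = window_features(3.0 + 0.01 * t, 0.1 + 0.05 * rng.standard_normal(256))

vocab = KMeans(n_clusters=8, n_init=10, random_state=0).fit(np.vstack([intentional, casual]))
def bag_of_words(feats):
    return np.bincount(vocab.predict(feats), minlength=8)

X = np.array([bag_of_words(intentional), bag_of_words(casual)])
y = np.array([1, 0])                       # 1 = intentional, 0 = casual
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))
```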
Learning what to expect (in visual perception)
Seriès, Peggy; Seitz, Aaron R.
2013-01-01
Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is “Bayes-optimal” under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however, for example: How fast do priors change over time? Are there limits in the complexity of the priors that can be learned? How do an individual’s priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrate of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning and review the possible neural basis of priors. PMID:24187536
Leveraging scientific credibility about Arctic sea ice trends in a polarized political environment.
Jamieson, Kathleen Hall; Hardy, Bruce W
2014-09-16
This work argues that, in a polarized environment, scientists can minimize the likelihood that the audience's biased processing will lead to rejection of their message if they not only eschew advocacy but also convey that they are sharers of knowledge faithful to science's way of knowing and respectful of the audience's intelligence; the sources on which they rely are well-regarded by both conservatives and liberals; and the message explains how the scientist arrived at the offered conclusion, is conveyed in a visual form that involves the audience in drawing its own conclusions, and capsulizes key inferences in an illustrative analogy. A pilot experiment raises the possibility that such a leveraging-involving-visualizing-analogizing message structure can increase acceptance of the scientific claims about the downward cross-decade trend in Arctic sea ice extent and elicit inferences consistent with the scientific consensus on climate change among conservatives exposed to misleadingly selective data in a partisan news source.
Creating visual explanations improves learning.
Bobek, Eliza; Tversky, Barbara
2016-01-01
Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.
2011-01-01
Background Although exercise is currently advocated as one of the most effective management strategies for fibromyalgia syndrome (FMS), the implementation of exercise as an FMS treatment in reality is significantly hampered by patients' poor compliance. The inference that pain catastrophizing is a key predictor of poor compliance in FMS patients justifies considering the alteration of pain catastrophizing in improving compliance towards exercises in FMS patients. The aim of this study is to provide proof-of-concept for the development and testing of a novel virtual reality exposure therapy (VRET) program as treatment for exercise-related pain catastrophizing in FMS patients. Methods Two interlinked experimental studies will be conducted. Study 1 aims to objectively ascertain if neurophysiological changes occur in the functional brain areas associated with pain catastrophizing, when catastrophizing FMS subjects are exposed to visuals of exercise activities. Study 2 aims to ascertain the preliminary efficacy and feasibility of exposure to visuals of exercise activities as a treatment for exercise-related pain catastrophizing in FMS subjects. Twenty subjects will be selected from a group of FMS patients attending the Tygerberg Hospital in Cape Town, South Africa and randomly allocated to either the VRET (intervention) group or waiting list (control) group. Baseline neurophysiological activity for subjects will be collected in study 1 using functional magnetic resonance imaging (fMRI). In study 2, clinical improvement in pain catastrophizing will be measured using fMRI (objective) and the pain catastrophizing scale (subjective). Discussion The premise is that if exposing FMS patients to visuals of various exercise activities triggers the functional brain areas associated with pain catastrophizing, then, as a treatment, repeated exposure to visuals of the exercise activities using a VRET program could possibly decrease exercise-related pain catastrophizing in FMS patients. Proof-of-concept will either be established or negated. The results of this project are envisaged to revolutionize FMS and pain catastrophizing research and, in the future, assist health professionals and FMS patients in reducing despondency regarding FMS management. Trial registration PACTR201011000264179 PMID:21529375
Gao, Dashan; Vasconcelos, Nuno
2009-01-01
A decision-theoretic formulation of visual saliency, first proposed for top-down processing (object recognition) (Gao & Vasconcelos, 2005a), is extended to the problem of bottom-up saliency. Under this formulation, optimality is defined in the minimum probability of error sense, under a constraint of computational parsimony. The saliency of the visual features at a given location of the visual field is defined as the power of those features to discriminate between the stimulus at the location and a null hypothesis. For bottom-up saliency, this is the set of visual features that surround the location under consideration. Discrimination is defined in an information-theoretic sense and the optimal saliency detector derived for a class of stimuli that complies with known statistical properties of natural images. It is shown that under the assumption that saliency is driven by linear filtering, the optimal detector consists of what is usually referred to as the standard architecture of V1: a cascade of linear filtering, divisive normalization, rectification, and spatial pooling. The optimal detector is also shown to replicate the fundamental properties of the psychophysics of saliency: stimulus pop-out, saliency asymmetries for stimulus presence versus absence, disregard of feature conjunctions, and Weber's law. Finally, it is shown that the optimal saliency architecture can be applied to the solution of generic inference problems. In particular, for the class of stimuli studied, it performs the three fundamental operations of statistical inference: assessment of probabilities, implementation of Bayes decision rule, and feature selection.
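The "standard architecture of V1" referred to above can be sketched as a small image-processing pipeline: linear (center-surround) filtering, divisive normalization by local energy, half-wave rectification, and spatial pooling. The filter sizes and constants below are arbitrary illustrative choices, not the optimal detector derived in the paper.

```python
# A minimal sketch of a V1-style bottom-up saliency map: linear filtering,
# divisive normalization, rectification, and spatial pooling. All constants
# are illustrative, not the paper's derived optimum.
import numpy as np
from scipy.ndimage import uniform_filter

def saliency_map(image, eps=1e-3):
    # 1. Linear filtering with a small center-surround (difference-of-boxes) filter.
    center = uniform_filter(image, size=3)
    surround = uniform_filter(image, size=9)
    response = center - surround
    # 2. Divisive normalization by local contrast energy.
    energy = np.sqrt(uniform_filter(response ** 2, size=9)) + eps
    normalized = response / energy
    # 3. Half-wave rectification (keep both polarities as separate channels).
    on, off = np.maximum(normalized, 0), np.maximum(-normalized, 0)
    # 4. Spatial pooling of the rectified channels.
    return uniform_filter(on + off, size=5)

# A flat image with a single bright "pop-out" patch.
img = np.zeros((64, 64)); img[30:34, 30:34] = 1.0
s = saliency_map(img)
print("most salient location:", np.unravel_index(np.argmax(s), s.shape))
```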
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P.; Elman, Jeffrey L.
2016-01-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event, or semantically anomalous but unrelated to the described event. For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, event-related anomalous words elicited a reduced N400 relative to event-unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation between event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. PMID:26878980
Finding Waldo: Learning about Users from their Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Eli T.; Ottley, Alvitta; Zhao, Helen
Visual analytics is inherently a collaboration between human and computer. However, in current visual analytics systems, the computer has limited means of knowing about its users and their analysis processes. While existing research has shown that a user’s interactions with a system reflect a large amount of the user’s reasoning process, there has been limited advancement in developing automated, real-time techniques that mine interactions to learn about the user. In this paper, we demonstrate that we can accurately predict a user’s task performance and infer some user personality traits by using machine learning techniques to analyze interaction data. Specifically, we conduct an experiment in which participants perform a visual search task and we apply well-known machine learning algorithms to three encodings of the users’ interaction data. We achieve, depending on algorithm and encoding, between 62% and 96% accuracy at predicting whether each user will be fast or slow at completing the task. Beyond predicting performance, we demonstrate that using the same techniques, we can infer aspects of the user’s personality factors, including locus of control, extraversion, and neuroticism. Further analyses show that strong results can be attained with limited observation time; in some cases, 82% of the final accuracy is gained after a quarter of the average task completion time. Overall, our findings show that interactions can provide information to the computer about its human collaborator, and establish a foundation for realizing mixed-initiative visual analytics systems.
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P; Elman, Jeffrey L
2016-04-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event (Event-Related), or semantically anomalous but unrelated to the described event (Event-Unrelated). For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, Event-Related anomalous words elicited a reduced N400 relative to Event-Unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation of event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ferguson, Heather J; Breheny, Richard
2011-05-01
The time-course of representing others' perspectives is inconclusive across the currently available models of ToM processing. We report two visual-world studies investigating how knowledge about a character's basic preferences (e.g. Tom's favourite colour is pink) and higher-order desires (his wish to keep this preference secret) compete to influence online expectations about subsequent behaviour. Participants' eye movements around a visual scene were tracked while they listened to auditory narratives. While clear differences in anticipatory visual biases emerged between conditions in Experiment 1, post-hoc analyses testing the strength of the relevant biases suggested a discrepancy in the time-course of predicting appropriate referents within the different contexts. Specifically, predictions to the target emerged very early when there was no conflict between the character's basic preferences and higher-order desires, but appeared to be relatively delayed when comprehenders were provided with conflicting information about that character's desire to keep a secret. However, a second experiment demonstrated that this apparent 'cognitive cost' in inferring behaviour based on higher-order desires was in fact driven by low-level features between the context sentence and visual scene. Taken together, these results suggest that healthy adults are able to make complex higher-order ToM inferences without the need to call on costly cognitive processes. Results are discussed relative to previous accounts of ToM and language processing. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Attanayake, J.; Ghosh, A.; Amosu, A.
2010-12-01
Students of this generation are markedly different from their predecessors because they grow up and learn in a world of visual technology populated by touch screens and smart boards. Recent studies have found that the attention span of university students whose medium of instruction is traditional teaching methods is roughly fifteen minutes and that there is a significant drop in the number of students paying attention over time in a lecture. On the other hand, when carefully segmented and learner-paced, animated visualizations can enhance the learning experience. Therefore, instructors are faced with the difficult task of designing more complex teaching environments to improve learner productivity. We have developed an animated visualization of earthquake wave propagation across a generic transect of the Transportable Array of the USArray from a magnitude 6.9 event that occurred in the Gulf of California on August 3rd, 2009. Although the prototype tool is built in MATLAB, one of the most popular programming environments in the seismology community, the movies can be run as a standalone stream with any built-in media player that supports the .avi file format. We infer continuous ground motion along the transect through a projection and interpolation mechanism based on data from stations within 100 km of the transect. In the movies we identify the arrival of surface waves that have high amplitudes. However, over time, although typical Rayleigh type ground motion can be observed, the motion at any given point becomes complex owing to interference of different wave types and different seismic properties of the subsurface. This clearly is different from simple representations of seismic wave propagation in most introductory textbooks. Further, we find a noisy station that shows unusually high amplitude. We refrain from deleting this station in order to demonstrate that in a real world experiment, generally, there will be complexities arising from unexpected behavior of instruments and/or the system under investigation. Explaining such behavior and exploring ways to minimize biases arising from it is an important lesson to learn in introductory science classes. This program can generate visualizations of ground motion from events in the Gulf of California in near real time and, with little further development, from events elsewhere.
Visualization of the Mode Shapes of Pressure Oscillation in a Cylindrical Cavity
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xin; Qi, Yunliang; Wang, Zhi
Our work describes a novel experimental method to visualize the mode shapes of pressure oscillation in a cylindrical cavity. Acoustic resonance in a cavity is a grand old problem that has been under investigation (using both analytical and numerical methods) for more than a century. In this article, a novel method based on high-speed imaging of combustion chemiluminescence is presented to visualize the mode shapes of pressure oscillation in a cylindrical cavity. By generating high-temperature combustion gases and strong pressure waves simultaneously in a cylindrical cavity, the pressure oscillation can be inferred from the chemiluminescence emissions of the combustion products. We can then visualize the mode shapes by reconstructing the images based on the amplitudes of the luminosity spectrum at the corresponding resonant frequencies. Up to 11 resonant mode shapes were clearly visualized, each matching very well with the analytical solutions.
Cocco, Simona; Leibler, Stanislas; Monasson, Rémi
2009-01-01
Complexity of neural systems often makes impracticable explicit measurements of all interactions between their constituents. Inverse statistical physics approaches, which infer effective couplings between neurons from their spiking activity, have been so far hindered by their computational complexity. Here, we present 2 complementary, computationally efficient inverse algorithms based on the Ising and “leaky integrate-and-fire” models. We apply those algorithms to reanalyze multielectrode recordings in the salamander retina in darkness and under random visual stimulus. We find strong positive couplings between nearby ganglion cells common to both stimuli, whereas long-range couplings appear under random stimulus only. The uncertainty on the inferred couplings due to limitations in the recordings (duration, small area covered on the retina) is discussed. Our methods will allow real-time evaluation of couplings for large assemblies of neurons. PMID:19666487
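For intuition, a naive mean-field inverse-Ising estimate can be written in a few lines: bin the spike trains, compute the covariance matrix, and take the negated off-diagonal entries of its inverse as effective couplings. This is far cruder than the paper's Ising and leaky integrate-and-fire algorithms; the simulated spike trains below are illustrative only.

```python
# A minimal mean-field inverse-Ising coupling estimate from binned spike
# trains; data are simulated, and this is not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(5)

# Binned spike trains for 5 "neurons": neurons 0 and 1 share a common drive.
T = 20000
common = rng.random(T) < 0.2
spikes = (rng.random((5, T)) < 0.1).astype(float)
spikes[0] = np.maximum(spikes[0], common)
spikes[1] = np.maximum(spikes[1], common)

C = np.cov(spikes)                        # pairwise covariances
J = -np.linalg.inv(C)                     # naive mean-field inversion
np.fill_diagonal(J, 0.0)                  # the diagonal is not a coupling

print("strongest inferred coupling between neurons:",
      np.unravel_index(np.argmax(J), J.shape))
```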
ERIC Educational Resources Information Center
GROPPER, GEORGE L.
This is a report of two studies in which principles of programed instruction were adapted for visual presentations. Scientific demonstrations were prepared with a visual program and a verbal program on--(1) Archimedes' law and (2) force and pressure. Results suggested that responses are more readily brought under the control of visual presentation…
Visual Exploratory Search of Relationship Graphs on Smartphones
Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming
2013-01-01
This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized by the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs; 2) the exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion; 3) it effectively takes advantage of smartphones’ user-friendly interfaces, ubiquitous Internet connection, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the relationship information most relevant to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936
Predictive Coding in Area V4: Dynamic Shape Discrimination under Partial Occlusion
Choi, Hannah; Pasupathy, Anitha; Shea-Brown, Eric
2018-01-01
The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes. PMID:29566355
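The two-step scheme proposed above can be caricatured in a few lines: an initial V4 response driven only by the occluded feedforward input, followed by a delayed response that mixes in a PFC prediction of the underlying shape. The feature vectors, the linear "prediction", and the feedback weight below are invented for illustration and are not the model's actual equations.

```python
# A minimal caricature of two-step V4/PFC interaction under occlusion; all
# quantities are invented placeholders, not the published model.
import numpy as np

template = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0])    # idealized shape features
occlusion = np.array([1, 1, 0, 0, 1, 1], dtype=float)  # 0 = occluded feature

# Step 1: V4 driven purely bottom-up; occlusion suppresses part of the response.
v4_initial = template * occlusion

# PFC "predicts" the underlying shape from the partial evidence
# (here simply the template scaled to best match the visible features).
pfc_prediction = template * (v4_initial @ template / (template @ template))

# Step 2: the delayed V4 response combines feedforward drive with PFC feedback,
# making it less sensitive to the occlusion.
w_feedback = 0.6
v4_delayed = (1 - w_feedback) * v4_initial + w_feedback * pfc_prediction

print("initial V4 response:", v4_initial)
print("delayed V4 response:", np.round(v4_delayed, 2))
```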
A Methodology for Evaluating the Fidelity of Ground-Based Flight Simulators
NASA Technical Reports Server (NTRS)
Zeyada, Y.; Hess, R. A.
1999-01-01
An analytical and experimental investigation was undertaken to model the manner in which pilots perceive and utilize visual, proprioceptive, and vestibular cues in a ground-based flight simulator. The study was part of a larger research effort which has the creation of a methodology for determining flight simulator fidelity requirements as its ultimate goal. The study utilized a closed-loop feedback structure of the pilot/simulator system which included the pilot, the cockpit inceptor, the dynamics of the simulated vehicle and the motion system. With the exception of time delays which accrued in visual scene production in the simulator, visual scene effects were not included in this study. The NASA Ames Vertical Motion Simulator was used in a simple, single-degree of freedom rotorcraft bob-up/down maneuver. Pilot/vehicle analysis and fuzzy-inference identification were employed to study the changes in fidelity which occurred as the characteristics of the motion system were varied over five configurations. The data from three of the five pilots that participated in the experimental study were analyzed in the fuzzy-inference identification. Results indicate that both the analytical pilot/vehicle analysis and the fuzzy-inference identification can be used to reflect changes in simulator fidelity for the task examined.
Improving Reading Comprehension through Higher-Order Thinking Skills
ERIC Educational Resources Information Center
McKown, Brigitte A.; Barnett, Cynthia L.
2007-01-01
This action research project report documents the action research project that was conducted to improve reading comprehension with second grade and third grade students. The teacher researchers intended to improve reading comprehension by using higher-order thinking skills such as predicting, making connections, visualizing, inferring,…
Through-wall image enhancement using fuzzy and QR decomposition.
Riaz, Muhammad Mohsin; Ghafoor, Abdul
2014-01-01
A QR decomposition and fuzzy logic based scheme is proposed for through-wall image enhancement. QR decomposition is less complex compared to singular value decomposition. A fuzzy inference engine assigns weights to different overlapping subspaces. Quantitative measures and visual inspection are used to analyze existing and proposed techniques.
Visual Pigments, Ocular Filters and the Evolution of Snake Vision.
Simões, Bruno F; Sampaio, Filipa L; Douglas, Ronald H; Kodandaramaiah, Ullasa; Casewell, Nicholas R; Harrison, Robert A; Hart, Nathan S; Partridge, Julian C; Hunt, David M; Gower, David J
2016-10-01
Much of what is known about the molecular evolution of vertebrate vision comes from studies of mammals, birds and fish. Reptiles (especially snakes) have barely been sampled in previous studies despite their exceptional diversity of retinal photoreceptor complements. Here, we analyze opsin gene sequences and ocular media transmission for up to 69 species to investigate snake visual evolution. Most snakes express three visual opsin genes (rh1, sws1, and lws). These opsin genes (especially rh1 and sws1) have undergone much evolutionary change, including modifications of amino acid residues at sites of known importance for spectral tuning, with several tuning site combinations unknown elsewhere among vertebrates. These changes are particularly common among dipsadine and colubrine "higher" snakes. All three opsin genes are inferred to be under purifying selection, though dN/dS varies with respect to some lineages, ecologies, and retinal anatomy. Positive selection was inferred at multiple sites in all three opsins, these being concentrated in transmembrane domains and thus likely to have a substantial effect on spectral tuning and other aspects of opsin function. Snake lenses vary substantially in their spectral transmission. Snakes active at night and some of those active by day have very transmissive lenses, whereas some primarily diurnal species cut out shorter wavelengths (including UVA). In terms of retinal anatomy, lens transmission, visual pigment spectral tuning and opsin gene evolution the visual system of snakes is exceptionally diverse compared with all other extant tetrapod orders. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Example of a Bayes network of relations among visual features
NASA Astrophysics Data System (ADS)
Agosta, John M.
1991-10-01
Bayes probability networks, also termed “influence diagrams,” promise to be a versatile, rigorous, and expressive uncertainty reasoning tool. This paper presents an example of how a Bayes network can express constraints among visual hypotheses. An example is presented of a model composed of cylindric primitives, inferred from a line drawing of a plumbing fixture. Conflict between interpretations of candidate cylinders is expressed by two parameters, one for the presence and one for the absence of visual evidence of their intersection. It is shown how “partial exclusion” relations are so generated and how they determine the degree of competition among the set of hypotheses. Solving this network obtains the assemblies of cylinders most likely to form an object.
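The flavor of such a network can be conveyed with a tiny enumeration sketch: two cylinder hypotheses with priors, a single evidence node for their visible intersection, and a brute-force posterior over the joint hypotheses. The structure and probabilities are invented; the paper's network and its partial-exclusion parameters are richer than this.

```python
# A minimal Bayes-network sketch scored by brute-force enumeration; all
# priors and likelihoods are invented for illustration.
import itertools

p_cyl = {True: 0.5, False: 0.5}                    # prior on each cylinder hypothesis

def p_evidence(e, c1, c2):
    """Likelihood of intersection evidence given the two cylinder hypotheses."""
    if c1 and c2:
        return 0.9 if e else 0.1                   # both present: intersection likely seen
    return 0.2 if e else 0.8                       # otherwise evidence is mostly absent

def posterior(evidence_seen=True):
    joint = {}
    for c1, c2 in itertools.product([True, False], repeat=2):
        joint[(c1, c2)] = p_cyl[c1] * p_cyl[c2] * p_evidence(evidence_seen, c1, c2)
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

for (c1, c2), p in posterior(True).items():
    print(f"P(cyl1={c1}, cyl2={c2} | intersection seen) = {p:.2f}")
```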
Challinor, Kirsten L; Mond, Jonathan; Stephen, Ian D; Mitchison, Deborah; Stevenson, Richard J; Hay, Phillipa; Brooks, Kevin R
2017-12-01
Although body size and shape misperception (BSSM) is a common feature of anorexia nervosa, bulimia nervosa and muscle dysmorphia, little is known about its underlying neural mechanisms. Recently, a new approach has emerged, based on the long-established non-invasive technique of perceptual adaptation, which allows for inferences about the structure of the neural apparatus responsible for alterations in visual appearance. Here, we describe several recent experimental examples of BSSM, wherein exposure to "extreme" body stimuli causes visual aftereffects of biased perception. The implications of these studies for our understanding of the neural and cognitive representation of human bodies, along with their implications for clinical practice are discussed.
Mamykina, Lena; Heitkemper, Elizabeth M.; Smaldone, Arlene M.; Kukafka, Rita; Cole-Lewis, Heather J.; Davidson, Patricia G.; Mynatt, Elizabeth D.; Cassells, Andrea; Tobin, Jonathan N.; Hripcsak, George
2017-01-01
Objective To outline new design directions for informatics solutions that facilitate personal discovery with self-monitoring data. We investigate this question in the context of chronic disease self-management with the focus on type 2 diabetes. Materials and methods We conducted an observational qualitative study of discovery with personal data among adults attending a diabetes self-management education (DSME) program that utilized a discovery-based curriculum. The study included observations of class sessions, and interviews and focus groups with the educator and attendees of the program (n = 14). Results The main discovery in diabetes self-management evolved around discovering patterns of association between characteristics of individuals’ activities and changes in their blood glucose levels that the participants referred to as “cause and effect”. This discovery empowered individuals to actively engage in self-management and provided a desired flexibility in selection of personalized self-management strategies. We show that discovery of cause and effect involves four essential phases: (1) feature selection, (2) hypothesis generation, (3) feature evaluation, and (4) goal specification. Further, we identify opportunities to support discovery at each stage with informatics and data visualization solutions by providing assistance with: (1) active manipulation of collected data (e.g., grouping, filtering and side-by-side inspection), (2) hypotheses formulation (e.g., using natural language statements or constructing visual queries), (3) inference evaluation (e.g., through aggregation and visual comparison, and statistical analysis of associations), and (4) translation of discoveries into actionable goals (e.g., tailored selection from computable knowledge sources of effective diabetes self-management behaviors). Discussion The study suggests that discovery of cause and effect in diabetes can be a powerful approach to helping individuals to improve their self-management strategies, and that self-monitoring data can serve as a driving engine for personal discovery that may lead to sustainable behavior changes. Conclusions Enabling personal discovery is a promising new approach to enhancing chronic disease self-management with informatics interventions. PMID:28974460
NASA Astrophysics Data System (ADS)
Smith, Bryan J.
Current research suggests that many students do not know how to program very well at the conclusion of their introductory programming course. We believe that one reason novices have such difficulty learning to program is that they are often taught in a lecture format: someone with programming knowledge lectures to novices, the novices attempt to absorb the content, and then reproduce it during exams. Appealing primarily to programming novices who prefer to understand visually, we investigate whether novices understand programming better when computer science concepts are presented in a visual programming language than when the same programs are presented in a text-based programming language. This approach builds on previous research suggesting that most engineering students are visual learners, and we propose that a flow-based visual programming language can address some of the most important and difficult topics for programming novices. We use an existing flow-model tool, RAPTOR, to test this method, and report the resulting program-understanding results.
Learning visual balance from large-scale datasets of aesthetically highly rated images
NASA Astrophysics Data System (ADS)
Jahanian, Ali; Vishwanathan, S. V. N.; Allebach, Jan P.
2015-03-01
The concept of visual balance is innate for humans, and it influences how we perceive visual aesthetics and cognize harmony. Although visual balance is a vital principle of design and is taught in schools of design, it is barely quantified. On the other hand, with the emergence of automatic/semi-automatic visual design for self-publishing, learning visual balance and modeling it computationally may enhance the aesthetics of such designs. In this paper, we present how the quest to understand visual balance inspired us to revisit one of the well-known theories in visual arts, the so-called theory of "visual rightness" elucidated by Arnheim. We cast Arnheim's hypothesis as a design mining problem with the goal of learning visual balance from the work of professionals. We collected a dataset of 120K aesthetically highly rated images from a professional photography website. We then computed factors that contribute to visual balance based on the notion of visual saliency. We fitted a mixture of Gaussians to the saliency maps of the images and obtained the hotspots of the images. Our inferred Gaussians align with Arnheim's hotspots and confirm his theory. Moreover, the results support the viability of the center of mass, symmetry, as well as the rule of thirds in our dataset.
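As a rough illustration of the saliency-based pipeline described above (not the authors' actual code), the sketch below assumes a saliency map is available as a 2D NumPy array, samples pixel locations in proportion to saliency, and fits a scikit-learn Gaussian mixture whose component means serve as candidate hotspots.

import numpy as np
from sklearn.mixture import GaussianMixture

def saliency_hotspots(saliency_map, n_components=4, n_samples=5000, seed=0):
    """Fit a mixture of Gaussians to a saliency map and return the component
    means (candidate 'hotspots') in (row, col) image coordinates."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    weights = saliency_map.ravel() / saliency_map.sum()
    # Sample pixel locations in proportion to their saliency.
    idx = rng.choice(h * w, size=n_samples, p=weights)
    coords = np.column_stack(np.unravel_index(idx, (h, w))).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(coords)
    return gmm.means_

# Toy example: a synthetic saliency map with two bright regions.
toy = np.zeros((100, 150))
toy[20:35, 30:50] = 1.0
toy[60:80, 100:130] = 0.5
print(saliency_hotspots(toy, n_components=2))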
Thin-Slice Perception Develops Slowly
ERIC Educational Resources Information Center
Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca
2012-01-01
Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow…
Learning Visualization Strategies: A Qualitative Investigation
ERIC Educational Resources Information Center
Halpern, Daniel; Oh, Kyong Eun; Tremaine, Marilyn; Chiang, James; Bemis, Karen; Silver, Deborah
2015-01-01
The following study investigates the range of strategies individuals develop to infer and interpret cross-sections of three-dimensional objects. We focus on the identification of mental representations and problem-solving processes made by 11 individuals with the goal of building training applications that integrate the strategies developed by the…
Children Reason about Shared Preferences
ERIC Educational Resources Information Center
Fawcett, Christine A.; Markson, Lori
2010-01-01
Two-year-old children's reasoning about the relation between their own and others' preferences was investigated across two studies. In Experiment 1, children first observed 2 actors display their individual preferences for various toys. Children were then asked to make inferences about new, visually inaccessible toys and books that were described…
A linear programming model for protein inference problem in shotgun proteomics.
Huang, Ting; He, Zengyou
2012-11-15
Assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is an important issue in shotgun proteomics. The objective of protein inference is to find a subset of proteins that are truly present in the sample. Although many methods have been proposed for protein inference, several issues such as peptide degeneracy still remain unsolved. In this article, we present a linear programming model for protein inference. In this model, we use a transformation of the joint probability that each peptide/protein pair is present in the sample as the variable. Then, both the peptide probability and protein probability can be expressed as a formula in terms of the linear combination of these variables. Based on this simple fact, the protein inference problem is formulated as an optimization problem: minimize the number of proteins with non-zero probabilities under the constraint that the difference between the calculated peptide probability and the peptide probability generated from peptide identification algorithms should be less than some threshold. This model addresses the peptide degeneracy issue by forcing some joint probability variables involving degenerate peptides to be zero in a rigorous manner. The corresponding inference algorithm is named ProteinLP. We test the performance of ProteinLP on six datasets. Experimental results show that our method is competitive with state-of-the-art protein inference algorithms. The source code of our algorithm is available at: https://sourceforge.net/projects/prolp/. zyhe@dlut.edu.cn. Supplementary data are available at Bioinformatics online.
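ProteinLP itself works with joint peptide/protein probability variables; the sketch below is only a simplified linear relaxation of the same idea (a hypothetical incidence matrix and peptide probabilities, an L1 surrogate for the count of non-zero proteins, and SciPy's linprog), meant to show how the peptide-probability constraints shape the inferred protein list.

import numpy as np
from scipy.optimize import linprog

# Hypothetical peptide-protein incidence matrix A (rows: peptides, cols: proteins)
# and peptide probabilities p reported by a peptide identification algorithm.
A = np.array([[1, 0, 0],
              [1, 1, 0],   # degenerate peptide shared by proteins 0 and 1
              [0, 0, 1]], dtype=float)
p = np.array([0.9, 0.8, 0.1])
delta = 0.05             # allowed deviation from the reported peptide probabilities

n_proteins = A.shape[1]
c = np.ones(n_proteins)                       # L1 surrogate for "number of proteins"
A_ub = np.vstack([A, -A])                     # A x <= p + delta  and  -A x <= -(p - delta)
b_ub = np.concatenate([p + delta, -(p - delta)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n_proteins)
print(res.x)   # proteins with (near-)zero values are pruned from the inferred list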
Lessios, Nicolas
2017-01-01
Understanding how individual photoreceptor cells factor in the spectral sensitivity of a visual system is essential to explain how they contribute to the visual ecology of the animal in question. Existing methods that model the absorption of visual pigments use templates which correspond closely to data from thin cross-sections of photoreceptor cells. However, few modeling approaches use a single framework to incorporate physical parameters of real photoreceptors, which can be fused, and can form vertical tiers. Akaike's information criterion (AICc) was used here to select absorptance models of multiple classes of photoreceptor cells that maximize information, given visual system spectral sensitivity data obtained using extracellular electroretinograms and structural parameters obtained by histological methods. This framework was first used to select among alternative hypotheses of photoreceptor number. It identified spectral classes from a range of dark-adapted visual systems which have between one and four spectral photoreceptor classes. These were the velvet worm, Principapillatus hitoyensis, the branchiopod water flea, Daphnia magna, normal humans, and humans with enhanced S-cone syndrome, a condition in which S-cone frequency is increased due to mutations in a transcription factor that controls photoreceptor expression. Data from the Asian swallowtail, Papilio xuthus, which has at least five main spectral photoreceptor classes in its compound eyes, were included to illustrate potential effects of model over-simplification on multi-model inference. The multi-model framework was then used with parameters of spectral photoreceptor classes and the structural photoreceptor array kept constant. The goal was to map relative opsin expression to visual pigment concentration. It identified relative opsin expression differences for two populations of the bluefin killifish, Lucania goodei. The modeling approach presented here will be useful in selecting the most likely alternative hypotheses of opsin-based spectral photoreceptor classes, using relative opsin expression and extracellular electroretinography.
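The study's data are not reproduced here; the short sketch below just shows the AICc bookkeeping behind this kind of multi-model inference, using hypothetical log-likelihoods and parameter counts for absorptance models with one to four photoreceptor classes.

import numpy as np

def aicc(log_likelihood, k, n):
    """Corrected Akaike information criterion for a model with k parameters
    fitted to n data points."""
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Hypothetical fits of absorptance models with 1-4 photoreceptor classes
# to the same spectral sensitivity data set (n wavelengths): class: (logL, k).
n = 40
fits = {1: (-95.2, 3), 2: (-80.1, 6), 3: (-78.9, 9), 4: (-78.5, 12)}

scores = {m: aicc(ll, k, n) for m, (ll, k) in fits.items()}
delta = {m: s - min(scores.values()) for m, s in scores.items()}
weights = {m: np.exp(-0.5 * d) for m, d in delta.items()}
z = sum(weights.values())
weights = {m: w / z for m, w in weights.items()}     # Akaike weights
print(min(scores, key=scores.get), weights)          # best-supported class count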
Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception
Rohe, Tim; Noppeney, Uta
2015-01-01
To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the “causal inference problem.” Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world. PMID:25710328
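For readers unfamiliar with the computation, the following sketch implements the standard Bayesian Causal Inference model for one audiovisual trial (illustrative noise and prior parameters, not the values fitted in the study): the posterior probability of a common cause arbitrates between the forced-fusion and segregation estimates.

import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def bci_auditory_estimate(x_a, x_v, var_a=16.0, var_v=1.0, var_p=100.0, p_common=0.5):
    """Model-averaged auditory location estimate under Bayesian Causal Inference
    (noise variances and prior are illustrative)."""
    # Likelihood of both measurements under one common source (C = 1).
    denom = var_a * var_v + var_a * var_p + var_v * var_p
    like_c1 = np.exp(-0.5 * ((x_a - x_v) ** 2 * var_p + x_a ** 2 * var_v
                             + x_v ** 2 * var_a) / denom) / (2 * np.pi * np.sqrt(denom))
    # Likelihood under two independent sources (C = 2).
    like_c2 = gauss(x_a, 0.0, var_a + var_p) * gauss(x_v, 0.0, var_v + var_p)
    post_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Location estimates under forced fusion and full segregation.
    s_fused = (x_a / var_a + x_v / var_v) / (1 / var_a + 1 / var_v + 1 / var_p)
    s_segre = (x_a / var_a) / (1 / var_a + 1 / var_p)
    return post_c1 * s_fused + (1 - post_c1) * s_segre, post_c1

print(bci_auditory_estimate(x_a=10.0, x_v=8.0))    # small conflict: mostly fused
print(bci_auditory_estimate(x_a=10.0, x_v=-15.0))  # large conflict: mostly segregated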
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory-related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
Can circular inference relate the neuropathological and behavioral aspects of schizophrenia?
Leptourgos, Pantelis; Denève, Sophie; Jardri, Renaud
2017-10-01
Schizophrenia is a complex and heterogeneous mental disorder, and researchers have only recently begun to understand its neuropathology. However, since the time of Kraepelin and Bleuler, much information has been accumulated regarding the behavioral abnormalities usually encountered in patients suffering from schizophrenia. Despite recent progress, how the latter are caused by the former is still debated. Here, we argue that circular inference, a computational framework proposed as a potential explanation for various schizophrenia symptoms, could help end this debate. Based on Marr's three levels of analysis, we discuss how impairments in local and more global neural circuits could generate aberrant beliefs, with far-ranging consequences from probabilistic decision making to high-level visual perception in conditions of ambiguity. Interestingly, the circular inference framework appears to be compatible with a variety of pathophysiological theories of schizophrenia while simulating the behavioral symptoms. Copyright © 2017 Elsevier Ltd. All rights reserved.
Using Stan for Item Response Theory Models
ERIC Educational Resources Information Center
Ames, Allison J.; Au, Chi Hang
2018-01-01
Stan is a flexible probabilistic programming language providing full Bayesian inference through Hamiltonian Monte Carlo algorithms. The benefits of Hamiltonian Monte Carlo include improved efficiency and faster inference, when compared to other MCMC software implementations. Users can interface with Stan through a variety of computing…
Efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia.
Fusco, Natália; Germano, Giseli Donadon; Capellini, Simone Aparecida
2015-01-01
To verify the efficacy of a perceptual and visual-motor skill intervention program for students with dyslexia. The participants were 20 students from the third to fifth grade of a public elementary school in Marília, São Paulo, aged from 8 years to 11 years and 11 months, distributed into the following groups: Group I (GI; 10 students with developmental dyslexia) and Group II (GII; 10 students with good academic performance). A perceptual and visual-motor intervention program was applied, comprising exercises for visual-motor coordination, visual discrimination, visual memory, visual-spatial relationship, shape constancy, sequential memory, visual figure-ground coordination, and visual closure. In pre- and post-testing situations, both groups were given the Test of Visual-Perceptual Skills (TVPS-3), and the quality of handwriting was analyzed using the Dysgraphia Scale. The statistical results showed that both groups of students had dysgraphia in the pretesting situation. In visual perceptual skills, GI presented a lower performance than GII, as well as in the quality of writing. After undergoing the intervention program, GI increased its average of correct answers in the TVPS-3 and improved the quality of handwriting. The intervention program proved appropriate for students with dyslexia and showed positive effects, improving visual perceptual skills and the quality of writing in students with developmental dyslexia.
Logic programming and metadata specifications
NASA Technical Reports Server (NTRS)
Lopez, Antonio M., Jr.; Saacks, Marguerite E.
1992-01-01
Artificial intelligence (AI) ideas and techniques are critical to the development of intelligent information systems that will be used to collect, manipulate, and retrieve the vast amounts of space data produced by 'Missions to Planet Earth.' Natural language processing, inference, and expert systems are at the core of this space application of AI. This paper presents logic programming as an AI tool that can support inference (the ability to draw conclusions from a set of complicated and interrelated facts). It reports on the use of logic programming in the study of metadata specifications for a small problem domain of airborne sensors, and the dataset characteristics and pointers that are needed for data access.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
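The framework itself targets C code and PVS, but a toy Python loop can illustrate the kind of invariant such a deductive evaluator is meant to synthesize; the assertions below state the invariant explicitly and check it at run time (the loop and invariant are invented for illustration).

def sum_first_n(n):
    """Simple iterative loop with the kind of invariant a deductive evaluator
    might synthesize: at the top of each iteration, 2*s == i*(i-1)."""
    s, i = 0, 1
    while i <= n:
        assert 2 * s == i * (i - 1), "loop invariant violated"
        s += i
        i += 1
    # On exit i == n + 1, so the invariant gives 2*s == n*(n+1).
    assert 2 * s == n * (n + 1)
    return s

print(sum_first_n(10))   # 55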
76 FR 27898 - Registration and Recordation Program
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
... to reflect a reorganization that has moved the Recordation function from the Visual Arts and... function from the Visual Arts and Recordation Division of the Registration and Recordation Program to the... Visual Arts Division of the Registration and Recordation Program, has been renamed the Recordation...
The Sense of Confidence during Probabilistic Learning: A Normative Account.
Meyniel, Florent; Schlunegger, Daniel; Dehaene, Stanislas
2015-06-01
Learning in a stochastic environment consists of estimating a model from a limited amount of noisy data, and is therefore inherently uncertain. However, many classical models reduce the learning process to the updating of parameter estimates and neglect the fact that learning is also frequently accompanied by a variable "feeling of knowing" or confidence. The characteristics and the origin of these subjective confidence estimates thus remain largely unknown. Here we investigate whether, during learning, humans not only infer a model of their environment, but also derive an accurate sense of confidence from their inferences. In our experiment, humans estimated the transition probabilities between two visual or auditory stimuli in a changing environment, and reported their mean estimate and their confidence in this report. To formalize the link between both kinds of estimate and assess their accuracy in comparison to a normative reference, we derive the optimal inference strategy for our task. Our results indicate that subjects accurately track the likelihood that their inferences are correct. Learning and estimating confidence in what has been learned appear to be two intimately related abilities, suggesting that they arise from a single inference process. We show that human performance matches several properties of the optimal probabilistic inference. In particular, subjective confidence is impacted by environmental uncertainty, both at the first level (uncertainty in stimulus occurrence given the inferred stochastic characteristics) and at the second level (uncertainty due to unexpected changes in these stochastic characteristics). Confidence also increases appropriately with the number of observations within stable periods. Our results support the idea that humans possess a quantitative sense of confidence in their inferences about abstract non-sensory parameters of the environment. This ability cannot be reduced to simple heuristics; rather, it seems to be a core property of the learning process.
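As a much-reduced sketch of the ideal-observer idea (ignoring the change points that the full model handles), the code below tracks a single transition probability with a Beta posterior and reports both the mean estimate and a confidence proxy derived from the posterior spread; all numbers are illustrative.

import numpy as np

def track_transition(observations, a0=1.0, b0=1.0):
    """Sequentially update a Beta posterior over one transition probability and
    return (mean estimate, confidence proxy = -log posterior SD) after each trial."""
    a, b = a0, b0
    history = []
    for x in observations:          # x = 1 if the transition occurred, else 0
        a, b = a + x, b + (1 - x)
        mean = a / (a + b)
        sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
        history.append((mean, -np.log(sd)))
    return history

rng = np.random.default_rng(1)
obs = (rng.random(50) < 0.7).astype(int)   # hypothetical stream, true probability 0.7
for mean, conf in track_transition(obs)[-3:]:
    print(f"estimate={mean:.2f}  confidence={conf:.2f}")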
Cavanagh, Patrick
2011-01-01
Visual cognition, high-level vision, mid-level vision and top-down processing all refer to decision-based scene analyses that combine prior knowledge with retinal input to generate representations. The label “visual cognition” is little used at present, but research and experiments on mid- and high-level, inference-based vision have flourished, becoming in the 21st century a significant, if often understated, part of current vision research. How does visual cognition work? What are its moving parts? This paper reviews the origins and architecture of visual cognition and briefly describes some work in the areas of routines, attention, surfaces, objects, and events (motion, causality, and agency). Most vision scientists avoid being too explicit when presenting concepts about visual cognition, having learned that explicit models invite easy criticism. What we see in the literature is ample evidence for visual cognition, but few or only cautious attempts to detail how it might work. This is the great unfinished business of vision research: at some point we will be done with characterizing how the visual system measures the world and we will have to return to the question of how vision constructs models of objects, surfaces, scenes, and events. PMID:21329719
Continuous, age-related plumage variation in male Kirtland's Warblers
John R. Probst; Deahn M. Donner; Michael A. Bozek
2007-01-01
The ability to age individual birds visually in the field based on plumage variation could provide important demographic and biogeographical information. We describe an approach to infer ages from a distribution of plumage scores of free-ranging male Kirtland's Warblers (Dendroica kinlandii). We assigned ages to males using a scoring scheme (0-...
The design and application of a Transportable Inference Engine (TIE1)
NASA Technical Reports Server (NTRS)
Mclean, David R.
1986-01-01
A Transportable Inference Engine (TIE1) system has been developed by the author as part of the Interactive Experimenter Planning System (IEPS) task which is involved with developing expert systems in support of the Spacecraft Control Programs Branch at Goddard Space Flight Center in Greenbelt, Maryland. Unlike traditional inference engines, TIE1 is written in the C programming language. In the TIE1 system, knowledge is represented by a hierarchical network of objects which have rule frames. The TIE1 search algorithm uses a set of strategies, including backward chaining, to obtain the values of goals. The application of TIE1 to a spacecraft scheduling problem is described. This application involves the development of a strategies interpreter which uses TIE1 to do constraint checking.
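TIE1 is written in C and represents knowledge as a hierarchical network of objects with rule frames; the fragment below is only a toy backward-chaining prover in Python, with a hypothetical rule base loosely inspired by the scheduling example, to illustrate the goal-directed search strategy described.

# Hypothetical rule base: each goal maps to a list of alternative bodies
# (conjunctions of subgoals); facts are goals with an empty body.
rules = {
    "schedule_ok":  [["antenna_free", "power_ok"]],
    "antenna_free": [["no_conflict"]],
    "power_ok":     [[]],          # fact
    "no_conflict":  [[]],          # fact
}

def prove(goal):
    """Backward chaining: a goal holds if every subgoal of some rule body for it holds."""
    for body in rules.get(goal, []):
        if all(prove(sub) for sub in body):
            return True
    return False

print(prove("schedule_ok"))   # True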
Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions
NASA Astrophysics Data System (ADS)
Khoury, Mehdi; Liu, Honghai
This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.
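A minimal sketch of the Gaussian-membership idea (not the published FGI formulation or its genetic-programming layer): each stance gets a membership function estimated from hypothetical training samples of one motion feature, and a new observation is scored against each.

import numpy as np

def gaussian_membership(train_samples):
    """Build a Gaussian fuzzy membership function from training samples of one
    motion feature (e.g., a joint angle observed during a given boxing stance)."""
    mu, sigma = np.mean(train_samples), np.std(train_samples) + 1e-9
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Hypothetical joint-angle samples for two stances; a new observation is scored
# by the stance whose membership function returns the highest degree.
jab = gaussian_membership(np.random.default_rng(0).normal(150, 5, 200))
hook = gaussian_membership(np.random.default_rng(1).normal(95, 8, 200))
x_new = 140.0
print({"jab": float(jab(x_new)), "hook": float(hook(x_new))})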
SILVA tree viewer: interactive web browsing of the SILVA phylogenetic guide trees.
Beccati, Alan; Gerken, Jan; Quast, Christian; Yilmaz, Pelin; Glöckner, Frank Oliver
2017-09-30
Phylogenetic trees are an important tool for studying the evolutionary relationships among organisms. The huge number of available taxa poses difficulties for their interactive visualization, which in turn hampers interaction with users who could provide feedback for further improvement of the taxonomic framework. The SILVA Tree Viewer is a web application designed for visualizing large phylogenetic trees without requiring the download of any software tool or data files. It is based on Web Geographic Information Systems (Web-GIS) technology with a PostgreSQL backend and enables zoom and pan functionalities similar to Google Maps. The SILVA Tree Viewer provides access to two phylogenetic (guide) trees provided by the SILVA database: the SSU Ref NR99, inferred from high-quality, full-length small subunit sequences clustered at 99% sequence identity, and the LSU Ref, inferred from high-quality, full-length large subunit sequences. The Tree Viewer provides tree navigation, search and browse tools as well as an interactive feedback system to collect requests ranging from taxonomy to data curation and improvement of the tool itself.
The 1999 eruption of Shishaldin Volcano, Alaska: Monitoring a distant eruption
Nye, C.J.; Keith, T.E.C.; Eichelberger, J.C.; Miller, T.P.; McNutt, S.R.; Moran, S.; Schneider, D.J.; Dehn, J.; Schaefer, J.R.
2002-01-01
Shishaldin Volcano, in the central Aleutian volcanic arc, became seismically restless during the summer of 1998. The increasing unrest was monitored using a newly installed seismic network, weather satellites, and rare local visual observations. The unrest culminated in large eruptions on 19 April and 22-23 April 1999. The opening phase of the 19 April eruption produced a sub-Plinian column that rose to 16 km before rapidly dissipating. About 80 min into the 19 April event, we infer that the eruption style transitioned to vigorous Strombolian fountaining. Exceptionally vigorous seismic tremor heralded the 23 April eruption, which produced a large thermal anomaly observable by satellite but only a modest, 6-km-high plume. There are no ground-based visual observations of this eruption; however, we infer that there was renewed, vigorous Strombolian fountaining. Smaller low-level ash-rich plumes were produced through the end of May 1999. The erupted lava was evolved basalt with about 49% SiO2. Subsequent field investigations have been unable to find a distinction between the deposits of the two major eruptive episodes.
Alphabetic letter identification: Effects of perceivability, similarity, and bias☆
Mueller, Shane T.; Weidemann, Christoph T.
2012-01-01
The legibility of the letters in the Latin alphabet has been measured numerous times since the beginning of experimental psychology. To identify the theoretical mechanisms attributed to letter identification, we report a comprehensive review of literature, spanning more than a century. This review revealed that identification accuracy has frequently been attributed to a subset of three common sources: perceivability, bias, and similarity. However, simultaneous estimates of these values have rarely (if ever) been performed. We present the results of two new experiments which allow for the simultaneous estimation of these factors, and examine how the shape of a visual mask impacts each of them, as inferred through a new statistical model. Results showed that the shape and identity of the mask impacted the inferred perceivability, bias, and similarity space of a letter set, but that there were aspects of similarity that were robust to the choice of mask. The results illustrate how the psychological concepts of perceivability, bias, and similarity can be estimated simultaneously, and how each make powerful contributions to visual letter identification. PMID:22036587
A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
Testing the distinctiveness of visual imagery and motor imagery in a reach paradigm.
Gabbard, Carl; Ammar, Diala; Cordova, Alberto
2009-01-01
We examined the distinctiveness of motor imagery (MI) and visual imagery (VI) in the context of perceived reachability. The aim was to explore the notion that the two imagery modes have distinctive processing properties tied to the two-visual-system hypothesis. The experiment included an interference tactic whereby participants completed two tasks at the same time: a visual or motor interference task combined with an MI or VI reaching task. We expected that errors would increase when the imaged task and the interference task were matched (e.g., MI with the motor task), suggesting an association based on the assumption that the two tasks were in competition for space on the same processing pathway. Alternatively, if there were no differences, dissociation could be inferred. Significant increases in the number of errors were found when the modalities of the imaged task (both MI and VI) and the interference task were matched. Therefore, it appears that MI and VI in the context of perceived reachability recruit different processing mechanisms.
A Comparative Study of Exact versus Propensity Matching Techniques Using Monte Carlo Simulation
ERIC Educational Resources Information Center
Itang'ata, Mukaria J. J.
2013-01-01
Often researchers face situations where comparative studies between two or more programs are necessary to make causal inferences for informed policy decision-making. Experimental designs employing randomization provide the strongest evidence for causal inferences. However, many pragmatic and ethical challenges may preclude the use of randomized…
Temporal Integration and Inferences About Televised Social Behavior.
ERIC Educational Resources Information Center
Collins, W. Andrew
This paper discusses research on age related aspects of children's processing and comprehension of the narrative content of family oriented television programs. In one study, the temporal integration necessary to make inferences about audiovisually presented information was examined in 254 second, fifth and eighth grade children. Subjects were…
ERIC Educational Resources Information Center
Wessel, Dorothy
A 10-week classroom intervention program was implemented to facilitate the fine-motor development of eight first-grade children assessed as being deficient in motor skills. The program was divided according to five deficits to be remediated: visual motor, visual discrimination, visual sequencing, visual figure-ground, and visual memory. Each area…
Hout, Michael C; Goldinger, Stephen D
2015-01-01
When people look for things in the environment, they use target templates-mental representations of the objects they are attempting to locate-to guide attention and to assess incoming visual input as potential targets. However, unlike laboratory participants, searchers in the real world rarely have perfect knowledge regarding the potential appearance of targets. In seven experiments, we examined how the precision of target templates affects the ability to conduct visual search. Specifically, we degraded template precision in two ways: 1) by contaminating searchers' templates with inaccurate features, and 2) by introducing extraneous features to the template that were unhelpful. We recorded eye movements to allow inferences regarding the relative extents to which attentional guidance and decision-making are hindered by template imprecision. Our findings support a dual-function theory of the target template and highlight the importance of examining template precision in visual search.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel.
Grapov, Dmitry; Newman, John W
2012-09-01
Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data sets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, and partial least squares regression and discriminant analysis through an intuitive interface for creating high-quality two- and three-dimensional visualizations, including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA, and supported by Microsoft Excel (2003, 2007 and 2010).
Reinhart, Robert M G; Carlisle, Nancy B; Woodman, Geoffrey F
2014-08-01
Current research suggests that we can watch visual working memory surrender the control of attention early in the process of learning to search for a specific object. This inference is based on the observation that the contralateral delay activity (CDA) rapidly decreases in amplitude across trials when subjects search for the same target object. Here, we tested the alternative explanation that the role of visual working memory does not actually decline across learning, but instead lateralized representations accumulate in both hemispheres across trials and wash out the lateralized CDA. We show that the decline in CDA amplitude occurred even when the target objects were consistently lateralized to a single visual hemifield. Our findings demonstrate that reductions in the amplitude of the CDA during learning are not simply due to the dilution of the CDA from interhemispheric cancellation. Copyright © 2014 Society for Psychophysiological Research.
Program Supports Scientific Visualization
NASA Technical Reports Server (NTRS)
Keith, Stephan
1994-01-01
The primary purpose of the General Visualization System (GVS) computer program is to support scientific visualization of data generated by the panel-method computer program PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris workstation. It enables the user to view PMARC geometries and wakes as wire frames or as light-shaded objects. GVS is written in the C language.
Daemi, Mehdi; Harris, Laurence R; Crawford, J Douglas
2016-01-01
Animals try to make sense of sensory information from multiple modalities by categorizing it into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities, (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features, and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations.
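A bare-bones sketch of the two ingredients named above, with made-up signals and parameters: a leaky integrator as the working-memory trace and a normalized-correlation similarity measure used to decide between common and separate causes (the paper's actual similarity measure and decision stage are richer than this).

import numpy as np

def leaky_integrate(signal, dt=0.01, tau=0.2):
    """Discrete leaky integrator (working-memory trace) of a 1-D signal."""
    y = np.zeros_like(signal, dtype=float)
    for t in range(1, len(signal)):
        y[t] = y[t - 1] + dt * (-y[t - 1] / tau + signal[t])
    return y

def common_cause(visual, auditory, threshold=0.8):
    """Infer a common source if the integrated traces are sufficiently similar
    (here: normalized correlation of the traces)."""
    v, a = leaky_integrate(visual), leaky_integrate(auditory)
    sim = np.dot(v, a) / (np.linalg.norm(v) * np.linalg.norm(a) + 1e-12)
    return sim > threshold, sim

t = np.arange(0, 2, 0.01)
flash = (np.sin(2 * np.pi * 1.0 * t) > 0.95).astype(float)   # brief visual events
beep_sync = np.roll(flash, 2)                                 # ~20 ms lag
beep_async = np.roll(flash, 60)                               # ~600 ms lag
print(common_cause(flash, beep_sync))    # high similarity: likely common cause
print(common_cause(flash, beep_async))   # lower similarity: likely separate causes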
An expert system environment for the Generic VHSIC Spaceborne Computer (GVSC)
NASA Astrophysics Data System (ADS)
Cockerham, Ann; Labhart, Jay; Rowe, Michael; Skinner, James
The authors describe a Phase II Phillips Laboratory Small Business Innovative Research (SBIR) program being performed to implement a flexible and general-purpose inference environment for embedded space and avionics applications. This inference environment is being developed in Ada and takes special advantage of the target architecture, the GVSC. The GVSC implements the MIL-STD-1750A ISA and contains enhancements to allow access of up to 8 MBytes of memory. The inference environment makes use of the Merit Enhanced Traversal Engine (METE) algorithm, which employs the latest inference and knowledge representation strategies to optimize both run-time speed and memory utilization.
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
Zago, Stefano; Allegri, Nicola; Cristoffanini, Marta; Ferrucci, Roberta; Porta, Mauro; Priori, Alberto
2011-11-01
Introduction: The Charcot and Bernard case of visual imagery, Monsieur X, is a classic case in the history of neuropsychology. Published in 1883, it has been considered the first case of loss of visual imagery due to brain injury, and even in recent times it has been given a neurological interpretation. However, the existence of analogous cases of loss of visual imagery in the psychiatric field has led us to hypothesise a functional rather than organic origin. Methods: To assess the validity of this inference, we compared the symptomatology of Monsieur X with that found in cases of loss of visual mental imagery, both psychiatric and neurological, reported in the literature. Results: The clinical findings show strong similarities between the Monsieur X case and the symptoms manifested over time by patients with functionally based loss of visual imagery. Conclusion: Although Monsieur X's deficit was initially interpreted as neurological, reports of similar symptoms in the psychiatric field lead us to postulate a functional cause for his impairment as well.
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
Automatic topics segmentation for TV news video
NASA Astrophysics Data System (ADS)
Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad
2017-03-01
Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built: we use the features that characterize instances of the same program type to identify the different types of programs in the television stream. The role of the video features is to represent the visual invariants of each jingle using automatic descriptors appropriate to each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is thus to compare the visual features of the video signal in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.
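A toy sketch of the matching step (hypothetical descriptors, cosine similarity, and a hand-picked threshold; not the authors' descriptors or decision rule): a stream segment is labeled with the catalogue jingle it most resembles, or left unknown when no jingle is similar enough.

import numpy as np

# Hypothetical catalogue: one averaged visual descriptor per programme jingle.
catalogue = {
    "news":    np.array([0.9, 0.1, 0.3, 0.2]),
    "weather": np.array([0.2, 0.8, 0.1, 0.4]),
    "sports":  np.array([0.1, 0.3, 0.9, 0.6]),
}

def identify(segment_descriptor, threshold=0.85):
    """Return the catalogue entry whose jingle descriptor is most similar
    (cosine similarity); below the threshold the segment stays unknown."""
    best, best_sim = None, -1.0
    for name, ref in catalogue.items():
        sim = np.dot(segment_descriptor, ref) / (
            np.linalg.norm(segment_descriptor) * np.linalg.norm(ref))
        if sim > best_sim:
            best, best_sim = name, sim
    return (best if best_sim >= threshold else "unknown"), best_sim

print(identify(np.array([0.85, 0.15, 0.25, 0.2])))   # close to the "news" jingle
print(identify(np.array([0.5, 0.5, 0.5, 0.5])))      # no clear match: unknown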
Dyslexia and reasoning: the importance of visual processes.
Bacon, Alison M; Handley, Simon J
2010-08-01
Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Expt 1 presents written and verbal protocol evidence to suggest that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of objects. Non-dyslexics use a linear array of objects to make a simple transitive inference. Expt 2 examined evidence for the visual-impedance effect which suggests that visual information detracts from reasoning leading to longer latencies and reduced accuracy. While non-dyslexics showed the impedance effects predicted, dyslexics showed only reduced accuracy on problems designed specifically to elicit imagery. Expt 3 presented problems with less semantically and visually rich content. The non-dyslexic group again showed impedance effects, but dyslexics did not. Furthermore, in both studies, visual memory predicted reasoning accuracy for dyslexic participants, but not for non-dyslexics, particularly on problems with highly visual content. The findings are discussed in terms of the importance of visual and semantic processes in reasoning for individuals with dyslexia, and we argue that these processes play a compensatory role, offsetting phonological and verbal memory deficits.
NASA Astrophysics Data System (ADS)
Şahingil, Mehmet C.; Aslan, Murat Ş.
2013-10-01
Infrared-guided missile seekers that utilize pulse width modulation (PWM) in target tracking are among the threats against air platforms. To achieve "soft-kill" protection of one's own platform against this type of threat, one needs to examine carefully the seeker's operating principle together with its special electronic counter-countermeasure (ECCM) capability. One of the cost-effective means of soft-kill protection is to use flare decoys in accordance with an optimized dispensing program. Such an optimization requires a good understanding of the threat seeker, the capabilities of the air platform, and the engagement scenario between them. Modeling and simulation is a powerful tool for gaining insight into the underlying phenomenology, and a careful interpretation of simulation results is crucial to drawing valuable conclusions from the data. In such an interpretation there are many factors (features) which affect the results; therefore, powerful statistical tools and pattern recognition algorithms are of special interest in the analysis. In this paper, we show how self-organizing maps (SOMs), one of those powerful tools, can be used to analyze the effectiveness of various flare dispensing programs against a PWM seeker. We perform several Monte Carlo runs of a typical engagement scenario in a MATLAB-based simulation environment. In each run, we randomly change the flare dispensing program and obtain the corresponding class, "successful" or "unsuccessful", depending on whether that program deceives the seeker. Then, in the analysis phase, we use SOMs to interpret and visualize the results.
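The sketch below shows how such a SOM analysis might be set up in Python with the MiniSom package on hypothetical Monte Carlo features and success labels (the study itself used a MATLAB-based simulation environment, and the feature set here is invented for illustration).

import numpy as np
from minisom import MiniSom   # pip install minisom

# Hypothetical Monte Carlo results: each row describes one flare dispensing
# program (e.g., number of flares, burst interval, salvo size, dispense delay),
# and `label` records whether that run deceived the seeker.
rng = np.random.default_rng(0)
features = rng.random((500, 4))
label = (features[:, 0] + 0.5 * features[:, 1] > 0.9).astype(int)  # toy rule

som = MiniSom(8, 8, input_len=4, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(features, num_iteration=2000)

# For each SOM node, count how many successful vs. unsuccessful programs map onto
# it; clusters of successful programs reveal effective dispensing regions.
hits = np.zeros((8, 8, 2))
for x, y in zip(features, label):
    i, j = som.winner(x)
    hits[i, j, y] += 1
print(hits[..., 1] / np.maximum(hits.sum(axis=2), 1))  # success rate per node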
FINDS: A fault inferring nonlinear detection system programmers manual, version 3.0
NASA Technical Reports Server (NTRS)
Lancraft, R. E.
1985-01-01
Detailed software documentation of the digital computer program FINDS (Fault Inferring Nonlinear Detection System), Version 3.0, is provided. FINDS is a highly modular and extensible computer program designed to monitor and detect sensor failures while at the same time providing reliable state estimates. In this version of the program, the FINDS methodology is used to detect, isolate, and compensate for failures in simulated avionics sensors used by the Advanced Transport Operating Systems (ATOPS) Transport System Research Vehicle (TSRV) in a Microwave Landing System (MLS) environment. It is intended that this report serve as a programmer's guide to aid in the maintenance, modification, and revision of the FINDS software.
Coinductive Logic Programming with Negation
NASA Astrophysics Data System (ADS)
Min, Richard; Gupta, Gopal
We introduce negation into coinductive logic programming (co-LP) via what we term coinductive SLDNF (co-SLDNF) resolution. We present the declarative and operational semantics of co-SLDNF resolution and establish their equivalence under the restriction of rationality. Co-LP with co-SLDNF resolution provides a powerful, practical and efficient operational semantics for Fitting's Kripke-Kleene three-valued logic under the restriction of rationality. Applications of co-SLDNF resolution are also discussed and illustrated, showing that it allows one to develop elegant implementations of modal logics. Moreover, it provides the capability of non-monotonic inference (e.g., predicate Answer Set Programming) that can be used to develop novel and effective first-order modal non-monotonic inference engines.
ERIC Educational Resources Information Center
Gerstner, Jerusha J.; Finney, Sara J.
2013-01-01
Implementation fidelity assessment provides a means of measuring the alignment between the planned program and the implemented program. Unfortunately, the implemented program can differ from the planned program, resulting in ambiguous inferences about the planned program's effectiveness (i.e., it is uncertain if poor results are due to an…
NASA Technical Reports Server (NTRS)
Gupta, S. K.; Tiwari, S. N.
1976-01-01
A simple procedure and computer program were developed for retrieving the surface temperature from the measurement of upwelling infrared radiance in a single spectral region in the atmosphere. The program evaluates the total upwelling radiance at any altitude in the region of the CO fundamental band (2070-2220 1/cm) for several values of surface temperature. Actual surface temperature is inferred by interpolation of the measured upwelling radiance between the computed values of radiance for the same altitude. Sensitivity calculations were made to determine the effect of uncertainty in various surface, atmospheric and experimental parameters on the inferred value of surface temperature. It is found that the uncertainties in water vapor concentration and surface emittance are the most important factors affecting the accuracy of the inferred value of surface temperature.
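A compact sketch of the interpolation step (with a deliberately crude stand-in forward model: Planck emission at a single wavenumber, plus a fixed band transmittance, emittance, and path term, all hypothetical): radiance is computed for a grid of candidate surface temperatures and the measured radiance is inverted by linear interpolation.

import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck_radiance(wavenumber_cm, T):
    """Planck spectral radiance at a wavenumber (cm^-1) and temperature (K)."""
    nu = wavenumber_cm * 100.0 * C            # convert cm^-1 to Hz
    return 2 * H * nu ** 3 / C ** 2 / (np.exp(H * nu / (KB * T)) - 1.0)

def upwelling_radiance(T_surface, emiss=0.98, transmittance=0.7, path_term=5e-14):
    """Toy forward model in the 2070-2220 cm^-1 band: surface emission attenuated
    by the atmosphere plus a fixed atmospheric path term (a stand-in for the
    program's full radiative transfer calculation)."""
    return emiss * transmittance * planck_radiance(2150.0, T_surface) + path_term

# Compute radiance for a grid of candidate surface temperatures, then infer the
# surface temperature for a "measured" radiance by interpolation.
T_grid = np.arange(270.0, 321.0, 5.0)
R_grid = np.array([upwelling_radiance(T) for T in T_grid])
R_measured = upwelling_radiance(301.3)            # pretend this came from the sensor
T_inferred = np.interp(R_measured, R_grid, T_grid)
print(round(float(T_inferred), 2))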
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
Dye, Matthew W G; Seymour, Jenessa L; Hauser, Peter C
2016-04-01
Deafness results in cross-modal plasticity, whereby visual functions are altered as a consequence of a lack of hearing. Here, we present a reanalysis of data originally reported by Dye et al. (PLoS One 4(5):e5640, 2009) with the aim of testing additional hypotheses concerning the spatial redistribution of visual attention due to deafness and the use of a visuogestural language (American Sign Language). By looking at the spatial distribution of errors made by deaf and hearing participants performing a visuospatial selective attention task, we sought to determine whether there was evidence for (1) a shift in the hemispheric lateralization of visual selective function as a result of deafness, and (2) a shift toward attending to the inferior visual field in users of a signed language. While no evidence was found for or against a shift in lateralization of visual selective attention as a result of deafness, a shift in the allocation of attention from the superior toward the inferior visual field was inferred in native signers of American Sign Language, possibly reflecting an adaptation to the perceptual demands imposed by a visuogestural language.
SciFlo: Semantically-Enabled Grid Workflow for Collaborative Science
NASA Astrophysics Data System (ADS)
Yunck, T.; Wilson, B. D.; Raskin, R.; Manipon, G.
2005-12-01
SciFlo is a system for Scientific Knowledge Creation on the Grid using a Semantically-Enabled Dataflow Execution Environment. SciFlo leverages Simple Object Access Protocol (SOAP) Web Services and the Grid Computing standards (WS-* standards and the Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable SOAP Services, native executables, local command-line scripts, and python codes into a distributed computing flow (a graph of operators). SciFlo's XML dataflow documents can be a mixture of concrete operators (fully bound operations) and abstract template operators (late binding via semantic lookup). All data objects and operators can be both simply typed (simple and complex types in XML schema) and semantically typed using controlled vocabularies (linked to OWL ontologies such as SWEET). By exploiting ontology-enhanced search and inference, one can discover (and automatically invoke) Web Services and operators that have been semantically labeled as performing the desired transformation, and adapt a particular invocation to the proper interface (number, types, and meaning of inputs and outputs). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. A Visual Programming tool is also being developed, but it is not required. Once an analysis has been specified for a granule or day of data, it can be easily repeated with different control parameters and over months or years of data. SciFlo uses and preserves semantics, and also generates and infers new semantic annotations. Specifically, the SciFlo engine uses semantic metadata to understand (infer) what it is doing and potentially improve the data flow; preserves semantics by saving links to the semantics of (metadata describing) the input datasets, related datasets, and the data transformations (algorithms) used to generate downstream products; generates new metadata by allowing the user to add semantic annotations to the generated data products (or simply accept automatically generated provenance annotations); and infers new semantic metadata by understanding and applying logic to the semantics of the data and the transformations performed. Much ontology development still needs to be done but, nevertheless, SciFlo documents provide a substrate for using and preserving more semantics as ontologies develop. We will give a live demonstration of the growing SciFlo network using an example dataflow in which atmospheric temperature and water vapor profiles from three Earth Observing System (EOS) instruments are retrieved using SOAP (geo-location query & data access) services, co-registered, and visually & statistically compared on demand (see http://sciflo.jpl.nasa.gov for more information).
Using Visual Analysis to Evaluate and Refine Multilevel Models of Single-Case Studies
ERIC Educational Resources Information Center
Baek, Eun Kyeng; Petit-Bois, Merlande; Van den Noortgate, Wim; Beretvas, S. Natasha; Ferron, John M.
2016-01-01
In special education, multilevel models of single-case research have been used as a method of estimating treatment effects over time and across individuals. Although multilevel models can accurately summarize the effect, it is known that if the model is misspecified, inferences about the effects can be biased. Concern with the potential for model…
Visual Analysis of Multiple Baseline across Participants Graphs when Change Is Delayed
ERIC Educational Resources Information Center
Lieberman, Rebecca G.; Yoder, Paul J.; Reichow, Brian; Wolery, Mark
2010-01-01
A within-subjects group experimental design was used to test whether three manipulated characteristics of multiple baseline across participants (MBL-P) data showing at least a month delayed change in slope affected experts' inference of a functional relation and agreement on this judgment. Thirty-six experts completed a survey composed of 16 MBL-P…
Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.
2016-01-01
Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
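The abstract does not reproduce the model equations; a common formulation of such Bayesian visual-capture models treats "same source vs. different sources" as a latent cause, with the prior probability of a common source playing the role of the task-dependent parameter the authors describe. A minimal sketch under that assumption (noise widths and the prior are illustrative, not the paper's fits):

```python
# Sketch of a causal-inference account of visual capture: the posterior probability
# that auditory and visual cues share a common source, given noisy position samples.
# All parameter values (in degrees) are illustrative assumptions.
import numpy as np

def p_common(x_a, x_v, sigma_a=8.0, sigma_v=2.0, sigma_p=15.0, prior_common=0.5):
    var_a, var_v, var_p = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of the cue pair under a single source (source location integrated out)
    var_c = var_a*var_v + var_a*var_p + var_v*var_p
    like_c = np.exp(-0.5 * ((x_a - x_v)**2*var_p + x_a**2*var_v + x_v**2*var_a) / var_c) \
             / (2*np.pi*np.sqrt(var_c))
    # Likelihood under two independent sources
    like_i = np.exp(-0.5*(x_a**2/(var_a+var_p) + x_v**2/(var_v+var_p))) \
             / (2*np.pi*np.sqrt((var_a+var_p)*(var_v+var_p)))
    return prior_common*like_c / (prior_common*like_c + (1-prior_common)*like_i)

for disparity in (0, 5, 15, 30):
    print(disparity, round(float(p_common(x_a=disparity, x_v=0.0)), 2))
# A common-cause report (capture) becomes less likely as audio-visual disparity grows;
# shifting prior_common between tasks changes how quickly that probability falls off.
```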
The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.
Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina
2018-05-23
Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.
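The authors invert a hierarchical Bayesian model to obtain trial-wise predictions; as a deliberately simplified stand-in for the "low-level" prediction of the conditional target probability, here is a beta-binomial update of a single cue-target contingency. This ignores the hierarchical and volatility components of the actual model and is only an illustration of how trial-wise predictions can be generated for use as fMRI regressors.

```python
# Simplified illustration: track P(target A | cue) with a Beta belief, trial by trial.
a, b = 1.0, 1.0                      # Beta(1, 1) prior over P(target A | cue)
outcomes = [1, 1, 0, 1, 1, 1, 0, 0]  # hypothetical trial sequence (1 = target A shown)
for y in outcomes:
    prediction = a / (a + b)         # prediction held before seeing the outcome
    print(f"predict P(A) = {prediction:.2f}, observe {y}")
    a, b = a + y, b + (1 - y)        # conjugate posterior update
# Such trial-wise predictions can then enter a model-based fMRI analysis as regressors.
```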
Visualization of Concurrent Program Executions
NASA Technical Reports Server (NTRS)
Artho, Cyrille; Havelund, Klaus; Honiden, Shinichi
2007-01-01
Various program analysis techniques are efficient at discovering failures and properties. However, it is often difficult to evaluate results, such as program traces. This calls for abstraction and visualization tools. We propose an approach based on UML sequence diagrams, addressing shortcomings of such diagrams for concurrency. The resulting visualization is expressive and provides all the necessary information at a glance.
Visual Basic Programming Impact on Cognitive Style of College Students: Need for Prerequisites
ERIC Educational Resources Information Center
White, Garry L.
2012-01-01
This research investigated the impact learning a visual programming language, Visual Basic, has on hemispheric cognitive style, as measured by the Hemispheric Mode Indicator (HMI). The question to be answered is: will a computer programming course help students improve their cognitive abilities in order to perform well? The cognitive styles for…
ERIC Educational Resources Information Center
Saltan, Fatih
2017-01-01
Online Algorithm Visualization (OAV) is one of the recent developments in the instructional technology field that aims to help students handle difficulties faced when they begin to learn programming. This study aims to investigate the effect of online algorithm visualization on students' achievement in the introduction to programming course. To…
Dichotic and dichoptic digit perception in normal adults.
Lawfield, Angela; McFarland, Dennis J; Cacace, Anthony T
2011-06-01
Verbally based dichotic-listening experiments and reproduction-mediated response-selection strategies have been used for over four decades to study perceptual/cognitive aspects of auditory information processing and make inferences about hemispheric asymmetries and language lateralization in the brain. Test procedures using dichotic digits have also been used to assess for disorders of auditory processing. However, with this application, limitations exist and paradigms need to be developed to improve specificity of the diagnosis. Use of matched tasks in multiple sensory modalities is a logical approach to address this issue. Herein, we use dichotic listening and dichoptic viewing of visually presented digits for making this comparison. The purpose was to evaluate methodological issues involved in using matched tasks of dichotic listening and dichoptic viewing in normal adults. The design was a multivariate assessment of the effects of modality (auditory vs. visual), digit-span length (1-3 pairs), response selection (recognition vs. reproduction), and ear/visual hemifield of presentation (left vs. right) on dichotic and dichoptic digit perception. Participants were thirty adults (12 males, 18 females) ranging in age from 18 to 30 yr with normal hearing sensitivity and normal or corrected-to-normal visual acuity. A computerized, custom-designed program was used for all data collection and analysis. A four-way repeated measures analysis of variance (ANOVA) evaluated the effects of modality, digit-span length, response selection, and ear/visual field of presentation. The ANOVA revealed that performances on dichotic listening and dichoptic viewing tasks were dependent on complex interactions between modality, digit-span length, response selection, and ear/visual hemifield of presentation. Correlation analysis suggested a common effect on overall accuracy of performance but isolated only an auditory factor for a laterality index. The variables used in this experiment affected performances in the auditory modality to a greater extent than in the visual modality. The right-ear advantage observed in the dichotic-digits task was most evident when reproduction-mediated response selection was used in conjunction with three-digit pairs. This effect implies that factors such as "speech related output mechanisms" and digit-span length (working memory) contribute to laterality effects in dichotic listening performance with traditional paradigms. Thus, the use of multiple-digit pairs to avoid ceiling effects and the application of verbal reproduction as a means of response selection may accentuate the role of nonperceptual factors in performance. Ideally, tests of perceptual abilities should be relatively free of such effects. American Academy of Audiology.
Hierarchy-associated semantic-rule inference framework for classifying indoor scenes
NASA Astrophysics Data System (ADS)
Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei
2016-03-01
Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
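In the high-level inference stage, hierarchical rules extracted from the annotations train a decision tree; a hedged sketch of that classification step with scikit-learn, where the binary semantic features and scene labels are made-up stand-ins for the paper's annotation output:

```python
# Sketch: classify indoor scenes from annotation-derived semantic features with a
# decision tree. The features ("has_bed", "has_sink", ...) are hypothetical.
from sklearn.tree import DecisionTreeClassifier

#           has_bed  has_sink  has_sofa  has_desk
X_train = [[1, 0, 0, 0],
           [1, 0, 1, 0],
           [0, 1, 0, 0],
           [0, 0, 1, 1],
           [0, 0, 0, 1]]
y_train = ["bedroom", "bedroom", "bathroom", "office", "office"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict([[1, 0, 0, 0]]))   # -> ['bedroom'] on this toy data
```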
Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.
ERIC Educational Resources Information Center
Shama, Gilli; Dreyfus, Tommy
1994-01-01
Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…
Clapham, Kathleen; Manning, Claire; Williams, Kathryn; O'Brien, Ginger; Sutherland, Margaret
2017-04-01
Despite clear evidence that learning and social opportunities for children with disabilities and special needs are more effective in inclusive rather than segregated settings, there are few known effective inclusion programs available to children with disabilities, their families or teachers in the early years within Australia. The Kids Together program was developed to support children with disabilities/additional needs aged 0-8 years attending mainstream early learning environments. Using a key worker transdisciplinary team model, the program aligns with the individualised package approach of the National Disability Insurance Scheme (NDIS). This paper reports on the use of a logic model to underpin the process, outcomes and impact evaluation of the Kids Together program. The research team worked across 15 Early Childhood Education and Care (ECEC) centres and in home and community settings. A realist evaluation using mixed methods was undertaken to understand what works, for whom and in what contexts. The development of a logic model provided a structured way to explore how the program was implemented and achieved short, medium and long term outcomes within a complex community setting. Kids Together was shown to be a highly effective and innovative model for supporting the inclusion of children with disabilities/additional needs in a range of environments central for early childhood learning and development. The use of a logic model provided a visual representation of the Kids Together model and its component parts and enabled a theory of change to be inferred, showing how a coordinated and collaborative approach can work across multiple environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
A Spatial Framework for Understanding Population Structure and Admixture.
Bradburd, Gideon S; Ralph, Peter L; Coop, Graham M
2016-01-01
Geographic patterns of genetic variation within modern populations, produced by complex histories of migration, can be difficult to infer and visually summarize. A general consequence of geographically limited dispersal is that samples from nearby locations tend to be more closely related than samples from distant locations, and so genetic covariance often recapitulates geographic proximity. We use genome-wide polymorphism data to build "geogenetic maps," which, when applied to stationary populations, produce a map of the geographic positions of the populations, but with distances distorted to reflect historical rates of gene flow. In the underlying model, allele frequency covariance is a decreasing function of geogenetic distance, and nonlocal gene flow such as admixture can be identified as anomalously strong covariance over long distances. This admixture is explicitly co-estimated and depicted as arrows, from the source of admixture to the recipient, on the geogenetic map. We demonstrate the utility of this method on a circum-Tibetan sampling of the greenish warbler (Phylloscopus trochiloides), in which we find evidence for gene flow between the adjacent, terminal populations of the ring species. We also analyze a global sampling of human populations, for which we largely recover the geography of the sampling, with support for significant histories of admixture in many samples. This new tool for understanding and visualizing patterns of population structure is implemented in a Bayesian framework in the program SpaceMix.
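The model's central assumption is that allele-frequency covariance decays with geogenetic distance, with anomalously high covariance at long range flagging admixture. A minimal sketch of fitting such a decay curve (the exponential form and simulated data are illustrative; SpaceMix's actual parameterization differs):

```python
# Sketch: fit a decaying covariance-vs-distance curve of the kind SpaceMix builds on.
# The functional form and the simulated data are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def cov_decay(d, c0, alpha):
    return c0 * np.exp(-alpha * d)

rng = np.random.default_rng(1)
distance = np.linspace(0, 50, 40)                                   # pairwise distances
covariance = cov_decay(distance, 0.8, 0.07) + rng.normal(0, 0.02, distance.size)

(c0_hat, alpha_hat), _ = curve_fit(cov_decay, distance, covariance, p0=[1.0, 0.1])
print(round(float(c0_hat), 2), round(float(alpha_hat), 3))
# Population pairs whose covariance sits well above this curve at long distances are
# candidates for admixture, which SpaceMix depicts as arrows on the geogenetic map.
```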
Comprehension of Spacecraft Telemetry Using Hierarchical Specifications of Behavior
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Joshi, Rajeev
2014-01-01
A key challenge in operating remote spacecraft is that ground operators must rely on the limited visibility available through spacecraft telemetry in order to assess spacecraft health and operational status. We describe a tool for processing spacecraft telemetry that allows ground operators to impose structure on received telemetry in order to achieve a better comprehension of system state. A key element of our approach is the design of a domain-specific language that allows operators to express models of expected system behavior using partial specifications. The language allows behavior specifications with data fields, similar to other recent runtime verification systems. What is notable about our approach is the ability to develop hierarchical specifications of behavior. The language is implemented as an internal DSL in the Scala programming language that synthesizes rules from patterns of specification behavior. The rules are automatically applied to received telemetry and the inferred behaviors are available to ground operators using a visualization interface that makes it easier to understand and track spacecraft state. We describe initial results from applying our tool to telemetry received from the Curiosity rover currently roving the surface of Mars, where the visualizations are being used to trend subsystem behaviors, in order to identify potential problems before they happen. However, the technology is completely general and can be applied to any system that generates telemetry such as event logs.
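The tool itself is an internal Scala DSL; purely as a language-agnostic illustration of the kind of data-parameterized behavior rule it supports (for example, every command dispatch should eventually be matched by a completion carrying the same identifier), here is a hedged Python sketch with hypothetical event names and fields:

```python
# Illustrative monitor only: track open command dispatches in a telemetry stream and
# flag those that never complete. Event names and fields are hypothetical; the paper
# implements such rules as an internal Scala DSL with an accompanying visualization.
def check_dispatch_complete(events):
    open_cmds = {}
    violations = []
    for ev in events:
        if ev["name"] == "CMD_DISPATCH":
            open_cmds[ev["cmd_id"]] = ev["time"]
        elif ev["name"] == "CMD_COMPLETE":
            if ev["cmd_id"] not in open_cmds:
                violations.append(f"completion without dispatch: {ev['cmd_id']}")
            else:
                del open_cmds[ev["cmd_id"]]
    violations += [f"dispatch never completed: {c}" for c in open_cmds]
    return violations

telemetry = [
    {"name": "CMD_DISPATCH", "cmd_id": "A17", "time": 0.0},
    {"name": "CMD_COMPLETE", "cmd_id": "A17", "time": 4.2},
    {"name": "CMD_DISPATCH", "cmd_id": "B03", "time": 5.1},
]
print(check_dispatch_complete(telemetry))   # ['dispatch never completed: B03']
```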
An error-resistant linguistic protocol for air traffic control
NASA Technical Reports Server (NTRS)
Cushing, Steven
1989-01-01
The research results described here are intended to enhance the effectiveness of the DATALINK interface that is scheduled by the Federal Aviation Administration (FAA) to be deployed during the 1990s to improve the safety of various aspects of aviation. While voice has a natural appeal as the means of communication that people find most convenient, both among humans and between humans and machines, the complexity and flexibility of natural language are problematic because of the confusions and misunderstandings that can arise from ambiguity, unclear reference, intonation peculiarities, implicit inference, and presupposition. The DATALINK interface will avoid many of these problems by replacing voice with vision and speech with written instructions. This report describes results achieved to date on an on-going research effort to refine the protocol of the DATALINK system so as to avoid many of the linguistic problems that still remain in the visual mode. In particular, a working prototype DATALINK simulator system has been developed consisting of an unambiguous, context-free grammar and parser, based on the current air-traffic-control language and incorporated into a visual display involving simulated touch-screen buttons and three levels of menu screens. The system is written in the C programming language and runs on the Macintosh II computer. After reviewing work already done on the project, new tasks for further development are described.
DNA context represents transcription regulation of the gene in mouse embryonic stem cells
NASA Astrophysics Data System (ADS)
Ha, Misook; Hong, Soondo
2016-04-01
Understanding gene regulatory information in DNA remains a significant challenge in biomedical research. This study presents a computational approach to infer gene regulatory programs from primary DNA sequences. Using DNA around transcription start sites as attributes, our model predicts gene regulation in the gene. We find that H3K27ac around TSS is an informative descriptor of the transcription program in mouse embryonic stem cells. We build a computational model inferring the cell-type-specific H3K27ac signatures in the DNA around TSS. A comparison of embryonic stem cell and liver cell-specific H3K27ac signatures in DNA shows that the H3K27ac signatures in DNA around TSS efficiently distinguish the cell-type specific H3K27ac peaks and the gene regulation. The arrangement of the H3K27ac signatures inferred from the DNA represents the transcription regulation of the gene in mESC. We show that the DNA around transcription start sites is associated with the gene regulatory program by specific interaction with H3K27ac.
ERIC Educational Resources Information Center
Eid, Chaker; Millham, Richard
2012-01-01
In this paper, we discuss the visual programming approach to teaching introductory programming courses and then compare this approach with that of procedural programming. The involved cognitive levels of students, as beginning students are introduced to different types of programming concepts, are correlated to the learning processes of…
Visual and haptic integration in the estimation of softness of deformable objects
Cellini, Cristiano; Kaim, Lukas; Drewing, Knut
2013-01-01
Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
Experience Report: Visual Programming in the Real World
NASA Technical Reports Server (NTRS)
Baroth, E.; Hartsough, C
1994-01-01
This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.
ERIC Educational Resources Information Center
Pogrund, Rona L.; Darst, Shannon; Boland, Teryl
2013-01-01
Introduction: The results of a 2009-2010 program evaluation study that examined parents, teachers of students with visual impairments, administrators, and students regarding overall satisfaction with and effectiveness of the short-term programs at a residential school for students who are blind and visually impaired are described. The findings are…
Learning Program for Enhancing Visual Literacy for Non-Design Students Using a CMS to Share Outcomes
ERIC Educational Resources Information Center
Ariga, Taeko; Watanabe, Takashi; Otani, Toshio; Masuzawa, Toshimitsu
2016-01-01
This study proposes a basic learning program for enhancing visual literacy using an original Web content management system (Web CMS) to share students' outcomes in class as a blog post. It seeks to reinforce students' understanding and awareness of the design of visual content. The learning program described in this research focuses on to address…
The visual management system of the Forest Service, USDA
Warren R. Bacon
1979-01-01
The National Forest Landscape Management Program began, as a formal program, at a Servicewide meeting in St. Louis in 1969 in response to growing agency and public concern for the visual resource. It is now an accepted part of National Forest management and is supported by a large and growing foundation of handbooks, research papers, and audio/visual programs. This...
Tools for Understanding Identity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creese, Sadie; Gibson-Robinson, Thomas; Goldsmith, Michael
Identity attribution and enrichment are critical to many aspects of law-enforcement and intelligence gathering; this identity typically spans a number of domains in the natural world, such as biographic information (factual information – e.g. names, addresses), biometric information (e.g. fingerprints) and psychological information. In addition to these natural-world projections of identity, identity elements are projected in the cyber-world. Conversely, undesirable elements may use similar techniques to target individuals for spear-phishing attacks (or worse), and potential targets or their organizations may want to determine how to minimize the attack surface exposed. Our research has been exploring the construction of a mathematical model for identity that supports such holistic identities. The model captures the ways in which an identity is constructed through a combination of data elements (e.g. a username on a forum, an address, a telephone number). Some of these elements may allow new characteristics to be inferred, hence enriching the holistic view of the identity. An example use-case would be the inference of real names from usernames; the ‘path’ created by inferring new elements of identity is highlighted in the ‘critical information’ panel. Individual attribution exercises can be understood as paths through a number of elements. Intuitively the entire realizable ‘capability’ can be modeled as a directed graph, where the elements are nodes and the inferences are represented by links connecting one or more antecedents with a conclusion. The model can be operationalized with two levels of tool support described in this paper; the first is a working prototype, and the second is expected to reach prototype by July 2013. Understanding the Model: The tool allows a user to easily determine, given a particular set of inferences and attributes, which elements or inferences are of most value to an investigator (or an attacker). The tool is also able to take into account the difficulty of the inferences, allowing the user to consider different scenarios depending on the perceived resources of the attacker, or to prioritize lines of investigation. It also has a number of interesting visualizations that are designed to aid the user in understanding the model. The tool works by considering the inferences as a graph and runs various graph-theoretic algorithms, with some novel adaptations, in order to deduce various properties. Using the Model: To help investigators exploit the model to perform identity attribution, we have developed the Identity Map visualization. For a user-provided set of known starting elements and a set of desired target elements for a given identity, the Identity Map generates investigative workflows as paths through the model. Each path consists of a series of elements and inferences between them that connect the input and output elements. Each path also has an associated confidence level that estimates the reliability of the resulting attribution. Identity Map can help investigators understand the possible ways to make an identification decision and guide them toward the data-collection or analysis steps required to reach that decision.
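Since the model treats identity elements as nodes of a directed graph and inferences as edges, an attribution exercise is a path search; a minimal sketch with NetworkX, in which the element names and edge costs are hypothetical:

```python
# Sketch: identity elements as nodes, inferences as weighted directed edges, and an
# attribution exercise as a least-cost path. Element names and costs are hypothetical;
# the described tooling also models inference difficulty and per-path confidence.
import networkx as nx

G = nx.DiGraph()
G.add_edge("forum_username", "email_address", cost=2.0)   # e.g. profile scraping
G.add_edge("email_address", "real_name", cost=3.0)
G.add_edge("forum_username", "real_name", cost=6.0)       # direct but harder inference
G.add_edge("real_name", "home_address", cost=4.0)

path = nx.shortest_path(G, "forum_username", "home_address", weight="cost")
print(path)   # ['forum_username', 'email_address', 'real_name', 'home_address']
```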
Interactive processing of contrastive expressions by Russian children.
Sekerina, Irina A; Trueswell, John C
2012-04-05
Children's ability to interpret color adjective noun phrases (e.g., red butterfly) as contrastive was examined in an eyetracking study with 6-year-old Russian children. Pitch accent placement (on the adjective "red" or on the noun "butterfly") was compared within a visual context containing two red referents (a butterfly and a fox) when only one of them had a contrast member (a purple butterfly) or when both had a contrast member (a purple butterfly and a grey fox). Contrastiveness was enhanced by the Russian-specific 'split constituent' construction (e.g., "Red put butterfly..."), in which a contrastive interpretation of the color term requires pitch accent on the adjective, with the nonsplit sentences serving as control. Regardless of the experimental manipulations, children had to wait until hearing the noun (butterfly) to identify the referent, even in splits. This occurred even under conditions for which the prosody and the visual context allow adult listeners to infer the relevant contrast set and anticipate the referent prior to hearing the noun (accent on the adjective in 1-Contrast scenes). Pitch accent on the adjective did facilitate children's referential processing, but only for the nonsplit constituents. Moreover, visual contexts that encouraged the correct contrast set (1-Contrast) only facilitated referential processing after hearing the noun, even in splits. Further analyses showed that children can anticipate the reference like adults but only when the contrast set is made salient by the preceding supportive discourse, that is, when the inference about the intended contrast set is provided by the preceding utterance.
Cross-orientation suppression in human visual cortex
Heeger, David J.
2011-01-01
Cross-orientation suppression was measured in human primary visual cortex (V1) to test the normalization model. Subjects viewed vertical target gratings (of varying contrasts) with or without a superimposed horizontal mask grating (fixed contrast). We used functional magnetic resonance imaging (fMRI) to measure the activity in each of several hypothetical channels (corresponding to subpopulations of neurons) with different orientation tunings and fit these orientation-selective responses with the normalization model. For the V1 channel maximally tuned to the target orientation, responses increased with target contrast but were suppressed when the horizontal mask was added, evident as a shift in the contrast gain of this channel's responses. For the channel maximally tuned to the mask orientation, a constant baseline response was evoked for all target contrasts when the mask was absent; responses decreased with increasing target contrast when the mask was present. The normalization model provided a good fit to the contrast-response functions with and without the mask. In a control experiment, the target and mask presentations were temporally interleaved, and we found no shift in contrast gain, i.e., no evidence for suppression. We conclude that the normalization model can explain cross-orientation suppression in human visual cortex. The approach adopted here can be applied broadly to infer, simultaneously, the responses of several subpopulations of neurons in the human brain that span particular stimulus or feature spaces, and characterize their interactions. In addition, it allows us to investigate how stimuli are represented by the inferred activity of entire neural populations. PMID:21775720
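The normalization model fitted to these channel responses has a standard divisive form in which mask contrast enters the denominator, shifting the contrast gain of the target-tuned channel. A minimal sketch of that equation (exponent and semi-saturation constant are illustrative, not the fitted fMRI values):

```python
# Sketch of the canonical normalization equation for a channel tuned to the target:
# response grows with target contrast but is divisively suppressed by mask contrast.
import numpy as np

def normalization_response(c_target, c_mask=0.0, n=2.0, sigma=0.1, r_max=1.0):
    return r_max * c_target**n / (c_target**n + c_mask**n + sigma**n)

contrasts = np.array([0.03, 0.06, 0.12, 0.25, 0.5, 1.0])
print(normalization_response(contrasts))              # no mask
print(normalization_response(contrasts, c_mask=0.3))  # mask shifts the contrast gain
```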
Is visual short-term memory depthful?
Reeves, Adam; Lei, Quan
2014-03-01
Does visual short-term memory (VSTM) depend on depth, as it might be if information was stored in more than one depth layer? Depth is critical in natural viewing and might be expected to affect retention, but whether this is so is currently unknown. Cued partial reports of letter arrays (Sperling, 1960) were measured up to 700 ms after display termination. Adding stereoscopic depth hardly affected VSTM capacity or decay inferred from total errors. The pattern of transposition errors (letters reported from an uncued row) was almost independent of depth and cue delay. We conclude that VSTM is effectively two-dimensional. Copyright © 2014 Elsevier Ltd. All rights reserved.
2017-04-01
Advanced Visualization and Interactive Display Rapid Innovation and Discovery Evaluation Research (VISRIDER) Program, Task 6: Point Cloud Visualization Techniques. This report (covering October 2013 to September 2014) surveys various point cloud visualization techniques for viewing large-scale LiDAR datasets and evaluates their potential use for thick-client desktop platforms.
ERIC Educational Resources Information Center
Austerweil, Joseph L.; Griffiths, Thomas L.; Palmer, Stephen E.
2017-01-01
How does the visual system recognize images of a novel object after a single observation despite possible variations in the viewpoint of that object relative to the observer? One possibility is comparing the image with a prototype for invariance over a relevant transformation set (e.g., translations and dilations). However, invariance over…
ERIC Educational Resources Information Center
Cheng, Kun-Hung; Tsai, Chin-Chung
2016-01-01
Following a previous study (Cheng & Tsai, 2014. "Computers & Education"), this study aimed to probe the interaction of child-parent shared reading with the augmented reality (AR) picture book in more depth. A series of sequential analyses were thus conducted to infer the behavioral transition diagrams and visualize the continuity…
ERIC Educational Resources Information Center
Arend, Anna M.; Zimmer, Hubert D.
2011-01-01
In the lateralized change detection task, two item arrays are presented, one on each side of the display. Participants have to remember the items in the relevant hemifield and ignore the items in the irrelevant hemifield. A difference wave between contralateral and ipsilateral slow potentials with respect to the relevant items, the contralateral…
Craig M. Thompson; J. Andrew Royle; James D. Garner
2012-01-01
Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark-recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the...
ERIC Educational Resources Information Center
Berenson, Mark L.
2013-01-01
There is consensus in the statistical literature that severe departures from its assumptions invalidate the use of regression modeling for purposes of inference. The assumptions of regression modeling are usually evaluated subjectively through visual, graphic displays in a residual analysis but such an approach, taken alone, may be insufficient…
Analysis of hyperspectral fluorescence images for poultry skin tumor inspection
NASA Astrophysics Data System (ADS)
Kong, Seong G.; Chen, Yud-Ren; Kim, Intaek; Kim, Moon S.
2004-02-01
We present a hyperspectral fluorescence imaging system with a fuzzy inference scheme for detecting skin tumors on poultry carcasses. Hyperspectral images reveal spatial and spectral information useful for finding pathological lesions or contaminants on agricultural products. Skin tumors are not obvious because the visual signature appears as a shape distortion rather than a discoloration. Fluorescence imaging allows the visualization of poultry skin tumors more easily than reflectance. The hyperspectral image samples obtained for this poultry tumor inspection contain 65 spectral bands of fluorescence in the visible region of the spectrum at wavelengths ranging from 425 to 711 nm. The large amount of hyperspectral image data is compressed by use of a discrete wavelet transform in the spatial domain. Principal-component analysis provides an effective compressed representation of the spectral signal of each pixel in the spectral domain. A small number of significant features are extracted from two major spectral peaks of relative fluorescence intensity that have been identified as meaningful spectral bands for detecting tumors. A fuzzy inference scheme that uses a small number of fuzzy rules and Gaussian membership functions successfully detects skin tumors on poultry carcasses. Spatial-filtering techniques are used to significantly reduce false positives.
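Only the final decision stage is sketched here: Gaussian membership functions over a couple of spectral features combined by a small rule base, in the spirit of the fuzzy inference scheme described above. The feature values, centers, and widths are illustrative assumptions, not the paper's selected bands or fitted parameters, and the wavelet/PCA compression steps are omitted.

```python
# Sketch of a two-rule fuzzy decision stage with Gaussian memberships; all numbers
# are illustrative placeholders for features extracted from the fluorescence peaks.
import numpy as np

def gauss(x, center, width):
    return np.exp(-0.5 * ((x - center) / width) ** 2)

def tumor_score(f1, f2):
    # Rule 1: IF f1 is LOW AND f2 is HIGH THEN tumor
    # Rule 2: IF f1 is HIGH AND f2 is LOW THEN normal
    w_tumor = min(gauss(f1, 0.3, 0.1), gauss(f2, 0.8, 0.1))
    w_normal = min(gauss(f1, 0.8, 0.1), gauss(f2, 0.3, 0.1))
    return w_tumor / (w_tumor + w_normal + 1e-12)

print(round(float(tumor_score(0.32, 0.75)), 2))   # high score -> likely tumor pixel
print(round(float(tumor_score(0.78, 0.35)), 2))   # low score  -> likely normal skin
```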
On compensatory strategies and computational models: the case of pure alexia.
Shallice, Tim
2014-01-01
The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems involved in making such inferences that compensatory strategies produce. The evidence is discussed on the possible use of three ways the letter-by-letter reading process might operate: "reversed spelling"; the use of the phonological input buffer as a temporary holding store during word building; and the use of serial input to the visual word-form system entirely within the visual-orthographic domain such as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by, at least, one pure alexic patient does not fit with the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but they are not likely to be useful for the development and assessment of second-generation computational models.
Implicit knowledge of visual uncertainty guides decisions with asymmetric outcomes.
Whiteley, Louise; Sahani, Maneesh
2008-03-06
Perception is an "inverse problem," in which the state of the world must be inferred from the sensory neural activity that results. However, this inference is both ill-posed (Helmholtz, 1856; Marr, 1982) and corrupted by noise (Green & Swets, 1989), requiring the brain to compute perceptual beliefs under conditions of uncertainty. Here we show that human observers performing a simple visual choice task under an externally imposed loss function approach the optimal strategy, as defined by Bayesian probability and decision theory (Berger, 1985; Cox, 1961). In concert with earlier work, this suggests that observers possess a model of their internal uncertainty and can utilize this model in the neural computations that underlie their behavior (Knill & Pouget, 2004). In our experiment, optimal behavior requires that observers integrate the loss function with an estimate of their internal uncertainty rather than simply requiring that they use a modal estimate of the uncertain stimulus. Crucially, they approach optimal behavior even when denied the opportunity to learn adaptive decision strategies based on immediate feedback. Our data thus support the idea that flexible representations of uncertainty are pre-existing, widespread, and can be propagated to decision-making areas of the brain.
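Behaving optimally here means integrating the externally imposed loss function over the full posterior rather than responding with the modal estimate. A minimal sketch with a Gaussian posterior and a made-up asymmetric loss shows how the optimal response shifts away from the posterior mode:

```python
# Sketch: choose the response that minimizes posterior-expected loss under an
# asymmetric loss. The Gaussian posterior and loss ratio are illustrative assumptions.
import numpy as np

grid = np.linspace(-10, 10, 2001)                       # candidate stimulus values
posterior = np.exp(-0.5 * ((grid - 1.0) / 2.0) ** 2)    # assumed mean 1.0, sd 2.0
posterior /= posterior.sum()

def loss(response, true_value):
    # Overestimates are penalized three times as heavily as underestimates.
    err = response - true_value
    return np.where(err > 0, 3.0 * err, -err)

expected_loss = [np.sum(loss(r, grid) * posterior) for r in grid]
best = grid[int(np.argmin(expected_loss))]
print(round(float(best), 2))   # pulled below the posterior mean by the asymmetry
```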
Comparative analysis on the selection of number of clusters in community detection
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro; Kabashima, Yoshiyuki
2018-02-01
We conduct a comparative analysis on various estimates of the number of clusters in community detection. An exhaustive comparison requires testing of all possible combinations of frameworks, algorithms, and assessment criteria. In this paper we focus on the framework based on a stochastic block model, and investigate the performance of greedy algorithms, statistical inference, and spectral methods. For the assessment criteria, we consider modularity, map equation, Bethe free energy, prediction errors, and isolated eigenvalues. From the analysis, the tendency of overfit and underfit that the assessment criteria and algorithms have becomes apparent. In addition, we propose that the alluvial diagram is a suitable tool to visualize statistical inference results and can be useful to determine the number of clusters.
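Modularity is the most familiar of the assessment criteria compared; as a hedged, minimal illustration of scoring candidate partitions by a single criterion (not the paper's setup), NetworkX's divisive Girvan-Newman algorithm can be scored at successive numbers of communities. Modularity maximization is known to be prone to the kind of overfitting tendency the paper examines.

```python
# Sketch: score partitions with increasing numbers of communities by modularity.
# This illustrates only one of the assessment criteria compared in the paper.
import networkx as nx
from itertools import islice
from networkx.algorithms import community

G = nx.karate_club_graph()
partitions = community.girvan_newman(G)
for parts in islice(partitions, 4):          # partitions into 2, 3, 4, 5 communities
    q = community.modularity(G, parts)
    print(len(parts), round(q, 3))
# Picking the partition that maximizes modularity is one (overfit-prone) estimate of
# the number of clusters; the paper compares it against several alternative criteria.
```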
Real-time tracking of visually attended objects in virtual environments and its application to LOD.
Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon
2009-01-01
This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was in the level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.
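At its core, the framework combines a bottom-up object saliency score with top-down contextual weights inferred from the user's behavior and selects the most plausible object; a toy sketch with made-up scores and a fixed 50/50 mixing weight (the actual framework computes these terms on the GPU from the rendered scene and the user's spatial and temporal behavior):

```python
# Toy illustration: combine bottom-up saliency with top-down context per object and
# pick the most plausibly attended object. All scores and the mixing weight are made up.
bottom_up = {"lamp": 0.7, "door": 0.4, "painting": 0.6}   # stimulus-driven saliency
top_down  = {"lamp": 0.2, "door": 0.9, "painting": 0.3}   # inferred from user behavior

combined = {obj: 0.5 * bottom_up[obj] + 0.5 * top_down[obj] for obj in bottom_up}
attended = max(combined, key=combined.get)
print(attended)   # 'door' on these made-up scores
```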
ERIC Educational Resources Information Center
Poon, K. W.; Li-Tsang, C. W .P.; Weiss, T. P. L.; Rosenblum, S.
2010-01-01
This study aimed to investigate the effect of a computerized visual perception and visual-motor integration training program to enhance Chinese handwriting performance among children with learning difficulties, particularly those with handwriting problems. Participants were 26 primary-one children who were assessed by educational psychologists and…
ERIC Educational Resources Information Center
Griffin, Robert E., Ed.; And Others
This document contains 59 selected papers from the 1996 International Visual Literacy Association (IVLA) conference. Topics include: learning to think visually; information design via the Internet; a program for inner-city at-risk children; dubbing versus subtitling television programs; connecting advertisements and classroom reading through…
ERIC Educational Resources Information Center
Buditjahjanto, I. G. P. Asto; Nurlaela, Luthfiyah; Ekohariadi; Riduwan, Mochamad
2017-01-01
Programming technique is one of the subjects at Vocational High School in Indonesia. This subject contains theory and application of programming utilizing Visual Programming. Students experience some difficulties to learn textual learning. Therefore, it is necessary to develop media as a tool to transfer learning materials. The objectives of this…
Yang, K T; Lin, C C; Chang, L Y
2011-12-01
Visual arts have been used to facilitate the teaching of the United States Accreditation Council for Graduate Medical Education (ACGME) competencies used in some countries. Some medical students may not appreciate the usefulness of incorporating arts in medical education. Therefore, arts programs that can interest medical students are necessary. We initiated and evaluated a visual arts program at the Changhua Christian Hospital in Changhua, Taiwan, with an aim to give the students a short review of visual arts and to interest them in the incorporation of arts in medicine. A total of 110 students in clerkship or internship participated in a visual arts program with emphasis on medicine-related visual arts. Content analysis of the data from the notes made by the instructor from direct observation of students; descriptions during discussions and the written feedback from students at the end of the program was used to evaluate the effect of the program. Anonymous questionnaires were also used for self-assessment of students. Qualitative analysis of the data revealed that the course was interesting to students. Themes emerged including its helpfulness to students in interpreting paintings, enhanced empathy, increased cultural awareness, enhanced observational skills, better team work, listening and communication skills and reduced stress. Ratings on the questionnaire showed similar results. Moreover, students had an increase in their confidence and desire to interpret paintings. The structured visual arts program, with emphasis on medicine-related visual arts and other humanities subjects, was able to attract the attention of medical students. It might be helpful to improve the required skills of ACGME competencies, but further studies are needed to support these conclusions.
McClure, J T; Browning, R T; Vantrease, C M; Bittle, S T
1994-01-01
Previous research suggests that traumatic brain injury (TBI) results in impairment of iconic memory abilities. This raises serious implications for brain injury rehabilitation. Most cognitive rehabilitation programs do not include iconic memory training. Instead it is common for cognitive rehabilitation programs to focus on attention and concentration skills, memory skills, and visual scanning skills. This study compared the iconic memory skills of brain-injury survivors and control subjects who all reached criterion levels of visual scanning skills. This involved previous training for the brain-injury survivors using popular visual scanning programs that allowed them to visually scan with response time and accuracy within normal limits. Control subjects required only minimal training to reach normal-limits criteria. This comparison allows for the dissociation of visual scanning skills and iconic memory skills. The results are discussed in terms of their implications for cognitive rehabilitation and the relationship between visual scanning training and iconic memory skills.
ViSEN: methodology and software for visualization of statistical epistasis networks
Hu, Ting; Chen, Yuanzhu; Kiralis, Jeff W.; Moore, Jason H.
2013-01-01
The non-linear interaction effect among multiple genetic factors, i.e. epistasis, has been recognized as a key component in understanding the underlying genetic basis of complex human diseases and phenotypic traits. Due to the statistical and computational complexity, most epistasis studies are limited to interactions of order two. We developed ViSEN to analyze and visualize both two-way and three-way epistatic interactions. ViSEN not only identifies strong interactions among pairs or trios of genetic attributes, but also provides a global interaction map that shows neighborhood and clustering structures. This visualized information could be very helpful for inferring the underlying genetic architecture of complex diseases and for generating plausible hypotheses for further biological validation. ViSEN is implemented in Java and freely available at https://sourceforge.net/projects/visen/. PMID:23468157
Neural network modelling of the influence of channelopathies on reflex visual attention.
Gravier, Alexandre; Quek, Chai; Duch, Włodzisław; Wahab, Abdul; Gravier-Rymaszewska, Joanna
2016-02-01
This paper introduces a model of Emergent Visual Attention in the presence of calcium channelopathy (EVAC). By modelling channelopathy, EVAC constitutes an effort towards identifying the possible causes of autism. The network structure embodies the dual-pathways model of cortical processing of visual input, with reflex attention as an emergent property of neural interactions. EVAC extends existing work by introducing attention shift in a larger-scale network and applying a phenomenological model of channelopathy. In the presence of a distractor, the channelopathic network's rate of failure to shift attention is lower than the control network's, but overall, the control network exhibits a lower classification error rate. The simulation results also show differences in task-relative reaction times between control and channelopathic networks. The attention shift timings inferred from the model are consistent with studies of attention shift in autistic children.
Optic Flow Dominates Visual Scene Polarity in Causing Adaptive Modification of Locomotor Trajectory
NASA Technical Reports Server (NTRS)
Nomura, Y.; Mulavara, A. P.; Richards, J. T.; Brady, R.; Bloomberg, Jacob J.
2005-01-01
Locomotion and posture are influenced and controlled by vestibular, visual and somatosensory information. Optic flow and scene polarity are two characteristics of a visual scene that have been identified as being critical in how they affect perceived body orientation and self-motion. The goal of this study was to determine the role of optic flow and visual scene polarity on adaptive modification in locomotor trajectory. Two computer-generated virtual reality scenes were shown to subjects during 20 minutes of treadmill walking. One scene was a highly polarized scene while the other was composed of objects displayed in a non-polarized fashion. Both virtual scenes depicted constant rate self-motion equivalent to walking counterclockwise around the perimeter of a room. Subjects performed Stepping Tests blindfolded before and after scene exposure to assess adaptive changes in locomotor trajectory. Subjects showed a significant difference in heading direction, between pre and post adaptation stepping tests, when exposed to either scene during treadmill walking. However, there was no significant difference in the subjects' heading direction between the two visual scene polarity conditions. Therefore, it was inferred from these data that optic flow has a greater role than visual polarity in influencing adaptive locomotor function.
Encoding color information for visual tracking: Algorithms and benchmark.
Liang, Pengpeng; Blasch, Erik; Ling, Haibin
2015-12-01
While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
Grapov, Dmitry; Newman, John W.
2012-01-01
Summary: Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large datasets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Availability and implementation: Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010). Contact: John.Newman@ars.usda.gov Supplementary Information: Installation instructions, tutorials and a users manual are available at http://sourceforge.net/projects/imdev/. PMID:22815358
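The specific imDEV interface is not reproduced here, but a minimal Python sketch of two of the analyses it wraps (principal component analysis and hierarchical clustering, shown via scikit-learn and SciPy rather than imDEV's R/VBA stack) may help make the workflow concrete; the toy matrix and all names are illustrative only.

    import numpy as np
    from sklearn.decomposition import PCA
    from scipy.cluster.hierarchy import linkage, fcluster

    # Toy omics-style matrix: 20 samples x 50 features (placeholder data)
    rng = np.random.default_rng(0)
    data = rng.normal(size=(20, 50))

    # Principal component analysis: project samples onto the first two components
    pca = PCA(n_components=2)
    scores = pca.fit_transform(data)
    print("explained variance ratio:", pca.explained_variance_ratio_)

    # Hierarchical clustering of samples (Ward linkage), cut into three clusters
    tree = linkage(data, method="ward")
    labels = fcluster(tree, t=3, criterion="maxclust")
    print("cluster labels:", labels)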
Inferring Interaction Force from Visual Information without Using Physical Force Sensors.
Hwang, Wonjun; Lim, Soo-Chul
2017-10-26
In this paper, we present an interaction force estimation method that uses visual information rather than a force sensor. Specifically, we propose a novel deep learning-based method utilizing only sequential images for estimating the interaction force against a target object whose shape is changed by an external force. The force applied to the target can be estimated from the visual shape changes. However, the shape differences in the images are not very clear. To address this problem, we formulate a recurrent neural network-based deep model with fully-connected layers, which models complex temporal dynamics from the visual representations. Extensive evaluations show that the proposed learning models successfully estimate the interaction forces using only the corresponding sequential images, in particular for objects made of different materials: a sponge, a PET bottle, a human arm, and a tube. The forces predicted by the proposed method are very similar to those measured by force sensors.
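The paper's exact network is not specified above, but the general idea (per-frame visual features fed to a recurrent model that regresses force) can be sketched in a few lines of PyTorch; the layer sizes, names, and random input below are assumptions for illustration, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ForceFromVision(nn.Module):
        """Illustrative sketch: per-frame encoder -> LSTM -> force regression."""
        def __init__(self, feature_dim=128, hidden_dim=64):
            super().__init__()
            # Tiny per-frame encoder (stand-in for the paper's visual representation)
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(16 * 4 * 4, feature_dim), nn.ReLU(),
            )
            self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, 1)   # scalar force per time step

        def forward(self, frames):                 # frames: (batch, time, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.encoder(frames.reshape(b * t, *frames.shape[2:]))
            out, _ = self.rnn(feats.reshape(b, t, -1))
            return self.head(out).squeeze(-1)      # (batch, time) force estimates

    # Example: a batch of 2 sequences of 8 RGB frames, 64x64 pixels each
    model = ForceFromVision()
    forces = model(torch.randn(2, 8, 3, 64, 64))
    print(forces.shape)                            # torch.Size([2, 8])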
SpreaD3: Interactive Visualization of Spatiotemporal History and Trait Evolutionary Processes.
Bielejec, Filip; Baele, Guy; Vrancken, Bram; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe
2016-08-01
Model-based phylogenetic reconstructions increasingly consider spatial or phenotypic traits in conjunction with sequence data to study evolutionary processes. Alongside parameter estimation, visualization of ancestral reconstructions represents an integral part of these analyses. Here, we present a complete overhaul of the spatial phylogenetic reconstruction of evolutionary dynamics software, now called SpreaD3 to emphasize the use of data-driven documents, as an analysis and visualization package that primarily complements Bayesian inference in BEAST (http://beast.bio.ed.ac.uk, last accessed 9 May 2016). The integration of JavaScript D3 libraries (www.d3.org, last accessed 9 May 2016) offers novel interactive web-based visualization capacities that are not restricted to spatial traits and extend to any discrete or continuously valued trait for any organism of interest. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Motion Direction Biases and Decoding in Human Visual Cortex
Wang, Helena X.; Merriam, Elisha P.; Freeman, Jeremy
2014-01-01
Functional magnetic resonance imaging (fMRI) studies have relied on multivariate analysis methods to decode visual motion direction from measurements of cortical activity. Above-chance decoding has been commonly used to infer the motion-selective response properties of the underlying neural populations. Moreover, patterns of reliable response biases across voxels that underlie decoding have been interpreted to reflect maps of functional architecture. Using fMRI, we identified a direction-selective response bias in human visual cortex that: (1) predicted motion-decoding accuracy; (2) depended on the shape of the stimulus aperture rather than the absolute direction of motion, such that response amplitudes gradually decreased with distance from the stimulus aperture edge corresponding to motion origin; and (3) was present in V1, V2, V3, but not evident in MT+, explaining the higher motion-decoding accuracies reported previously in early visual cortex. These results demonstrate that fMRI-based motion decoding has little or no dependence on the underlying functional organization of motion selectivity. PMID:25209297
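As a generic illustration of the decoding referred to above (not the authors' pipeline), cross-validated accuracy of a linear classifier over voxel response patterns is the usual quantity reported; the data below are random placeholders.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    # Placeholder data: 200 trials x 500 voxels, labels = 8 motion directions
    rng = np.random.default_rng(1)
    patterns = rng.normal(size=(200, 500))
    directions = rng.integers(0, 8, size=200)

    # Above-chance cross-validated accuracy is taken as evidence that the voxel
    # patterns carry direction-selective information.
    clf = LinearSVC(dual=False, max_iter=5000)
    accuracy = cross_val_score(clf, patterns, directions, cv=5).mean()
    print(f"mean decoding accuracy: {accuracy:.3f} (chance = 0.125)")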
Impact of feature saliency on visual category learning
Hammer, Rubi
2015-01-01
People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies. PMID:25954220
Osaka, Naoyuki; Matsuyoshi, Daisuke; Ikeda, Takashi; Osaka, Mariko
2010-03-10
The recent development of cognitive neuroscience has invited inference about the neurosensory events underlying the experience of visual arts involving implied motion. We report a functional magnetic resonance imaging study demonstrating activation of the human extrastriate motion-sensitive cortex by static images showing implied motion due to instability. We used static line-drawing cartoons of humans by Hokusai Katsushika (called 'Hokusai Manga'), an outstanding Japanese cartoonist as well as a famous Ukiyoe artist. We found that 'Hokusai Manga' implying motion by depicting human bodies engaged in challenging tonic postures significantly activated the motion-sensitive visual cortex, including MT+, in the human extrastriate cortex, whereas illustrations that do not imply motion, whether of humans or objects, did not activate these areas under the same tasks. We conclude that the motion-sensitive extrastriate cortex would be a critical region for the perception of implied motion in instability.
VPython: Writing Real-time 3D Physics Programs
NASA Astrophysics Data System (ADS)
Chabay, Ruth
2001-06-01
VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array-processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL and runs on Windows, Linux, and Macintosh.
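A minimal example in the spirit of the programs described above: a bouncing ball whose code is purely computational, with the 3D rendering handled by the graphics module. This sketch uses the import names of the current vpython package (the module described in the 2001 paper was imported as visual); all constants are arbitrary.

    from vpython import sphere, box, vector, rate, color

    floor = box(pos=vector(0, -1, 0), size=vector(10, 0.2, 10))
    ball = sphere(pos=vector(0, 5, 0), radius=0.5, color=color.red)
    velocity = vector(0, 0, 0)
    g = vector(0, -9.8, 0)        # gravitational acceleration, m/s^2
    dt = 0.01

    while True:
        rate(100)                          # cap the loop at 100 iterations per second
        velocity = velocity + g * dt       # update velocity from gravity
        ball.pos = ball.pos + velocity * dt
        if ball.pos.y < floor.pos.y + ball.radius:
            velocity.y = -velocity.y       # elastic bounce off the floor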
Affective Education for Visually Impaired Children.
ERIC Educational Resources Information Center
Locke, Don C.; Gerler, Edwin R., Jr.
1981-01-01
Evaluated the effectiveness of the Human Development Program (HDP) and the Developing Understanding of Self and Others (DUSO) program used with visually impaired children. Although HDP and DUSO affected the behavior of visually impaired children, they did not have any effect on children's attitudes toward school. (RC)
Architectural Visualization of C/C++ Source Code for Program Comprehension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panas, T; Epperly, T W; Quinlan, D
2006-09-01
Structural and behavioral visualization of large-scale legacy systems to aid program comprehension is still a major challenge. The challenge is even greater when applications are implemented in flexible and expressive languages such as C and C++. In this paper, we consider visualization of static and dynamic aspects of large-scale scientific C/C++ applications. For our investigation, we reuse and integrate specialized analysis and visualization tools. Furthermore, we present a novel layout algorithm that permits a compressive architectural view of a large-scale software system. Our layout is unique in that it allows traditional program visualizations, i.e., graph structures, to be seen in relation to the application's file structure.
A visual programming environment for the Navier-Stokes computer
NASA Technical Reports Server (NTRS)
Tomboulian, Sherryl; Crockett, Thomas W.; Middleton, David
1988-01-01
The Navier-Stokes computer is a high-performance, reconfigurable, pipelined machine designed to solve large computational fluid dynamics problems. Due to the complexity of the architecture, development of effective, high-level language compilers for the system appears to be a very difficult task. Consequently, a visual programming methodology has been developed which allows users to program the system at an architectural level by constructing diagrams of the pipeline configuration. These schematic program representations can then be checked for validity and automatically translated into machine code. The visual environment is illustrated by using a prototype graphical editor to program an example problem.
Yang, Yea-Ru; Chen, Yi-Hua; Chang, Heng-Chih; Chan, Rai-Chi; Wei, Shun-Hwa; Wang, Ray-Yau
2015-10-01
We investigated the effects of a computer-generated interactive visual feedback training program on the recovery from pusher syndrome in stroke patients. Assessor-blinded, pilot randomized controlled study. A total of 12 stroke patients with pusher syndrome were randomly assigned to either the experimental group (N = 7, computer-generated interactive visual feedback training) or control group (N = 5, mirror visual feedback training). The scale for contraversive pushing for severity of pusher syndrome, the Berg Balance Scale for balance performance, and the Fugl-Meyer assessment scale for motor control were the outcome measures. Patients were assessed pre- and posttraining. A comparison of pre- and posttraining assessment results revealed that both training programs led to the following significant changes: decreased severity of pusher syndrome scores (decreases of 4.0 ± 1.1 and 1.4 ± 1.0 in the experimental and control groups, respectively); improved balance scores (increases of 14.7 ± 4.3 and 7.2 ± 1.6 in the experimental and control groups, respectively); and higher scores for lower extremity motor control (increases of 8.4 ± 2.2 and 5.6 ± 3.3 in the experimental and control groups, respectively). Furthermore, the computer-generated interactive visual feedback training program produced significantly better outcomes in the improvement of pusher syndrome (p < 0.01) and balance (p < 0.05) compared with the mirror visual feedback training program. Although both training programs were beneficial, the computer-generated interactive visual feedback training program more effectively aided recovery from pusher syndrome compared with mirror visual feedback training. © The Author(s) 2014.
Ventromedial Prefrontal Cortex Is Necessary for Normal Associative Inference and Memory Integration.
Spalding, Kelsey N; Schlichting, Margaret L; Zeithamova, Dagmar; Preston, Alison R; Tranel, Daniel; Duff, Melissa C; Warren, David E
2018-04-11
The ability to flexibly combine existing knowledge in response to novel circumstances is highly adaptive. However, the neural correlates of flexible associative inference are not well characterized. Laboratory tests of associative inference have measured memory for overlapping pairs of studied items (e.g., AB, BC) and for nonstudied pairs with common associates (i.e., AC). Findings from functional neuroimaging and neuropsychology suggest the ventromedial prefrontal cortex (vmPFC) may be necessary for associative inference. Here, we used a neuropsychological approach to test the necessity of vmPFC for successful memory-guided associative inference in humans using an overlapping pairs associative memory task. We predicted that individuals with focal vmPFC damage (n = 5; 3F, 2M) would show impaired inferential memory but intact non-inferential memory. Performance was compared with normal comparison participants (n = 10; 6F, 4M). Participants studied pairs of visually presented objects including overlapping pairs (AB, BC) and nonoverlapping pairs (XY). Participants later completed a three-alternative forced-choice recognition task for studied pairs (AB, BC, XY) and inference pairs (AC). As predicted, the vmPFC group had intact memory for studied pairs but significantly impaired memory for inferential pairs. These results are consistent with the perspective that the vmPFC is necessary for memory-guided associative inference, indicating that the vmPFC is critical for adaptive abilities that require application of existing knowledge to novel circumstances. Additionally, vmPFC damage was associated with unexpectedly reduced memory for AB pairs post-inference, which could potentially reflect retroactive interference. Together, these results reinforce an emerging understanding of a role for the vmPFC in brain networks supporting associative memory processes. SIGNIFICANCE STATEMENT We live in a constantly changing environment, so the ability to adapt our knowledge to support understanding of new circumstances is essential. One important adaptive ability is associative inference, which allows us to extract shared features from distinct experiences and relate them. For example, if we see a woman holding a baby, and later see a man holding the same baby, then we might infer that the two adults are a couple. Despite the importance of associative inference, the brain systems necessary for this ability are not known. Here, we report that damage to human ventromedial prefrontal cortex (vmPFC) disproportionately impairs associative inference. Our findings show the necessity of the vmPFC for normal associative inference and memory integration. Copyright © 2018 the authors.
The impact of modality and working memory capacity on achievement in a multimedia environment
NASA Astrophysics Data System (ADS)
Stromfors, Charlotte M.
This study explored the impact of working memory capacity and student learning in a dual modality, multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills in reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects due to the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. The scores on the Figural Intersection Test were used to separate subjects into three levels in terms of their measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format. Subjects had a minimum amount of flexibility within each of five segments, but not between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring an audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities. The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.
A Review of Generic Program Visualization Systems for Introductory Programming Education
ERIC Educational Resources Information Center
Sorva, Juha; Karavirta, Ville; Malmi, Lauri
2013-01-01
This article is a survey of program visualization systems intended for teaching beginners about the runtime behavior of computer programs. Our focus is on generic systems that are capable of illustrating many kinds of programs and behaviors. We inclusively describe such systems from the last three decades and review findings from their empirical…
ERIC Educational Resources Information Center
Smith, Philip A.; Webb, Geoffrey I.
2000-01-01
Describes "Glass-box Interpreter" a low-level program visualization tool called Bradman designed to provide a conceptual model of C program execution for novice programmers and makes visible aspects of the programming process normally hidden from the user. Presents an experiment that tests the efficacy of Bradman, and provides…
Seeing it my way: a case of a selective deficit in inhibiting self-perspective.
Samson, Dana; Apperly, Ian A; Kathirgamanathan, Umalini; Humphreys, Glyn W
2005-05-01
Little is known about the functional and neural architecture of social reasoning, one major obstacle being that we crucially lack the relevant tools to test potentially different social reasoning components. In the case of belief reasoning, previous studies have tried to separate the processes involved in belief reasoning per se from those involved in the processing of the high incidental demands such as the working memory demands of typical belief tasks. In this study, we developed new belief tasks in order to disentangle, for the first time, two perspective taking components involved in belief reasoning: (i) the ability to inhibit one's own perspective (self-perspective inhibition); and (ii) the ability to infer someone else's perspective as such (other-perspective taking). The two tasks had similar demands in other-perspective taking as they both required the participant to infer that a character has a false belief about an object's location. However, the tasks varied in the self-perspective inhibition demands. In the task with the lowest self-perspective inhibition demands, at the time the participant had to infer the character's false belief, he or she had no idea what the new object's location was. In contrast, in the task with the highest self-perspective inhibition demands, at the time the participant had to infer the character's false belief, he or she knew where the object was actually located (and this knowledge had thus to be inhibited). The two tasks were presented to a stroke patient, WBA, with right prefrontal and temporal damage. WBA performed well in the low-inhibition false-belief task but showed striking difficulty in the task placing high self-perspective inhibition demands, showing a selective deficit in inhibiting self-perspective. WBA also made egocentric errors in other social and visual perspective taking tasks, indicating a difficulty with belief attribution extending to the attribution of emotions, desires and visual experiences to other people. The case of WBA, together with the recent report of three patients impaired in belief reasoning even when self-perspective inhibition demands were reduced, provide the first neuropsychological evidence that the inhibition of one's own point of view and the ability to infer someone else's point of view rely on distinct neural and functional processes.
Assessing NARCCAP climate model effects using spatial confidence regions.
French, Joshua P; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference.
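To make the pointwise-versus-simultaneous distinction concrete, here is a schematic Python sketch: per-grid-cell t-tests flag cells at the nominal level, while a Bonferroni adjustment is one simple (conservative) way to control the error rate across all cells simultaneously. This is only a stand-in for the confidence-region methodology used in the study; the gridded data are random placeholders.

    import numpy as np
    from scipy import stats

    # Toy setup: temperature effects from two model combinations on a 30x30 grid,
    # 20 replicates each (random placeholders, not NARCCAP output)
    rng = np.random.default_rng(2)
    a = rng.normal(0.0, 1.0, size=(20, 30, 30))
    b = rng.normal(0.2, 1.0, size=(20, 30, 30))

    # Pointwise two-sample t-test in every grid cell
    t, p = stats.ttest_ind(a, b, axis=0)

    alpha = 0.05
    n_cells = p.size
    pointwise_hits = (p < alpha).sum()               # no multiplicity control
    simultaneous_hits = (p < alpha / n_cells).sum()  # Bonferroni-adjusted

    print("cells flagged pointwise:   ", pointwise_hits)
    print("cells flagged simultaneous:", simultaneous_hits)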
Olsen, Anna; McDonald, David; Lenton, Simon; Dietze, Paul M
2018-05-01
The Bradford Hill criteria for assessing causality are useful in assembling evidence, including within complex policy analyses. In this paper, we argue that the implementation of take-home naloxone (THN) programs in Australia and elsewhere reflects sensible, evidence-based public health policy, despite the absence of randomised controlled trials. However, we also acknowledge that the debate around expanding access to THN would benefit from a careful consideration of causal inference and health policy impact of THN program implementation. Given the continued debate around expanding access to THN, and the relatively recent access to new data from implementation studies, two research groups independently conducted Bradford Hill analyses in order to carefully consider causal inference and health policy impact. Hill's criteria offer a useful analytical tool for interpreting current evidence on THN programs and making decisions about the (un)certainty of THN program safety and effectiveness. © 2017 Australasian Professional Society on Alcohol and other Drugs.
The CCH Vision Stimulation Program for Infants with Low Vision: Preliminary Results.
ERIC Educational Resources Information Center
Leguire, L. E.; And Others
1992-01-01
This study evaluated the Columbus (Ohio) Children's Hospital vision stimulation program, involving in-home intervention with 15 visually impaired infants. Comparison with controls indicated benefits of appropriate vision stimulation in increasing the neural foundation for vision and visual-motor function in visually impaired infants. (Author/DB)
The role of the hippocampus in transitive inference
Zalesak, Martin; Heckers, Stephan
2009-01-01
Transitive inference (TI) is the ability to infer the relationship between items (e.g., A>C) after having learned a set of premise pairs (e.g., A>B and B>C). Previous studies in humans have identified a distributed neural network, including cortex, hippocampus, and thalamus, during TI judgments. We studied two aspects of TI using fMRI of subjects who had acquired the 6-item sequence (A>B>C>D>E>F) of visual stimuli. First, the identification of novel pairs not containing end items (i.e., B>D, C>E, B>E) was associated with greater left hippocampal activation when compared to the identification of novel pairs containing end items A and F. This demonstrates that the identification of stimulus pairs requiring the flexible representation of a sequence is associated with hippocampal activation. Second, for the three novel pairs devoid of end items we found greater right hippocampal activation for pairs B>D and C>E compared with pair B>E. This indicates that TI decisions on pairs derived from more adjacent items in the sequence are associated with greater hippocampal activation. Hippocampal activation thus scales with the degree of relational processing necessary for TI judgments. Both findings confirm a role of the hippocampus in transitive inference in humans. PMID:19216061
Biologically Inspired Model for Inference of 3D Shape from Texture
Gomez, Olman; Neumann, Heiko
2016-01-01
A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387
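The front end described above (orientation- and spatial-frequency-selective filtering followed by pooling) can be sketched with an off-the-shelf Gabor filter bank; the snippet below (Python, scikit-image) is only an illustration of that first stage, not the authors' model, and the orientations and frequencies are arbitrary.

    import numpy as np
    from skimage import data
    from skimage.filters import gabor

    image = data.camera().astype(float)     # any grayscale texture image

    # Orientation-frequency decomposition: filter at several angles and frequencies
    responses = {}
    for theta in np.linspace(0, np.pi, 4, endpoint=False):   # 4 orientations
        for frequency in (0.1, 0.2, 0.4):                    # 3 spatial frequencies
            real, imag = gabor(image, frequency=frequency, theta=theta)
            responses[(round(float(theta), 2), frequency)] = np.sqrt(real**2 + imag**2)

    # Each entry is a per-pixel energy map for one orientation-frequency channel;
    # spatial gradients in these maps are the kind of signal such a model
    # integrates to infer local slant/tilt and globally consistent depth.
    for key, energy in sorted(responses.items()):
        print(key, f"mean energy = {energy.mean():.3f}")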
Causal strength induction from time series data.
Soo, Kevin W; Rottman, Benjamin M
2018-04-01
One challenge when inferring the strength of cause-effect relations from time series data is that the cause and/or effect can exhibit temporal trends. If temporal trends are not accounted for, a learner could infer that a causal relation exists when it does not, or even infer that there is a positive causal relation when the relation is negative, or vice versa. We propose that learners use a simple heuristic to control for temporal trends-that they focus not on the states of the cause and effect at a given instant, but on how the cause and effect change from one observation to the next, which we call transitions. Six experiments were conducted to understand how people infer causal strength from time series data. We found that participants indeed use transitions in addition to states, which helps them to reach more accurate causal judgments (Experiments 1A and 1B). Participants use transitions more when the stimuli are presented in a naturalistic visual format than a numerical format (Experiment 2), and the effect of transitions is not driven by primacy or recency effects (Experiment 3). Finally, we found that participants primarily use the direction in which variables change rather than the magnitude of the change for estimating causal strength (Experiments 4 and 5). Collectively, these studies provide evidence that people often use a simple yet effective heuristic for inferring causal strength from time series data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
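A small numerical illustration of the transitions heuristic (my own sketch, not the authors' materials): when cause and effect share an upward trend but are otherwise unrelated, a state-based estimate of their relation is spuriously high, while correlating transitions (first differences) largely removes the trend.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    trend = np.linspace(0, 10, n)             # shared temporal trend

    # Cause and effect are causally unrelated noise, but both drift upward
    cause = trend + rng.normal(0, 1, n)
    effect = trend + rng.normal(0, 1, n)

    def corr(x, y):
        return float(np.corrcoef(x, y)[0, 1])

    state_based = corr(cause, effect)                           # raw states
    transition_based = corr(np.diff(cause), np.diff(effect))    # changes

    print(f"state-based estimate:      {state_based:.2f}")      # spuriously high
    print(f"transition-based estimate: {transition_based:.2f}") # near zero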
Accident/Mishap Investigation System
NASA Technical Reports Server (NTRS)
Keller, Richard; Wolfe, Shawn; Gawdiak, Yuri; Carvalho, Robert; Panontin, Tina; Williams, James; Sturken, Ian
2007-01-01
InvestigationOrganizer (IO) is a Web-based collaborative information system that integrates the generic functionality of a database, a document repository, a semantic hypermedia browser, and a rule-based inference system with specialized modeling and visualization functionality to support accident/mishap investigation teams. This accessible, online structure is designed to support investigators by allowing them to make explicit, shared, and meaningful links among evidence, causal models, findings, and recommendations.
The Use of Spatial Cognition in Graph Interpretation
2007-08-01
Mathematics education has emphasized the importance of proactively teaching students of all ages to interpret graphs and use them to make inferences (NCTM…Mathematics. Reston, VA: National Council of Teachers of Mathematics; Oh, S., & Kim, M., 2004, on the role of spatial working memory in visual…; Schunn et al., in press, on graph use in learning science). Not coincidentally, in developing its recent national standards, the National Council of Teachers of Mathematics…
Li, B; Chan, E C Y
2003-01-01
We present an approach to customize the sample submission process for high-throughput purification (HTP) of combinatorial parallel libraries using preparative liquid chromatography electrospray ionization mass spectrometry. In this study, Visual Basic and Visual Basic for Applications programs were developed using Microsoft Visual Basic 6 and Microsoft Excel 2000, respectively. These programs are subsequently applied for the seamless electronic submission and handling of data for HTP. Functions were incorporated into these programs with which medicinal chemists can perform on-line verification of the purification status and on-line retrieval of post-purification data. The application of these user-friendly and cost-effective programs in our HTP technology has greatly increased our work efficiency by reducing paperwork and the manual manipulation of data.
Monitoring of adult Lost River and shortnose suckers in Clear Lake Reservoir, California, 2008–2010
Hewitt, David A.; Hayes, Brian S.
2013-01-01
Problems with inferring status and population dynamics from size composition data can be overcome by a robust capture-recapture program that follows the histories of PIT-tagged individuals. Inferences from such a program are currently hindered by poor detection rates during spawning seasons with low flows in Willow Creek, which indicate that a key assumption of capture-recapture models is violated. We suggest that the most straightforward solution to this issue would be to collect detection data during the spawning season using remote PIT tag antennas in the strait between the west and east lobes of the lake.
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A; Robert, Christian P; Marin, Jean-Michel; Balding, David J; Guillemaud, Thomas; Estoup, Arnaud
2008-12-01
Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc.
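DIY ABC's own interface is not reproduced here, but the rejection scheme at the core of approximate Bayesian computation fits in a few lines of Python: draw parameters from the prior, simulate data under the scenario, and keep draws whose summary statistics land close to the observed ones. The model and tolerance below are toy choices for illustration.

    import numpy as np

    rng = np.random.default_rng(4)

    # "Observed" data: 100 draws from a Normal with unknown mean (true mean = 2.0)
    observed = rng.normal(2.0, 1.0, size=100)
    observed_summary = observed.mean()

    def simulate(mu, size=100):
        """Simulator for candidate parameter mu (unit variance assumed known)."""
        return rng.normal(mu, 1.0, size=size)

    accepted = []
    epsilon = 0.05                                   # tolerance on the summary
    for _ in range(20000):
        mu = rng.uniform(-5, 5)                      # draw from the prior
        summary = simulate(mu).mean()                # summarize simulated data
        if abs(summary - observed_summary) < epsilon:
            accepted.append(mu)                      # keep parameters that fit

    posterior = np.array(accepted)
    print(f"accepted {posterior.size} draws; posterior mean ~ {posterior.mean():.2f}")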
Stan: A Probabilistic Programming Language
Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...
2017-01-01
Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.
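A minimal end-to-end example of the workflow the abstract describes, assuming the PyStan 2.x interface (pystan.StanModel / sampling); the toy model simply estimates a normal mean and is not tied to any particular application.

    import pystan  # PyStan 2.x interface assumed

    model_code = """
    data {
      int<lower=0> N;
      vector[N] y;
    }
    parameters {
      real mu;
      real<lower=0> sigma;
    }
    model {
      y ~ normal(mu, sigma);   // likelihood; improper flat priors by default
    }
    """

    data = {"N": 5, "y": [1.2, 0.7, 2.3, 1.9, 1.4]}

    # Compile the model, then draw posterior samples with NUTS (the default sampler)
    sm = pystan.StanModel(model_code=model_code)
    fit = sm.sampling(data=data, iter=2000, chains=4)
    print(fit)                       # summary: posterior means, sd, n_eff, Rhat
    mu_draws = fit.extract()["mu"]   # raw posterior draws for mu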
Stuart, Samuel; Lord, Sue; Galna, Brook; Rochester, Lynn
2018-04-01
Gait impairment is a core feature of Parkinson's disease (PD) with implications for falls risk. Visual cues improve gait in PD, but the underlying mechanisms are unclear. Evidence suggests that attention and vision play an important role; however, the relative contribution from each is unclear. Measurement of visual exploration (specifically saccade frequency) during gait allows for real-time measurement of attention and vision. Understanding how visual cues influence visual exploration may allow inferences of the underlying mechanisms to response which could help to develop effective therapeutics. This study aimed to examine saccade frequency during gait in response to a visual cue in PD and older adults and investigate the roles of attention and vision in visual cue response in PD. A mobile eye-tracker measured saccade frequency during gait in 55 people with PD and 32 age-matched controls. Participants walked in a straight line with and without a visual cue (50 cm transverse lines) presented under single task and dual-task (concurrent digit span recall). Saccade frequency was reduced when walking in PD compared to controls; however, visual cues ameliorated saccadic deficit. Visual cues significantly increased saccade frequency in both PD and controls under both single task and dual-task. Attention rather than visual function was central to saccade frequency and gait response to visual cues in PD. In conclusion, this study highlights the impact of visual cues on visual exploration when walking and the important role of attention in PD. Understanding these complex features will help inform intervention development. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Profile of Personnel Preparation Programs in Visual Impairment and Their Faculty
ERIC Educational Resources Information Center
Ambrose-Zaken, Grace; Bozeman, Laura
2010-01-01
This survey of university personnel preparation programs in visual impairment in the United States and Canada investigated the demographic characteristics of faculty members and programs, instructional models, and funding formulas in 2007-08. It found that many programs used some form of distance education and that there was a correlation between…
2018-02-15
...address the problem that probabilistic inference algorithms are difficult and tedious to implement, by expressing them in terms of a small number of...building blocks, which are automatic transformations on probabilistic programs. On one hand, our curation of these building blocks reflects the way human...reasoning with low-level computational optimization, so the speed and accuracy of the generated solvers are competitive with state-of-the-art systems.
Using Visual Basic to Teach Programming for Geographers.
ERIC Educational Resources Information Center
Slocum, Terry A.; Yoder, Stephen C.
1996-01-01
Outlines reasons why computer programming should be taught to geographers. These include experience using macro (scripting) languages and sophisticated visualization software, and developing a deeper understanding of general hardware and software capabilities. Discusses the distinct advantages and few disadvantages of the programming language…
NASA Astrophysics Data System (ADS)
Whitford, Dennis J.
2002-05-01
Ocean waves are the most recognized phenomena in oceanography. Unfortunately, undergraduate study of ocean wave dynamics and forecasting involves mathematics and physics and therefore can pose difficulties with some students because of the subject's interrelated dependence on time and space. Verbal descriptions and two-dimensional illustrations are often insufficient for student comprehension. Computer-generated visualization and animation offer a visually intuitive and pedagogically sound medium to present geoscience, yet there are very few oceanographic examples. A two-part article series is offered to explain ocean wave forecasting using computer-generated visualization and animation. This paper, Part 1, addresses forecasting of sea wave conditions and serves as the basis for the more difficult topic of swell wave forecasting addressed in Part 2. Computer-aided visualization and animation, accompanied by oral explanation, are a welcome pedagogical supplement to more traditional methods of instruction. In this article, several MATLAB® software programs have been written to visualize and animate development and comparison of wave spectra, wave interference, and forecasting of sea conditions. These programs also set the stage for the more advanced and difficult animation topics in Part 2. The programs are user-friendly, interactive, easy to modify, and developed as instructional tools. By using these software programs, teachers can enhance their instruction of these topics with colorful visualizations and animation without requiring an extensive background in computer programming.
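In the same spirit as the MATLAB programs described above (though not those programs themselves), a short Python/NumPy sketch of two-component wave interference shows the group (beat) structure that such visualizations make intuitive; the amplitudes and frequencies are arbitrary.

    import numpy as np
    import matplotlib.pyplot as plt

    # Two wave components with slightly different frequencies
    t = np.linspace(0, 120, 2000)          # time, seconds
    a1, f1 = 1.0, 0.10                     # amplitude (m), frequency (Hz)
    a2, f2 = 1.0, 0.11

    eta1 = a1 * np.sin(2 * np.pi * f1 * t)
    eta2 = a2 * np.sin(2 * np.pi * f2 * t)
    total = eta1 + eta2                    # superposition shows wave groups (beats)

    plt.plot(t, total, label="sum of components")
    plt.plot(t, eta1, alpha=0.4, label="component 1")
    plt.plot(t, eta2, alpha=0.4, label="component 2")
    plt.xlabel("time (s)")
    plt.ylabel("surface elevation (m)")
    plt.legend()
    plt.show()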
A Logic Programming Testbed for Inductive Thought and Specification.
ERIC Educational Resources Information Center
Neff, Norman D.
This paper describes applications of logic programming technology to the teaching of the inductive method in computer science and mathematics. It discusses the nature of inductive thought and its place in those fields of inquiry, arguing that a complete logic programming system for supporting inductive inference is not only feasible but necessary.…
Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M
2017-11-01
The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.
NASA Astrophysics Data System (ADS)
Motani, Ryosuke
2005-01-01
Ichthyosaurs were a group of Mesozoic marine reptiles that evolved fish-shaped body outlines. They are unique in several anatomical characters, including the possession of enormous eyeballs sometimes exceeding 25 cm and an enlarged manus with sometimes up to 20 bones in a single digit, or 10 digits per manus. They are also unique in that their biology has been studied from the perspective of physical constraints, which allowed estimation of such characteristics as optimal cruising speed, visual sensitivity, and even possible basal metabolic rate ranges. These functional inferences, although based on physical principles, obviously contain errors arising from the limitations of fossilized data, but are necessarily stronger than the commonly made inferences based on superficial correlations among quantities without mechanical or optical explanations for why such correlations exist.
Image pattern recognition supporting interactive analysis and graphical visualization
NASA Technical Reports Server (NTRS)
Coggins, James M.
1992-01-01
Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
Visualizing time-related data in biology, a review
Secrier, Maria; Schneider, Reinhard
2014-01-01
Time is of the essence in biology as in so much else. For example, monitoring disease progression or the timing of developmental defects is important for the processes of drug discovery and therapy trials. Furthermore, an understanding of the basic dynamics of biological phenomena that are often strictly time regulated (e.g. circadian rhythms) is needed to make accurate inferences about the evolution of biological processes. Recent advances in technologies have enabled us to measure timing effects more accurately and in more detail. This has driven related advances in visualization and analysis tools that try to effectively exploit this data. Beyond timeline plots, notable attempts at more involved temporal interpretation have been made in recent years, but awareness of the available resources is still limited within the scientific community. Here, we review some advances in biological visualization of time-driven processes and consider how they aid data analysis and interpretation. PMID:23585583
Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko
2016-06-01
Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription factor driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Visualizing Internet routing changes.
Lad, Mohit; Massey, Dan; Zhang, Lixia
2006-01-01
Today's Internet provides a global data delivery service to millions of end users and routing protocols play a critical role in this service. It is important to be able to identify and diagnose any problems occurring in Internet routing. However, the Internet's sheer size makes this task difficult. One cannot easily extract out the most important or relevant routing information from the large amounts of data collected from multiple routers. To tackle this problem, we have developed Link-Rank, a tool to visualize Internet routing changes at the global scale. Link-Rank weighs links in a topological graph by the number of routes carried over each link and visually captures changes in link weights in the form of a topological graph with adjustable size. Using Link-Rank, network operators can easily observe important routing changes from massive amounts of routing data, discover otherwise unnoticed routing problems, understand the impact of topological events, and infer root causes of observed routing changes.
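The core bookkeeping behind such a tool (weighting each link by the number of routes carried over it and watching those weights change) can be sketched in a few lines of Python; the route table below is invented for illustration and this is not the Link-Rank implementation.

    from collections import Counter

    # Each route is the AS-level path taken by one prefix (toy data)
    routes = {
        "10.0.0.0/8":     ["AS1", "AS2", "AS4"],
        "172.16.0.0/12":  ["AS1", "AS2", "AS5"],
        "192.168.0.0/16": ["AS1", "AS3", "AS5"],
    }

    def link_weights(route_table):
        """Count how many routes traverse each directed link."""
        weights = Counter()
        for path in route_table.values():
            for a, b in zip(path, path[1:]):
                weights[(a, b)] += 1
        return weights

    before = link_weights(routes)
    routes["10.0.0.0/8"] = ["AS1", "AS3", "AS4"]   # a routing change shifts one prefix
    after = link_weights(routes)

    # Links whose weight changed are what a Link-Rank-style view would highlight
    for link in sorted(set(before) | set(after)):
        delta = after[link] - before[link]
        if delta:
            print(link, f"{delta:+d} routes")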
Thinking in z-space: flatness and spatial narrativity
NASA Astrophysics Data System (ADS)
Zone, Ray
2012-03-01
Now that digital technology has accessed the Z-space in cinema, narrative artistry is at a loss. Motion picture professionals no longer can readily resort to familiar tools. A new language and new linguistics for Z-axis storytelling are necessary. After first examining the roots of monocular thinking in painting, prior modes of visual narrative in two-dimensional cinema obviating true binocular stereopsis can be explored, particularly montage, camera motion and depth of field, with historic examples. Special attention is paid to the manner in which monocular cues for depth have been exploited to infer depth on a planar screen. Both the artistic potential and visual limitations of actual stereoscopic depth as a filmmaking language are interrogated. After an examination of the historic basis of monocular thinking in visual culture, a context for artistic exploration of the use of the z-axis as a heightened means of creating dramatic and emotional impact upon the viewer is illustrated.
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image with a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' change in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention. Moreover, it indicates the plausibility of utilizing eye-tracking data to identify targets.
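The abstract does not spell out the 2D-Viterbi decoder; as a reference point, a standard one-dimensional Viterbi pass over a sequence of encoded feature symbols might look like the sketch below, with placeholder transition and emission probabilities (the paper's algorithm extends this dynamic program to two dimensions).

```python
import numpy as np

def viterbi(obs, log_init, log_trans, log_emit):
    """Most probable hidden-state path for an observation sequence (log domain)."""
    n_states = log_init.shape[0]
    T = len(obs)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_init + log_emit[:, obs[0]]
    for t in range(1, T):
        for s in range(n_states):
            cand = score[t - 1] + log_trans[:, s]
            back[t, s] = np.argmax(cand)
            score[t, s] = cand[back[t, s]] + log_emit[s, obs[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two hidden states ("salient", "background") and three observation symbols.
log_init = np.log([0.5, 0.5])
log_trans = np.log([[0.8, 0.2], [0.3, 0.7]])
log_emit = np.log([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 0, 2, 2, 1], log_init, log_trans, log_emit))
```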
Interactive processing of contrastive expressions by Russian children
Sekerina, Irina A.; Trueswell, John C.
2013-01-01
Children's ability to interpret color adjective noun phrases (e.g., red butterfly) as contrastive was examined in an eyetracking study with 6-year-old Russian children. Pitch accent placement (on the adjective red, or on the noun butterfly) was compared within a visual context containing two red referents (a butterfly and a fox) when only one of them had a contrast member (a purple butterfly) or when both had a contrast member (a purple butterfly and a grey fox). Contrastiveness was enhanced by the Russian-specific ‘split constituent’ construction (e.g., Red put butterfly . . .) in which a contrastive interpretation of the color term requires pitch accent on the adjective, with the nonsplit sentences serving as control. Regardless of the experimental manipulations, children had to wait until hearing the noun (butterfly) to identify the referent, even in splits. This occurred even under conditions for which the prosody and the visual context allow adult listeners to infer the relevant contrast set and anticipate the referent prior to hearing the noun (accent on the adjective in 1-Contrast scenes). Pitch accent on the adjective did facilitate children's referential processing, but only for the nonsplit constituents. Moreover, visual contexts that encouraged the correct contrast set (1-Contrast) only facilitated referential processing after hearing the noun, even in splits. Further analyses showed that children can anticipate the reference like adults but only when the contrast set is made salient by the preceding supportive discourse, that is, when the inference about the intended contrast set is provided by the preceding utterance. PMID:24465066
Deaf children's use of clear visual cues in mindreading.
Hao, Jian; Su, Yanjie
2014-11-01
Previous studies show that typically developing 4-year-old children can understand other people's false beliefs but that deaf children of hearing families have difficulty in understanding false beliefs until the age of approximately 13. Because false beliefs are implicit mental states that are not expressed through clear visual cues in standard false belief tasks, the present study examines the hypothesis that the deaf children's developmental delay in understanding false beliefs may reflect their difficulty in understanding a spectrum of mental states that are not expressed through clear visual cues. Nine- to 13-year-old deaf children of hearing families and 4- to 6-year-old typically developing children completed false belief tasks and emotion recognition tasks under different cue conditions. The results indicated that after controlling for the effect of the children's language abilities, the deaf children inferred other people's false beliefs as accurately as the typically developing children when other people's false beliefs were clearly expressed through their eye-gaze direction. However, the deaf children performed worse than the typically developing children when asked to infer false beliefs with ambiguous or no eye-gaze cues. Moreover, the deaf children were capable of recognizing other people's emotions that were clearly conveyed by their facial or body expressions. The results suggest that although theory-based or simulation-based mental state understanding is typical of hearing children's theory of mind mechanism, for deaf children of hearing families, clear cue-based mental state understanding may be their specific theory of mind mechanism. Copyright © 2014 Elsevier Ltd. All rights reserved.
Dendroscope: An interactive viewer for large phylogenetic trees
Huson, Daniel H; Richter, Daniel C; Rausch, Christian; Dezulian, Tobias; Franz, Markus; Rupp, Regula
2007-01-01
Background Research in evolution requires software for visualizing and editing phylogenetic trees, for increasingly large datasets, such as arise in expression analysis or metagenomics, for example. It would be desirable to have a program that provides these services in an efficient and user-friendly way, and that can be easily installed and run on all major operating systems. Although a large number of tree visualization tools are freely available, some as a part of more comprehensive analysis packages, all have drawbacks in one or more domains. They either lack some of the standard tree visualization techniques or basic graphics and editing features, or they are restricted to small trees containing only tens of thousands of taxa. Moreover, many programs are difficult to install or are not available for all common operating systems. Results We have developed a new program, Dendroscope, for the interactive visualization and navigation of phylogenetic trees. The program provides all standard tree visualizations and is optimized to run interactively on trees containing hundreds of thousands of taxa. The program provides tree editing and graphics export capabilities. To support the inspection of large trees, Dendroscope offers a magnification tool. The software is written in Java 1.4 and installers are provided for Linux/Unix, MacOS X and Windows XP. Conclusion Dendroscope is a user-friendly program for visualizing and navigating phylogenetic trees, for both small and large datasets. PMID:18034891
ERIC Educational Resources Information Center
Yildiz, Mehmet Ali; Duy, Baki
2013-01-01
The purpose of this study was to investigate the effectiveness of an interpersonal communication skills psycho-education program to improve empathy and communication skills of visually impaired adolescents. Participants of the study were sixteen early adolescents schooling in an elementary school for visually impaired youth in Diyarbakir. The…
Exploring the Disjunctures between Theory and Practice in Community College Visual Arts Programs
ERIC Educational Resources Information Center
Holland, Arnold
2012-01-01
This study explored the perceptions of ten community college visual arts faculty in five different community college settings with regard to the theory and practice disjunctures they were experiencing in their roles as instructors teaching foundational level courses within visual arts programs. The study illuminated the responses of community…
A Parent Training Program for Increasing the Visual Development of School-Aged Children.
ERIC Educational Resources Information Center
Dikowski, Timothy J.
This practicum provided training for 50 parents of children receiving clinic services for visual processing disorders and provided information on visual disorders to the children's teachers. The 8-month program involved 13 parent training sessions. These sessions focused on such topics as: current research findings on vision; identification of…
Preservice Preparation of Teachers of the Visually Handicapped in a Rural State.
ERIC Educational Resources Information Center
Alcorn, Dewaine A.
The University of Nebraska (with the help of the University of Northern Colorado) has established a program to provide practicing teachers with training qualifying them for the teaching certificate endorsement for teaching the visually handicapped. The program was especially designed to fill a state need for teachers of the visually handicapped…
Training Teachers of Visually Impaired Children in Rural Tennessee.
ERIC Educational Resources Information Center
Trent, S. D.
1992-01-01
A Tennessee program awards stipends to teachers to attend summer classes and a practicum and earn 18 hours of credit in education of children with visual impairments. The program requires that teachers have assurance from their superintendents that they will teach visually impaired students in their school systems after endorsement. (Author/JDD)
ERIC Educational Resources Information Center
Stelmack, Joan A.; Rinne, Stephen; Mancil, Rickilyn M.; Dean, Deborah; Moran, D'Anna; Tang, X. Charlene; Cummings, Roger; Massof, Robert W.
2008-01-01
A low vision rehabilitation program with a structured curriculum was evaluated in a randomized controlled trial. The treatment group demonstrated large improvements in self-reported visual function (reading, mobility, visual information processing, visual motor skills, and overall). The team approach and the protocols of the treatment program are…
Kim, Bumhwi; Ban, Sang-Woo; Lee, Minho
2013-10-01
Humans can efficiently perceive arbitrary visual objects based on an incremental learning mechanism with selective attention. This paper proposes a new task-specific top-down attention model to locate a target object based on its form and color representation, along with a bottom-up saliency based on the relativity of primitive visual features and some memory modules. In the proposed model, top-down bias signals corresponding to the target form and color features are generated, which draw preferential attention to the desired object through the proposed selective attention model operating in concomitance with the bottom-up saliency process. The object form and color representation and memory modules have an incremental learning mechanism together with a proper object feature representation scheme. The proposed model includes a Growing Fuzzy Topology Adaptive Resonance Theory (GFTART) network which plays two important roles in object color and form biased attention: one is to incrementally learn and memorize color and form features of various objects, and the other is to generate a top-down bias signal to localize a target object by focusing on the candidate local areas. Moreover, the GFTART network can be utilized for knowledge inference, which enables the perception of new unknown objects on the basis of the object form and color features stored in the memory during training. Experimental results show that the proposed model is successful in focusing on the specified target objects, in addition to the incremental representation and memorization of various objects in natural scenes. In addition, the proposed model properly infers new unknown objects based on the form and color features of previously trained objects. Copyright © 2013 Elsevier Ltd. All rights reserved.
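How a top-down bias can sharpen a bottom-up saliency map is easy to illustrate; the sketch below combines the two maps by similarity to a stored target feature vector, and the maps, features and mixing weight are assumptions for illustration rather than the GFTART model itself.

```python
import numpy as np

def biased_saliency(bottom_up, feature_map, target_feature, mix=0.5):
    """Combine a bottom-up saliency map with a top-down bias toward a target feature.

    bottom_up      : HxW saliency from primitive feature contrasts
    feature_map    : HxWxD per-pixel feature vectors (e.g. color/form descriptors)
    target_feature : D-vector memorized for the target object
    """
    # Top-down bias: similarity between each pixel's features and the stored target.
    diff = feature_map - target_feature
    bias = np.exp(-np.sum(diff ** 2, axis=-1))
    bias /= bias.max() + 1e-12
    combined = (1 - mix) * bottom_up + mix * bias
    return combined / (combined.max() + 1e-12)

rng = np.random.default_rng(0)
bu = rng.random((8, 8))
feats = rng.random((8, 8, 3))
target = feats[2, 5]                      # pretend this pixel belongs to the target
sal = biased_saliency(bu, feats, target)
print(np.unravel_index(sal.argmax(), sal.shape))   # location of the attention peak
```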
Lee, Wei-Po; Hsiao, Yu-Ting; Hwang, Wei-Che
2014-01-16
Reconstructing gene networks by experimentally testing all possible interactions between genes is tedious, so it has become a trend to adopt automated reverse-engineering procedures instead. Some evolutionary algorithms have been suggested for deriving network parameters. However, to infer large networks with evolutionary algorithms, two important issues must be addressed: premature convergence and high computational cost. To tackle the former problem and to enhance the performance of traditional evolutionary algorithms, it is advisable to use parallel-model evolutionary algorithms. To overcome the latter and to speed up the computation, cloud computing is a promising solution; the most popular mechanism is the MapReduce programming model, a fault-tolerant framework in which to implement parallel algorithms for inferring large gene networks. This work presents a practical framework to infer large gene networks by developing and parallelizing a hybrid GA-PSO optimization method. Our parallel method is extended to work with the Hadoop MapReduce programming model and is executed in different cloud computing environments. To evaluate the proposed approach, we use the well-known open-source software GeneNetWeaver to create several yeast S. cerevisiae sub-networks and use them to produce gene expression profiles. Experiments have been conducted and the results analyzed. They show that our parallel approach can be successfully used to infer networks with desired behaviors and that the computation time can be greatly reduced. Parallel population-based algorithms can effectively determine network parameters and they perform better than the widely used sequential algorithms in gene network inference. These parallel algorithms can be distributed to a cloud computing environment to speed up the computation. By coupling the parallel-model population-based optimization method with the parallel computational framework, high-quality solutions can be obtained within a relatively short time. This integrated approach is a promising way to infer large networks.
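The division of labor described here, independent fitness evaluations in a map phase followed by a global selection step, can be sketched locally with Python's multiprocessing as a stand-in for Hadoop MapReduce; the toy objective and the GA/PSO update rules below are illustrative, not the authors' implementation.

```python
import numpy as np
from multiprocessing import Pool

def fitness(params):
    """Toy objective: negative squared error against a known target parameter vector."""
    target = np.array([0.5, -1.0, 2.0])
    return -np.sum((np.asarray(params) - target) ** 2)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One particle-swarm velocity/position update."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.normal(size=(20, 3))               # 20 candidate parameter sets
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.full(20, -np.inf)
    with Pool(4) as pool:
        for generation in range(30):
            # "Map" phase: candidates are evaluated independently in parallel.
            fits = np.array(pool.map(fitness, list(pos)))
            improved = fits > pbest_fit
            pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
            gbest = pbest[np.argmax(pbest_fit)]  # "reduce" phase: pick the global best
            pos, vel = pso_step(pos, vel, pbest, gbest, rng=rng)
            # GA-style diversification: mutate the worst few candidates.
            worst = np.argsort(fits)[:3]
            pos[worst] += rng.normal(scale=0.5, size=(3, 3))
    print("best parameters:", np.round(gbest, 2))
```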
The cost of misremembering: Inferring the loss function in visual working memory.
Sims, Chris R
2015-03-04
Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
Data analysis using scale-space filtering and Bayesian probabilistic reasoning
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter
1991-01-01
This paper describes a program for analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the minerals in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes the probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlations to infer qualitative features outperforms a domain-independent algorithm that does not.
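The scale-space step, smoothing the curve with Gaussians of increasing width and keeping only the features that persist at coarse scales, can be sketched as follows; the synthetic DTA-like curve and the chosen scales are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def scale_space_peaks(curve, sigmas):
    """Locate local maxima of the curve at each smoothing scale."""
    return {s: argrelextrema(gaussian_filter1d(curve, s), np.greater)[0]
            for s in sigmas}

# Synthetic "DTA-like" curve: two thermal peaks plus measurement noise.
x = np.linspace(0, 10, 500)
rng = np.random.default_rng(0)
curve = (np.exp(-(x - 3) ** 2 / 0.1)
         + 0.6 * np.exp(-(x - 7) ** 2 / 0.3)
         + 0.05 * rng.normal(size=x.size))

peaks = scale_space_peaks(curve, sigmas=[1, 4, 16])
for s, idx in peaks.items():
    print(f"sigma={s:>2}: peaks near x =", np.round(x[idx], 2))
# Peaks that survive at coarse scales are the qualitatively robust features.
```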
An argument for mechanism-based statistical inference in cancer
Ochs, Michael; Price, Nathan D.; Tomasetti, Cristian; Younes, Laurent
2015-01-01
Cancer is perhaps the prototypical systems disease, and as such has been the focus of extensive study in quantitative systems biology. However, translating these programs into personalized clinical care remains elusive and incomplete. In this perspective, we argue that realizing this agenda—in particular, predicting disease phenotypes, progression and treatment response for individuals—requires going well beyond standard computational and bioinformatics tools and algorithms. It entails designing global mathematical models over network-scale configurations of genomic states and molecular concentrations, and learning the model parameters from limited available samples of high-dimensional and integrative omics data. As such, any plausible design should accommodate: biological mechanism, necessary for both feasible learning and interpretable decision making; stochasticity, to deal with uncertainty and observed variation at many scales; and a capacity for statistical inference at the patient level. This program, which requires a close, sustained collaboration between mathematicians and biologists, is illustrated in several contexts, including learning bio-markers, metabolism, cell signaling, network inference and tumorigenesis. PMID:25381197
Visual Basic programs for spreadsheet analysis.
Hunt, Bruce
2005-01-01
A collection of Visual Basic programs, entitled Function.xls, has been written for ground water spreadsheet calculations. This collection includes programs for calculating mathematical functions and for evaluating analytical solutions in ground water hydraulics and contaminant transport. Several spreadsheet examples are given to illustrate their use.
G. F. Parrot and the theory of unconscious inferences.
Allik, Jüri; Konstabel, Kenn
2005-01-01
In 1839, Georg Friedrich Parrot (1767-1852) published a short note about a peculiar visual phenomenon--the diminishing of the size of external objects situated at a relatively small distance from the window of a fast-moving train. For the explanation of this illusion, Parrot proposed a concept of unconscious inferences, a very rapid syllogistic conclusion from two premises, which anticipated the revival of Alhazen's theory of unconscious inferences by Hermann von Helmholtz, Wilhelm Wundt, and John Stuart Mill. He also advanced the notion that the speed of mental processes is not infinitely high and that it can be measured by means of systematic experimentation. Although Parrot was only partly correct in the description of the movement-induced changes of the perceived size, his general intention to understand basic mechanisms of the human mind was in harmony with the founding ideas of experimental psychology: it is possible to study the phenomena of the mind in the same general way that the physical world is studied, either in terms of mechanical or mathematical laws. 2005 Wiley Periodicals, Inc.
The role of interoceptive inference in theory of mind.
Ondobaka, Sasha; Kilner, James; Friston, Karl
2017-03-01
Inferring the intentions and beliefs of another is an ability that is fundamental for social and affiliative interactions. A substantial amount of empirical evidence suggests that making sense of another's intentional and belief states (i.e. theory of mind) relies on exteroceptive (e.g. visual and auditory) and proprioceptive (i.e. motor) signals. Yet, despite its pivotal role in the guidance of behaviour, the role of the observer's interoceptive (visceral) processing in understanding another's internal states remains unexplored. Predicting and keeping track of interoceptive bodily states - which inform intentions and beliefs that guide behaviour - is one of the fundamental purposes of the human brain. In this paper, we will focus on the role of interoceptive predictions, prescribed by the free energy principle, in making sense of internal states that cause another's behaviour. We will discuss how multimodal expectations induced at deep (high) hierarchical levels - that necessarily entail interoceptive predictions - contribute to inference about others that is at the heart of theory of mind. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Wei, Liew Tze; Sazilah, Salam
2012-01-01
This study investigated the effects of visual cues in multiple external representations (MER) environment on the learning performance of novices' program comprehension. Program codes and flowchart diagrams were used as dual representations in multimedia environment to deliver lessons on C-Programming. 17 field independent participants and 16 field…
ERIC Educational Resources Information Center
Simpkins, N. K.
2014-01-01
This article reports an investigation into undergraduate student experiences and views of a visual or "blocks" based programming language and its environment. An additional and central aspect of this enquiry is to substantiate the perceived degree of transferability of programming skills learnt within the visual environment to a typical…
Schweikert, Lorian E; Grace, Michael S
Fish that undergo ontogenetic migrations between habitats often encounter new light environments that require changes in the spectral sensitivity of the retina. For many fish, sensitivity of the retina changes to match the environmental spectrum, but the timing of retinal change relative to habitat shift remains unknown. Does retinal change in fish precede habitat shift, or is it a response to encountered changes in environmental light? Spectral sensitivity changes were examined over the development of the Atlantic tarpon (Megalops atlanticus) retina relative to ontogenetic shifts in habitat light. Opsin gene isoform expression and inferred chromophore use of visual pigments were examined over the course of M. atlanticus development. Spectral sensitivity of the retina was then determined by electroretinography and compared to the spectroradiometric measurements of habitat light encountered by M. atlanticus from juveniles to adults. These data, along with previously known microspectrophotometric measurements of sensitivity in M. atlanticus, indicate retinal spectral sensitivity that matches the dominant wavelengths of environmental light for juvenile and adult fish. For the intervening subadult stage, however, spectral sensitivity does not match the dominant wavelength of light it occupies but better matches the dominant wavelengths of light in the habitat of its forthcoming migration. These results first indicate that the relationship between environmental light spectrum and spectral sensitivity of the retina changes during M. atlanticus development and then suggest that such changes may be programmed to support visual anticipation of new photic environments.
NASA Astrophysics Data System (ADS)
Godbole, Saurabh
Traditionally, textual tools have been utilized to teach basic programming languages and paradigms. Research has shown that students tend to be visual learners. Using flowcharts, students can quickly understand the logic of their programs and visualize the flow of commands in the algorithm. Moreover, applying programming to physical systems through the use of a microcontroller to facilitate this type of learning can spark an interest in students to advance their programming knowledge to create novel applications. This study examined if freshmen college students' attitudes towards programming changed after completing a graphical programming lesson. Various attributes about students' attitudes were examined including confidence, interest, stereotypes, and their belief in the usefulness of acquiring programming skills. The study found that there were no statistically significant differences in attitudes either immediately following the session or after a period of four weeks.
Perception of the dynamic visual vertical during sinusoidal linear motion.
Pomante, A; Selen, L P J; Medendorp, W P
2017-10-01
The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical, as a proxy for the tilt percept, during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s² peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion. Copyright © 2017 the American Physiological Society.
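The Bayesian disambiguation argument can be made concrete with a one-dimensional sketch: the otoliths report the gravitoinertial force along the interaural axis, and a prior that prolonged accelerations are unlikely trades part of that signal off against tilt. All priors, noise levels and grid values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def map_tilt(f_measured, sigma_noise=0.3, sigma_acc=1.0, sigma_tilt=0.2):
    """MAP estimate of head roll tilt (rad) from an interaural gravitoinertial reading.

    Generative model (all values illustrative):
      f = G * sin(tilt) + acceleration + noise
      acceleration ~ N(0, sigma_acc^2)   # prolonged accelerations assumed unlikely
      tilt         ~ N(0, sigma_tilt^2)  # upright orientations assumed most likely
    """
    tilts = np.linspace(-0.5, 0.5, 2001)
    residual = f_measured - G * np.sin(tilts)   # part attributed to acceleration + noise
    log_post = (-residual ** 2 / (2 * (sigma_noise ** 2 + sigma_acc ** 2))
                - tilts ** 2 / (2 * sigma_tilt ** 2))
    return tilts[np.argmax(log_post)]

# A purely inertial 1.75 m/s^2 acceleration (zero true tilt) is partly misread as tilt,
# a somatogravic-like bias; full attribution to tilt would be roughly 10 degrees.
print(round(np.degrees(map_tilt(1.75)), 1))
```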
Low-cost USB interface for operant research using Arduino and Visual Basic.
Escobar, Rogelio; Pérez-Herrera, Carlos A
2015-03-01
This note describes the design of a low-cost interface using Arduino microcontroller boards and Visual Basic programming for operant conditioning research. The board executes a program written in the Arduino programming language that polls the state of the inputs and generates outputs in an operant chamber. This program communicates through a USB port with another program written in Visual Basic 2010 Express Edition running on a laptop, desktop, netbook computer, or even a tablet equipped with the Windows operating system. The Visual Basic program controls schedules of reinforcement and records real-time data. A single Arduino board can be used to control a total of 52 input/output lines, and multiple Arduino boards can be used to control multiple operant chambers. An external power supply and a series of micro relays are required to control the 28-V DC devices commonly used in operant chambers. Instructions for downloading and using the programs to generate simple and concurrent schedules of reinforcement are provided. Testing suggests that the interface is reliable, accurate, and could serve as an inexpensive alternative to commercial equipment. © Society for the Experimental Analysis of Behavior.
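A host-side sketch of the same polling loop, written in Python with pyserial rather than Visual Basic, is shown below; the port name, baud rate and single-character message protocol are assumptions, not the protocol used by the authors.

```python
import time
import serial  # pyserial

# Hypothetical protocol: the Arduino prints one line per input event,
# and single characters written to it switch outputs on or off.
PORT, BAUD = "/dev/ttyACM0", 9600   # adjust for your system (e.g. "COM3" on Windows)

def run_fixed_ratio(n_responses=5, session_s=60):
    """Deliver a reinforcer after every n_responses lever-press events."""
    with serial.Serial(PORT, BAUD, timeout=0.1) as board:
        count, end = 0, time.time() + session_s
        while time.time() < end:
            line = board.readline().decode(errors="ignore").strip()
            if line == "PRESS":              # event reported by the Arduino sketch
                count += 1
                print(time.time(), "response", count)
                if count >= n_responses:
                    board.write(b"R")        # ask the board to operate the feeder
                    count = 0

if __name__ == "__main__":
    run_fixed_ratio()
```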
Assessing NARCCAP climate model effects using spatial confidence regions
French, Joshua P.; McGinnis, Seth; Schwartzman, Armin
2017-01-01
We assess similarities and differences between model effects for the North American Regional Climate Change Assessment Program (NARCCAP) climate models using varying classes of linear regression models. Specifically, we consider how the average temperature effect differs for the various global and regional climate model combinations, including assessment of possible interaction between the effects of global and regional climate models. We use both pointwise and simultaneous inference procedures to identify regions where global and regional climate model effects differ. We also show conclusively that results from pointwise inference are misleading, and that accounting for multiple comparisons is important for making proper inference. PMID:28936474
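The gap between pointwise and simultaneous inference is easy to demonstrate numerically; the Gaussian simulation and Bonferroni-style correction below are a generic illustration of the multiple-comparisons point, not the NARCCAP analysis itself.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_grid, n_sims, alpha = 500, 2000, 0.05

# Null case: no real difference between two model effects anywhere on the grid.
z = rng.standard_normal((n_sims, n_grid))

pointwise_crit = norm.ppf(1 - alpha / 2)
bonferroni_crit = norm.ppf(1 - alpha / (2 * n_grid))

false_flag_pointwise = np.mean((np.abs(z) > pointwise_crit).any(axis=1))
false_flag_simultaneous = np.mean((np.abs(z) > bonferroni_crit).any(axis=1))

print("P(flag at least one grid cell | no true effect):")
print(f"  pointwise    : {false_flag_pointwise:.2f}")    # close to 1 -- misleading
print(f"  simultaneous : {false_flag_simultaneous:.2f}")  # close to the nominal 0.05
```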
Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.
Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas
2016-01-01
While perceptual learning increases objective sensitivity, its effects on the constant interaction between the process of perception and its metacognitive evaluation have rarely been investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects, here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity, estimated from certainty ratings by a bias-free signal detection theoretic approach, in contrast did not. Concomitant 3 Hz transcranial alternating current stimulation (tACS) was applied in accordance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself, determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.
A New Framework for Software Visualization: A Multi-Layer Approach
2006-09-01
The primary target is an exploration of the current state of the area so that we can discover the challenges and propose solutions for them. The study ... Small define both areas of study to collectively be a part of Software Visualization. ... 'Visual Programming' (VP) refers to ... founded taxonomy, with the proper characteristics, can further investigation in any field of study. A common language or terminology and the existence of ...
Magnetic Footpoint Velocities: A Combination Of Minimum Energy Fit AndLocal Correlation Tracking
NASA Astrophysics Data System (ADS)
Belur, Ravindra; Longcope, D.
2006-06-01
Many numerical and time-dependent MHD simulations of the solar atmosphere require the underlying velocity fields, which should be consistent with the induction equation. Recently, Longcope (2004) introduced a new technique to infer the photospheric velocity field from a sequence of vector magnetograms in agreement with the induction equation. The method, the Minimum Energy Fit (MEF), determines a set of velocities and selects the one with the smallest overall flow speed by minimizing an energy functional. The inferred velocity can be further constrained by information about the velocity inferred from other techniques. With this adopted technique we would expect that the inferred velocity will be close to the photospheric velocity of magnetic footpoints. Here, we demonstrate that the horizontal velocities inferred from LCT can be used to constrain the MEF velocities. We also apply this technique to actual vector magnetogram sequences and compare these velocities with velocities from LCT alone. This work is supported by the DoD MURI and NSF SHINE programs.
Open-Universe Theory for Bayesian Inference, Decision, and Sensing (OUTBIDS)
2014-01-01
... using a novel dynamic programming algorithm [6]. The second allows for tensor data, in which observations at a given time step exhibit ... We developed a dynamical tensor model that gives far better estimation and system-identification results than the standard vectorization ... inference. Third, unlike prior work that learns different pieces of the model independently, use matching between 3D models and 2D views and/or voting ...
De Freitas, Julian; Alvarez, George A
2018-05-28
To what extent are people's moral judgments susceptible to subtle factors of which they are unaware? Here we show that we can change people's moral judgments outside of their awareness by subtly biasing perceived causality. Specifically, we used subtle visual manipulations to create visual illusions of causality in morally relevant scenarios, and this systematically changed people's moral judgments. After demonstrating the basic effect using simple displays involving an ambiguous car collision that ends up injuring a person (E1), we show that the effect is sensitive on the millisecond timescale to manipulations of task-irrelevant factors that are known to affect perceived causality, including the duration (E2a) and asynchrony (E2b) of specific task-irrelevant contextual factors in the display. We then conceptually replicate the effect using a different paradigm (E3a), and also show that we can eliminate the effect by interfering with motion processing (E3b). Finally, we show that the effect generalizes across different kinds of moral judgments (E3c). Combined, these studies show that obligatory, abstract inferences made by the visual system influence moral judgments. Copyright © 2018 Elsevier B.V. All rights reserved.
Connecting Swath Satellite Data With Imagery in Mapping Applications
NASA Astrophysics Data System (ADS)
Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.
2016-12-01
Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.
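One simple way to preserve a pixel-to-data link for swath (Level 2) products is to index the swath samples by location and let a client look up the source sample behind any displayed pixel; the nearest-neighbour index below is a generic illustration of that idea under assumed coordinates and values, not the DAISY design.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical swath: irregular sample locations with one science value each.
rng = np.random.default_rng(0)
swath_lon = rng.uniform(-125, -115, 5000)
swath_lat = rng.uniform(30, 40, 5000)
swath_val = rng.normal(280, 5, 5000)            # e.g. brightness temperature (K)

index = cKDTree(np.column_stack([swath_lon, swath_lat]))

def value_at_pixel(lon, lat, max_dist=0.2):
    """Return the science value behind a displayed pixel, or None outside the swath."""
    dist, i = index.query([lon, lat])
    return float(swath_val[i]) if dist <= max_dist else None

print(value_at_pixel(-120.0, 35.0))
```

A real index would use proper geodetic distances and store sample indices rather than values, but the principle of resolving a displayed pixel back to its source observation is the same.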
Therapeutic Riding for a Student with Multiple Disabilities and Visual Impairment: A Case Study.
ERIC Educational Resources Information Center
Lehrman, Jennifer; Ross, David B.
2001-01-01
A 9-year-old with multiple disabilities and visual impairments was the focus of a 10-week developmental therapeutic riding program incorporating hippotherapy. The program has led to increased mobility, an increase in visual attention span and fixation time, signs of greater verbal communication, and the acquisition of new functional signs.…
ERIC Educational Resources Information Center
Alma, Manna A.; Groothoff, Johan W.; Melis-Dankers, Bart J. M.; Suurmeijer, Theo P. B. M.; van der Mei, Sijrike F.
2013-01-01
Introduction: The pilot study reported here determined the effectiveness of a multidisciplinary group rehabilitation program, Visually Impaired Elderly Persons Participating (VIPP), on psychosocial functioning. Methods: The single-group pretest-posttest pilot study included 29 persons with visual impairments (aged 55 and older) who were referred…
NASA Astrophysics Data System (ADS)
Gao, Zhong-Ke; Cai, Qing; Dong, Na; Zhang, Shan-Shan; Bo, Yun; Zhang, Jie
2016-10-01
Distinguishing the brain cognitive behavior underlying disabled and able-bodied subjects constitutes a challenging problem of significant importance. Complex network theory has established itself as a powerful tool for exploring functional brain networks, which sheds light on the inner workings of the human brain. Most existing work on constructing brain networks focuses on phase-synchronization measures between regional neural activities. In contrast, we propose a novel approach for inferring functional networks from P300 event-related potentials by integrating time- and frequency-domain information extracted from each channel signal, which we show to be efficient in subsequent pattern recognition. In particular, we construct the brain network by regarding each channel signal as a node and determining the edges in terms of the correlation of the extracted feature vectors. A six-choice P300 paradigm with six different images is used to test our new approach, involving one able-bodied subject and three disabled subjects suffering from multiple sclerosis, cerebral palsy, and traumatic brain and spinal-cord injury, respectively. We then exploit global efficiency, local efficiency and small-world indices from the derived brain networks to assess the network topological structure associated with different target images. The findings suggest that our method allows identifying differences in brain cognitive behavior related to visual stimuli between able-bodied and disabled subjects.
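The network construction described here, one node per channel and edges weighted by the correlation of per-channel feature vectors, followed by efficiency measures on the resulting graph, can be sketched with networkx; the random feature vectors and the edge threshold are placeholders for the paper's time- and frequency-domain features.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels, n_features = 32, 12
features = rng.normal(size=(n_channels, n_features))   # stand-in per-channel features

corr = np.corrcoef(features)                            # channel-by-channel similarity
threshold = 0.3

G = nx.Graph()
G.add_nodes_from(range(n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        if corr[i, j] > threshold:
            G.add_edge(i, j, weight=corr[i, j])

print("global efficiency:", round(nx.global_efficiency(G), 3))
print("local efficiency :", round(nx.local_efficiency(G), 3))
```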
Logic-centered architecture for ubiquitous health monitoring.
Lewandowski, Jacek; Arochena, Hisbel E; Naguib, Raouf N G; Chao, Kuo-Ming; Garcia-Perez, Alexeis
2014-09-01
One of the key points to maintain and boost research and development in the area of smart wearable systems (SWS) is the development of integrated architectures for intelligent services, as well as wearable systems and devices for health and wellness management. This paper presents such a generic architecture for multiparametric, intelligent and ubiquitous wireless sensing platforms. It is a transparent, smartphone-based sensing framework with customizable wireless interfaces and plug'n'play capability to easily interconnect third-party sensor devices. It caters to wireless body, personal, and near-me area networks. A pivotal part of the platform is the integrated inference engine/runtime environment that allows the mobile device to serve as a user-adaptable personal health assistant. The novelty of this system lies in its rapid visual development and remote deployment model. The complementary visual Inference Engine Editor that comes with the package enables artificial intelligence specialists, alongside medical experts, to build data processing models by assembling different components and instantly deploying them (remotely) on patient mobile devices. In this paper, the new logic-centered software architecture for ubiquitous health monitoring applications is described, followed by a discussion of how it helps to shift focus from software and hardware development to medical and health process-centered design of new SWS applications.
Tracking the visual focus of attention for a varying number of wandering people.
Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel
2008-07-01
We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, determining the focus of attention in settings where people's movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large, variable-dimensional state space, we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM)- and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.
Thurman, Steven M; Lu, Hongjing
2014-01-01
Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.
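For Gaussian cues, reliability-weighted integration reduces to inverse-variance weighting, which is enough to reproduce the qualitative trade-off reported here; the numbers below are illustrative.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Bayes-optimal fusion of independent Gaussian cues: inverse-variance weighting."""
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    w /= w.sum()
    fused_mean = np.dot(w, estimates)
    fused_var = 1.0 / np.sum(1.0 / variances)
    return fused_mean, fused_var, w

# A position cue places a contour point at 10 deg, an orientation cue implies 14 deg.
# As the orientation cue becomes noisier, the fused estimate shifts toward position.
for ori_var in (1.0, 4.0, 16.0):
    mean, var, w = combine_cues([10.0, 14.0], [2.0, ori_var])
    print(f"orientation variance {ori_var:>4}: fused = {mean:.2f}, weights = {np.round(w, 2)}")
```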
Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo
2018-01-01
We implemented an automated and efficient open-source software package for the analysis of multi-site neuronal spike signals. The software package, named SPICODYN, has been developed as a standalone Windows GUI application, using the C# programming language with Microsoft Visual Studio based on the .NET Framework 4.5 development environment. Accepted input data formats are HDF5, level 5 MAT and text files, containing recorded or generated time series of spike signal data. SPICODYN processes such electrophysiological signals focusing on spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented, dealing with multiple time delays (temporal extension) and with multiple binary patterns (high-order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing, in those specific cases, automated processing. The optimized implementation of the Delayed Transfer Entropy and the High-Order Transfer Entropy algorithms allows performing accurate and rapid analysis on multiple spike trains from thousands of electrodes.
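A minimal histogram-based transfer entropy for binary spike trains, with a single delay and first-order history, is sketched below as a reference point for what SPICODYN generalizes with multiple delays and higher-order patterns; the toy spike trains are illustrative.

```python
import numpy as np

def transfer_entropy(x, y, delay=1):
    """TE(X -> Y) in bits for binary spike trains, single delay, first-order history."""
    x, y = np.asarray(x, dtype=int), np.asarray(y, dtype=int)
    y_next, y_past, x_past = y[delay:], y[:-delay], x[:-delay]
    # Joint probabilities over the 2x2x2 states (y_next, y_past, x_past).
    joint = np.zeros((2, 2, 2))
    for yn, yp, xp in zip(y_next, y_past, x_past):
        joint[yn, yp, xp] += 1
    joint /= joint.sum()
    p_yp_xp = joint.sum(axis=0)        # p(y_past, x_past)
    p_yn_yp = joint.sum(axis=2)        # p(y_next, y_past)
    p_yp = joint.sum(axis=(0, 2))      # p(y_past)
    te = 0.0
    for yn in (0, 1):
        for yp in (0, 1):
            for xp in (0, 1):
                p = joint[yn, yp, xp]
                if p > 0:
                    te += p * np.log2(p * p_yp[yp] / (p_yp_xp[yp, xp] * p_yn_yp[yn, yp]))
    return te

rng = np.random.default_rng(0)
x = (rng.random(10000) < 0.2).astype(int)
y = np.roll(x, 1)                       # y copies x with a one-step delay
y[0] = 0
print("TE(x -> y):", round(transfer_entropy(x, y), 3))   # substantially above zero
print("TE(y -> x):", round(transfer_entropy(y, x), 3))   # near zero
```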
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also in developing a large clinical terminology now in production use at Kaiser Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
IdentiPy: An Extensible Search Engine for Protein Identification in Shotgun Proteomics.
Levitsky, Lev I; Ivanov, Mark V; Lobas, Anna A; Bubis, Julia A; Tarasova, Irina A; Solovyeva, Elizaveta M; Pridatchenko, Marina L; Gorshkov, Mikhail V
2018-06-18
We present an open-source, extensible search engine for shotgun proteomics. Implemented in Python programming language, IdentiPy shows competitive processing speed and sensitivity compared with the state-of-the-art search engines. It is equipped with a user-friendly web interface, IdentiPy Server, enabling the use of a single server installation accessed from multiple workstations. Using a simplified version of X!Tandem scoring algorithm and its novel "autotune" feature, IdentiPy outperforms the popular alternatives on high-resolution data sets. Autotune adjusts the search parameters for the particular data set, resulting in improved search efficiency and simplifying the user experience. IdentiPy with the autotune feature shows higher sensitivity compared with the evaluated search engines. IdentiPy Server has built-in postprocessing and protein inference procedures and provides graphic visualization of the statistical properties of the data set and the search results. It is open-source and can be freely extended to use third-party scoring functions or processing algorithms and allows customization of the search workflow for specialized applications.
The Center for Computational Biology: resources, achievements, and challenges
Dinov, Ivo D; Thompson, Paul M; Woods, Roger P; Van Horn, John D; Shattuck, David W; Parker, D Stott
2011-01-01
The Center for Computational Biology (CCB) is a multidisciplinary program where biomedical scientists, engineers, and clinicians work jointly to combine modern mathematical and computational techniques, to perform phenotypic and genotypic studies of biological structure, function, and physiology in health and disease. CCB has developed a computational framework built around the Manifold Atlas, an integrated biomedical computing environment that enables statistical inference on biological manifolds. These manifolds model biological structures, features, shapes, and flows, and support sophisticated morphometric and statistical analyses. The Manifold Atlas includes tools, workflows, and services for multimodal population-based modeling and analysis of biological manifolds. The broad spectrum of biomedical topics explored by CCB investigators includes the study of normal and pathological brain development, maturation and aging, discovery of associations between neuroimaging and genetic biomarkers, and the modeling, analysis, and visualization of biological shape, form, and size. CCB supports a wide range of short-term and long-term collaborations with outside investigators, which drive the center's computational developments and focus the validation and dissemination of CCB resources to new areas and scientific domains. PMID:22081221
Genetic network inference as a series of discrimination tasks.
Kimura, Shuhei; Nakayama, Satoshi; Hatakeyama, Mariko
2009-04-01
Genetic network inference methods based on sets of differential equations generally require a great deal of time, as the equations must be solved many times. To reduce the computational cost, researchers have proposed other methods for inferring genetic networks by solving sets of differential equations only a few times, or even without solving them at all. When we try to obtain reasonable network models using these methods, however, we must estimate the time derivatives of the gene expression levels with great precision. In this study, we propose a new method to overcome the drawbacks of inference methods based on sets of differential equations. Our method infers genetic networks by obtaining classifiers capable of predicting the signs of the derivatives of the gene expression levels. For this purpose, we defined a genetic network inference problem as a series of discrimination tasks, then solved the defined series of discrimination tasks with a linear programming machine. Our experimental results demonstrated that the proposed method is capable of correctly inferring genetic networks, and doing so more than 500 times faster than the other inference methods based on sets of differential equations. Next, we applied our method to actual expression data of the bacterial SOS DNA repair system. And finally, we demonstrated that our approach relates to the inference method based on the S-system model. Though our method provides no estimation of the kinetic parameters, it should be useful for researchers interested only in the network structure of a target system. Supplementary data are available at Bioinformatics online.
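The reformulation as discrimination tasks can be illustrated with a small stand-in: for each target gene, train a classifier to predict the sign of its expression derivative from the expression levels of candidate regulators, and read putative regulatory influences off the learned weights. The sketch below uses scikit-learn's linear SVM in place of the authors' linear programming machine, on a simulated three-gene system that is purely illustrative.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Simulate a toy system: gene 1 activates gene 0, gene 2 is irrelevant.
rng = np.random.default_rng(0)
T, dt = 400, 0.1
x = np.zeros((T, 3))
x[0] = rng.random(3)
for t in range(T - 1):
    dx0 = 1.2 * x[t, 1] - 0.8 * x[t, 0]
    x[t + 1, 0] = x[t, 0] + dt * dx0 + 0.01 * rng.normal()
    x[t + 1, 1:] = np.clip(x[t, 1:] + 0.05 * rng.normal(size=2), 0, None)

# Discrimination task for gene 0: predict the sign of its derivative from all levels.
levels = x[:-1]
signs = np.sign(np.diff(x[:, 0]))
mask = signs != 0
clf = LinearSVC(C=1.0, max_iter=10000).fit(levels[mask], signs[mask])
print("classification accuracy:", round(clf.score(levels[mask], signs[mask]), 3))
print("weights (gene 0, 1, 2):", np.round(clf.coef_[0], 2))
# Expected pattern: negative weight on gene 0 (self-decay), positive weight on gene 1
# (the activator), and a weight near zero on the irrelevant gene 2.
```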
Contextual modulation of primary visual cortex by auditory signals.
Petro, L S; Paton, A T; Muckli, L
2017-02-19
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Bringing Geoscience Research into Undergraduate Education in the Classroom and Online
NASA Astrophysics Data System (ADS)
Reed, D. L.
2008-12-01
The growth of the cyberinfrastructure provides new opportunities for students and instructors to place data-driven classroom and laboratory exercises in the context of an integrated research project. Undergraduate majors in a classroom section of the applied geophysics course at SJSU use Google Earth to first visualize the geomorphic expression of the Silver Creek fault in the foothills of the eastern Santa Clara Valley in order to identify key research questions regarding the northward projection of the fault beneath the valley floor, near downtown San Jose. The 3-D visualization, both regionally and locally, plays a key role in establishing the overall framework of the research. Students then plan a seismic hazards study in an urban environment, which is the primary focus of the class, using satellite imagery to locate specific stations along a geophysical transect crossing the inferred location of the fault. Geophysical modeling along the transect combines field-based data acquisition by members of the class with regional geophysical data, downloaded from an online USGS database. Students carry out all aspects of the research from project planning to data acquisition and analysis, report writing, and an oral presentation of the results. In contrast, online courses present special challenges as students may become frustrated navigating complex user interfaces, sometimes employed in research-driven online databases, and not achieve the desired learning outcomes. Consequently, an alternate approach, implemented in an online oceanography course, is for the instructor to first extract research data from online databases, build visualizations, and then place the learning objects in the context of a virtual oceanographic research expedition. Several examples of this approach, to engage students in the experience of oceanographic research, will be presented, including seafloor mapping studies around the Golden Gate and across the major ocean basins, using data obtained in part through the use of the Marine Geoscience Data System and GeoMapApp. Students also locate and undertake submersible dives inside hydrothermal vents using visualizations provided by the OceanExplorer program and New Millennium Observatory of NOAA/PMEL. Other learning activities include participation, at least virtually, in an iron fertilization experiment in the Southern Ocean (SOFeX) and the development of a model of surface circulation using data from the Global Drifter Program and the National Data Buoy Center. One factor contributing to student learning is to establish a research context for the class early on, so that students become engaged in a sense of exploration, testing and discovery.
Lights, Cameras, Pencils! Using Descriptive Video to Enhance Writing
ERIC Educational Resources Information Center
Hoffner, Helen; Baker, Eileen; Quinn, Kathleen Benson
2008-01-01
Students of various ages and abilities can increase their comprehension and build vocabulary with the help of a new technology, Descriptive Video. Descriptive Video (also known as described programming) was developed to give individuals with visual impairments access to visual media such as television programs and films. Described programs,…
ERIC Educational Resources Information Center
Ragan, Janet M.; Ragan, Tillman J.
1982-01-01
Briefly summarizes history of neurolinguistic programming, which set out to model elements and processes of effective communication and to reduce these to formulas that can be taught to others. Potential areas of inquiry for neurolinguistic programmers that should be of concern to visual literacists are discussed. (MBR)
A String Search Marketing Application Using Visual Programming
ERIC Educational Resources Information Center
Chin, Jerry M.; Chin, Mary H.; Van Landuyt, Cathryn
2013-01-01
This paper demonstrates the use of programming software that provides the student programmer with visual cues to construct the code for a programming assignment. This method does not disregard or minimize the syntax or required logical constructs. The student can concentrate more on the logic and less on the language itself.
User's guide to the Fault Inferring Nonlinear Detection System (FINDS) computer program
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Godiwala, P. M.; Satz, H. S.
1988-01-01
Described are the operation and internal structure of the computer program FINDS (Fault Inferring Nonlinear Detection System). The FINDS algorithm is designed to provide reliable estimates for aircraft position, velocity, attitude, and horizontal winds to be used for guidance and control laws in the presence of possible failures in the avionics sensors. The FINDS algorithm was developed with the use of a digital simulation of a commercial transport aircraft and tested with flight recorded data. The algorithm was then modified to meet the size constraints and real-time execution requirements on a flight computer. For the real-time operation, a multi-rate implementation of the FINDS algorithm has been partitioned to execute on a dual parallel processor configuration: one based on the translational dynamics and the other on the rotational kinematics. The report presents an overview of the FINDS algorithm, the implemented equations, the flow charts for the key subprograms, the input and output files, program variable indexing convention, subprogram descriptions, and the common block descriptions used in the program.
Kraehenmann, Rainer; Schmidt, André; Friston, Karl; Preller, Katrin H; Seifritz, Erich; Vollenweider, Franz X
2016-01-01
Stimulation of serotonergic neurotransmission by psilocybin has been shown to shift emotional biases away from negative towards positive stimuli. We have recently shown that reduced amygdala activity during threat processing might underlie psilocybin's effect on emotional processing. However, it is still not known whether psilocybin modulates bottom-up or top-down connectivity within the visual-limbic-prefrontal network underlying threat processing. We therefore analyzed our previous fMRI data using dynamic causal modeling and used Bayesian model selection to infer how psilocybin modulated effective connectivity within the visual-limbic-prefrontal network during threat processing. First, both placebo and psilocybin data were best explained by a model in which threat affect modulated bidirectional connections between the primary visual cortex, amygdala, and lateral prefrontal cortex. Second, psilocybin decreased the threat-induced modulation of top-down connectivity from the amygdala to primary visual cortex, speaking to a neural mechanism that might underlie putative shifts towards positive affect states after psilocybin administration. These findings may have important implications for the treatment of mood and anxiety disorders.
Bhavnani, Suresh K.; Chen, Tianlong; Ayyaswamy, Archana; Visweswaran, Shyam; Bellala, Gowtham; Divekar, Rohit; Bassler, Kevin E.
2017-01-01
A primary goal of precision medicine is to identify patient subgroups based on their characteristics (e.g., comorbidities or genes) with the goal of designing more targeted interventions. While network visualization methods such as Fruchterman-Reingold have been used to successfully identify such patient subgroups in small to medium sized data sets, they often fail to reveal comprehensible visual patterns in large and dense networks despite having significant clustering. We therefore developed an algorithm called ExplodeLayout, which exploits the existence of significant clusters in bipartite networks to automatically “explode” a traditional network layout with the goal of separating overlapping clusters, while at the same time preserving key network topological properties that are critical for the comprehension of patient subgroups. We demonstrate the utility of ExplodeLayout by visualizing a large dataset extracted from Medicare consisting of readmitted hip-fracture patients and their comorbidities, demonstrate its statistically significant improvement over a traditional layout algorithm, and discuss how the resulting network visualization enabled clinicians to infer mechanisms precipitating hospital readmission in specific patient subgroups. PMID:28815099
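A rough sketch of the "explode" idea, assuming networkx for the base layout and community detection: each detected cluster is translated radially away from the global centroid so overlapping clusters separate while within-cluster geometry is preserved. The clustering call and the push factor are illustrative choices, not the published ExplodeLayout algorithm.

```python
# Sketch: "explode" a network layout by translating each cluster radially away
# from the global centroid, so overlapping clusters become visually separable.
import networkx as nx
import numpy as np

G = nx.planted_partition_graph(4, 15, p_in=0.4, p_out=0.01, seed=3)
pos = nx.spring_layout(G, seed=3)                       # base layout
clusters = nx.algorithms.community.greedy_modularity_communities(G)

coords = np.array([pos[n] for n in G])
center = coords.mean(axis=0)

push = 2.0  # how far clusters are pushed apart (illustrative)
exploded = {}
for community in clusters:
    members = list(community)
    centroid = np.mean([pos[n] for n in members], axis=0)
    direction = centroid - center
    norm = np.linalg.norm(direction)
    offset = push * direction / norm if norm > 0 else 0.0
    for n in members:
        exploded[n] = pos[n] + offset   # preserve within-cluster geometry

# `exploded` can be passed to nx.draw(G, pos=exploded) for plotting.
```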
svviz: a read viewer for validating structural variants.
Spies, Noah; Zook, Justin M; Salit, Marc; Sidow, Arend
2015-12-15
Visualizing read alignments is the most effective way to validate candidate structural variants (SVs) with existing data. We present svviz, a sequencing read visualizer for SVs that sorts and displays only reads relevant to a candidate SV. svviz works by searching input bam(s) for potentially relevant reads, realigning them against the inferred sequence of the putative variant allele as well as the reference allele and identifying reads that match one allele better than the other. Separate views of the two alleles are then displayed in a scrollable web browser view, enabling a more intuitive visualization of each allele, compared with the single reference genome-based view common to most current read browsers. The browser view facilitates examining the evidence for or against a putative variant, estimating zygosity, visualizing affected genomic annotations and manual refinement of breakpoints. svviz supports data from most modern sequencing platforms. svviz is implemented in python and freely available from http://svviz.github.io/. Published by Oxford University Press 2015. This work is written by US Government employees and is in the public domain in the US.
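A rough sketch of the allele-assignment step described above, using difflib's similarity ratio as a crude stand-in for genuine realignment scoring; the haplotype sequences and reads are invented for illustration.

```python
# Sketch: assign each read to the allele (reference or variant) it matches
# better; difflib similarity is a crude stand-in for genuine realignment.
from difflib import SequenceMatcher

ref_allele = "ACGTACGTTTGCAACGTACGT"          # hypothetical reference haplotype
alt_allele = "ACGTACGTTTGCAGGGTTAACGTACGT"    # hypothetical variant haplotype (insertion)

reads = [
    "CGTACGTTTGCAACGTACG",   # spans the site without the inserted bases
    "GTTTGCAGGGTTAACGTAC",   # contains the inserted bases
]

def score(read, allele):
    """Similarity of a read to an allele sequence (0..1)."""
    return SequenceMatcher(None, read, allele).ratio()

for read in reads:
    ref_s, alt_s = score(read, ref_allele), score(read, alt_allele)
    call = "ref" if ref_s >= alt_s else "alt"
    print(f"{read}: ref={ref_s:.2f} alt={alt_s:.2f} -> supports {call}")
```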
A Neuro-Oncology Workstation for Structuring, Modeling, and Visualizing Patient Records
Hsu, William; Arnold, Corey W.; Taira, Ricky K.
2016-01-01
The patient medical record contains a wealth of information consisting of prior observations, interpretations, and interventions that need to be interpreted and applied towards decisions regarding current patient care. Given the time constraints and the large—often extraneous—amount of data available, clinicians are tasked with the challenge of performing a comprehensive review of how a disease progresses in individual patients. To facilitate this process, we demonstrate a neuro-oncology workstation that assists in structuring and visualizing medical data to promote an evidence-based approach for understanding a patient’s record. The workstation consists of three components: 1) a structuring tool that incorporates natural language processing to assist with the extraction of problems, findings, and attributes for structuring observations, events, and inferences stated within medical reports; 2) a data modeling tool that provides a comprehensive and consistent representation of concepts for the disease-specific domain; and 3) a visual workbench for visualizing, navigating, and querying the structured data to enable retrieval of relevant portions of the patient record. We discuss this workstation in the context of reviewing cases of glioblastoma multiforme patients. PMID:27583308
Automation trust and attention allocation in multitasking workspace.
Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke
2018-07-01
Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.
BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.
Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren
2016-01-01
Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
Inferring ontology graph structures using OWL reasoning.
Rodríguez-García, Miguel Ángel; Hoehndorf, Robert
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAGs), which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies have been developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph. Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
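A toy illustration of deriving graph edges that are implied rather than asserted, by combining subclass axioms with existential restrictions. This hand-rolled closure is only a sketch of what an OWL reasoner does in Onto2Graph, and the class names are made up.

```python
# Sketch: derive graph edges that are implied but not asserted, by combining
# subclass axioms with existential restrictions (a toy stand-in for OWL reasoning).
subclass_of = {          # asserted taxonomy (made-up classes)
    "MitoticCellCycle": "CellCycle",
    "CellCycle": "BiologicalProcess",
}
# Asserted axioms of the form: class SubClassOf (property some filler)
existential = {
    ("CellCycle", "part_of"): "CellProliferation",
}

def superclasses(cls):
    """All (reflexive, transitive) superclasses of a class."""
    seen = [cls]
    while cls in subclass_of:
        cls = subclass_of[cls]
        seen.append(cls)
    return seen

edges = set()
for cls in set(subclass_of) | {c for c, _ in existential}:
    for sup in superclasses(cls):
        for (subject, prop), filler in existential.items():
            if subject == sup:
                # cls inherits the restriction from its superclass `sup`
                edges.add((cls, prop, filler))

for edge in sorted(edges):
    print(edge)   # includes ('MitoticCellCycle', 'part_of', 'CellProliferation'), which was never asserted
```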
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners
ERIC Educational Resources Information Center
Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki
2017-01-01
Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a…
Cornuet, Jean-Marie; Santos, Filipe; Beaumont, Mark A.; Robert, Christian P.; Marin, Jean-Michel; Balding, David J.; Guillemaud, Thomas; Estoup, Arnaud
2008-01-01
Summary: Genetic data obtained on population samples convey information about their evolutionary history. Inference methods can extract part of this information but they require sophisticated statistical techniques that have been made available to the biologist community (through computer programs) only for simple and standard situations typically involving a small number of samples. We propose here a computer program (DIY ABC) for inference based on approximate Bayesian computation (ABC), in which scenarios can be customized by the user to fit many complex situations involving any number of populations and samples. Such scenarios involve any combination of population divergences, admixtures and population size changes. DIY ABC can be used to compare competing scenarios, estimate parameters for one or more scenarios and compute bias and precision measures for a given scenario and known values of parameters (the current version applies to unlinked microsatellite data). This article describes key methods used in the program and provides its main features. The analysis of one simulated and one real dataset, both with complex evolutionary scenarios, illustrates the main possibilities of DIY ABC. Availability: The software DIY ABC is freely available at http://www.montpellier.inra.fr/CBGP/diyabc. Contact: j.cornuet@imperial.ac.uk Supplementary information: Supplementary data are also available at http://www.montpellier.inra.fr/CBGP/diyabc PMID:18842597
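A minimal sketch of the ABC rejection idea underlying tools of this kind: simulate data under parameters drawn from the prior and keep the parameters whose summary statistic lands close to the observed one. The model (a normal mean), the prior, and the tolerance are illustrative only.

```python
# Sketch: approximate Bayesian computation by rejection sampling.
# Keep prior draws whose simulated summary statistic is close to the observed one.
import numpy as np

rng = np.random.default_rng(7)

observed = rng.normal(loc=2.0, scale=1.0, size=50)   # pretend these are the data
obs_stat = observed.mean()                           # summary statistic

n_draws, tolerance = 50_000, 0.05
accepted = []
for _ in range(n_draws):
    theta = rng.uniform(-5.0, 5.0)                   # draw from a flat prior
    simulated = rng.normal(loc=theta, scale=1.0, size=observed.size)
    if abs(simulated.mean() - obs_stat) < tolerance: # distance on the summary
        accepted.append(theta)

accepted = np.array(accepted)
print(f"accepted {accepted.size} draws; "
      f"posterior mean ~ {accepted.mean():.2f}, sd ~ {accepted.std():.2f}")
```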
Reed-Jones, Rebecca J; Dorgo, Sandor; Hitchings, Maija K; Bader, Julia O
2012-04-01
This study aimed to examine the effect of visual training on obstacle course performance of independent, community-dwelling older adults. Agility is the ability to rapidly alter ongoing motor patterns, an important aspect of mobility that is required in obstacle avoidance. However, visual information is also a critical factor in successful obstacle avoidance. We compared obstacle course performance of a group that trained in visually driven body movements and agility drills to a group that trained only in agility drills. We also included a control group that followed the American College of Sports Medicine exercise recommendations for older adults. Significant gains in fitness, mobility and power were observed across all training groups. Results revealed that visual training produced the greatest improvement in obstacle course performance (22%) following a 12-week training program. These results suggest that visual training may be an important consideration for fall prevention programs. Copyright © 2011 Elsevier B.V. All rights reserved.
The Effects of Regular Exercise Programs for Visually Impaired and Sighted Schoolchildren.
ERIC Educational Resources Information Center
Blessing, D. L.; And Others
1993-01-01
This study examined effects of a 16-week aerobic exercise training program on the cardiovascular fitness and body composition of 30 students with visual impairments. In comparison with traditional physical education provided to sighted students, the exercise training program resulted in a significant increase in cardiovascular fitness and a…
VISUAL and SLOPE: perspective and quantitative representation of digital terrain models.
R.J. McGaughey; R.H. Twito
1988-01-01
Two computer programs to help timber-harvest planners evaluate terrain for logging operations are presented. The first program, VISUAL, produces three-dimensional perspectives of a digital terrain model. The second, SLOPE, produces map-scaled overlays delineating areas of equal slope, aspect, or elevation. Both programs help planners familiarize themselves with new...
ERIC Educational Resources Information Center
Walker, Brad R.; Bozeman, Laura A.
2002-01-01
This article describes a collaborative process that parents, teachers, consumers, and advocacy groups in North Carolina used to successfully establish a permanently funded university training program specializing in visual impairments, the Visual Impairment Training Program. Within this process several factors were identified that contributed to…
ERIC Educational Resources Information Center
Weiss, Charles J.
2017-01-01
The Scientific Computing for Chemists course taught at Wabash College teaches chemistry students to use the Python programming language, Jupyter notebooks, and a number of common Python scientific libraries to process, analyze, and visualize data. Assuming no prior programming experience, the course introduces students to basic programming and…
ERIC Educational Resources Information Center
Lopez-Justicia, Maria D.; Martos, Francisco J.
1999-01-01
This study compared improvements in visual function of 20 Spanish children with low vision, ages 4 to 6 years. Children received either the Barraga and Morris program or the Frostig program, or placebo control or no treatment. No significant differences between treatment groups were found. (DB)
The Need for Motor Development Programs for Visually Impaired Preschoolers.
ERIC Educational Resources Information Center
Palazesi, Margot A.
1986-01-01
The paper advocates the development of movement programs for preschool visually impaired children to compensate for their orientation deficits. The author asserts that skills necessary for acquisition of spatial concepts should be taught through movement programs at an early age in the normal developmental sequence instead of attempting to remedy…
ERIC Educational Resources Information Center
Silberman, R. K.; And Others
1996-01-01
A survey of 69 faculty members from 32 universities offering preparation programs for teachers, orientation and mobility specialists, rehabilitation teachers, and doctoral-level leadership personnel serving people with visual impairments raises concerns about the future viability of such programs, in light of state budget cuts, faculty recruitment…
A Term Project in Visual Basic: The Downhill Snowboard Shop
ERIC Educational Resources Information Center
Simkin, Mark G.
2007-01-01
Most commercial programming applications are considerably more complex than the end-of-chapter exercises found in programming textbooks. This case addresses this problem by requiring the students in entry-level Visual Basic programming classes to create an application that helps users order ski equipment from a retailer. For convenience, the forms…
Yu, Xiaoyu; Reva, Oleg N
2018-01-01
Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models suited to alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole-genome super-matrix trees in terms of tree topology when compared with other known approaches, including 16S ribosomal RNA and GyrA protein sequence comparison, and the complete genome-based MAUVE and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units can be reinforced by providing the program with alignments of marker protein sequences, e.g., GyrA.
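A small sketch of the alignment-free idea: count tetranucleotide (4-mer) frequencies per genome, compare the frequency vectors, and cluster. The toy sequences and the use of correlation distance with average-linkage clustering are illustrative choices, not the published OUP procedure.

```python
# Sketch: alignment-free comparison of genomes via tetranucleotide (4-mer)
# frequency profiles, followed by hierarchical clustering of the distances.
from itertools import product
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]   # all 256 tetramers
INDEX = {k: i for i, k in enumerate(KMERS)}

def oup_profile(seq):
    """Normalized tetranucleotide frequency vector of a DNA sequence."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in INDEX:                 # skips ambiguous bases such as N
            counts[INDEX[kmer]] += 1
    return counts / max(counts.sum(), 1)

# Toy "genomes" (real use would read whole-genome FASTA files).
rng = np.random.default_rng(5)
base = "".join(rng.choice(list("ACGT"), size=5000))
genomes = {"strain_A": base,
           "strain_B": base[:4000] + "".join(rng.choice(list("ACGT"), size=1000)),
           "strain_C": "".join(rng.choice(list("ACGT"), size=5000))}

profiles = np.array([oup_profile(s) for s in genomes.values()])
tree = linkage(pdist(profiles, metric="correlation"), method="average")
print(tree)   # pass `tree` to scipy.cluster.hierarchy.dendrogram for a plot
```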
ERIC Educational Resources Information Center
Alnabhan, Mousa; Alhamdan, Najat; Darwish, Ahmed
2014-01-01
The current study aimed at investigating the effect of the Master Thinker program on developing critical thinking skills of 11th grade students in Bahrain. Specifically, this research attempts to examine the hypothesis: Teaching the Master Thinker program will be significantly effective in developing critical thinking and its skills (inference,…
David L. Peterson; Daniel L. Schmoldt
1999-01-01
The National Park Service and other public agencies are increasing their emphasis on inventory and monitoring (I&M) programs to obtain the information needed to infer changes in resource conditions and trigger management responses. A few individuals on a planning team can develop I&M programs, although a focused workshop is more effective. Workshops are...
C Language Integrated Production System, Ada Version
NASA Technical Reports Server (NTRS)
Culbert, Chris; Riley, Gary; Savely, Robert T.; Melebeck, Clovis J.; White, Wesley A.; Mcgregor, Terry L.; Ferguson, Melisa; Razavipour, Reza
1992-01-01
CLIPS/Ada provides capabilities of CLIPS v4.3 but uses Ada as source language for CLIPS executable code. Implements forward-chaining rule-based language. Program contains inference engine and language syntax providing framework for construction of expert-system program. Also includes features for debugging application program. Based on Rete algorithm which provides efficient method for performing repeated matching of patterns. Written in Ada.
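A compact sketch of the forward-chaining behaviour such an inference engine provides; the Rete algorithm avoids re-matching every rule on every cycle, whereas this naive loop does not. Facts and rules are invented for illustration.

```python
# Sketch: a naive forward-chaining rule engine. Rules fire when all of their
# condition facts are present, adding new facts until a fixed point is reached.
# (Rete avoids re-matching all rules each cycle; this simple loop does not.)

facts = {("sensor", "temp", "high"), ("sensor", "pressure", "high")}

rules = [
    # (conditions, conclusion) -- all tuples are hypothetical examples
    ({("sensor", "temp", "high")}, ("alarm", "overheat", "on")),
    ({("sensor", "temp", "high"), ("sensor", "pressure", "high")},
     ("action", "shutdown", "recommended")),
    ({("alarm", "overheat", "on")}, ("action", "notify-operator", "now")),
]

changed = True
while changed:                      # repeat until no rule adds a new fact
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule
            changed = True

for fact in sorted(facts):
    print(fact)
```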
Fault-Tolerant Control For A Robotic Inspection System
NASA Technical Reports Server (NTRS)
Tso, Kam Sing
1995-01-01
Report describes first phase of continuing program of research on fault-tolerant control subsystem of telerobotic visual-inspection system. Goal of program to develop robotic system for remotely controlled visual inspection of structures in outer space.
Visual Environment for Rich Data Interpretation (VERDI) program for environmental modeling systems
VERDI is a flexible, modular, Java-based program used for visualizing multivariate gridded meteorology, emissions and air quality modeling data created by environmental modeling systems such as the CMAQ model and WRF.
Procedurally Mediated Social Inferences: The Case of Category Accessibility Effects.
1984-12-01
New York: Academic. Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning... more "deeply" encoded semantic features (cf. Craik & Lockhart, 1972). (A few theorists assume that visual images may also be used as an alternative... semantically rather than phonemically or graphemically (Craik & Lockhart, 1972). It is this familiar type of declarative memory of which we are usually
ERIC Educational Resources Information Center
Blake, Peter R.; Ganea, Patricia A.; Harris, Paul L.
2012-01-01
Children can identify owners either by seeing a person in possession of an object (a visual cue) and inferring that they are the owner or by hearing testimony about a claim of ownership (a verbal cue). A total of 391 children between 2.5 and 6 years of age were tested in three experiments assessing how children identify owners when these two cues…
The Doctor Is In! Diagnostic Analysis.
Jupiter, Daniel C
To make meaningful inferences based on our regression models, we must ensure that we have met the necessary assumptions of these tests. In this commentary, we review these assumptions and those for the t-test and analysis of variance, and introduce a variety of methods, formal and informal, numeric and visual, for assessing conformity with the assumptions. Copyright © 2018 The American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
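A brief sketch of the kind of numeric and visual assumption checks the commentary describes, using statsmodels and scipy on simulated data; the particular tests and plots shown are common choices rather than the article's prescribed set.

```python
# Sketch: fit a linear regression and check assumptions numerically and visually
# (normality of residuals, homoscedasticity) on simulated data.
import numpy as np
import statsmodels.api as sm
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(11)
x = rng.uniform(0, 10, 200)
y = 1.5 + 0.8 * x + rng.normal(scale=1.0, size=x.size)   # linear truth + noise

X = sm.add_constant(x)               # add intercept column
fit = sm.OLS(y, X).fit()
resid, fitted = fit.resid, fit.fittedvalues

# Numeric check: Shapiro-Wilk test for normality of residuals.
w_stat, p_value = stats.shapiro(resid)
print(f"Shapiro-Wilk p = {p_value:.3f} (large p: no evidence against normality)")

# Visual checks: residuals vs. fitted (homoscedasticity) and a Q-Q plot.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(fitted, resid, s=10)
ax1.axhline(0.0, color="grey")
ax1.set(xlabel="fitted values", ylabel="residuals", title="Residuals vs fitted")
sm.qqplot(resid, line="45", fit=True, ax=ax2)
ax2.set_title("Q-Q plot of residuals")
plt.tight_layout()
plt.show()
```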
ERIC Educational Resources Information Center
GROVER, EDWARD C.; AND OTHERS
This study investigated the declining enrollment in Ohio's programs for partially seeing children and the problems of incidence, visual functioning, and multiple handicaps. Partially seeing children identified by the study had a visual acuity after correction of 20/70 or less and/or correction of more than 10 diopters of myopia. The school nurses…
BiKEGG: a COBRA toolbox extension for bridging the BiGG and KEGG databases.
Jamialahmadi, Oveis; Motamedian, Ehsan; Hashemi-Najafabadi, Sameereh
2016-10-18
Development of an interface tool between the Biochemical, Genetic and Genomic (BiGG) and KEGG databases is necessary for simultaneous access to the features of both databases. For this purpose, we present the BiKEGG toolbox, an open source COBRA toolbox extension providing a set of functions to infer the reaction correspondences between the KEGG reaction identifiers and those in the BiGG knowledgebase using a combination of manual verification and computational methods. Inferred reaction correspondences using this approach are supported by evidence from the literature, which provides a higher number of reconciled reactions between these two databases compared to the MetaNetX and MetRxn databases. This set of equivalent reactions is then used to automatically superimpose the predicted fluxes using COBRA methods on classical KEGG pathway maps or to create a customized metabolic map based on the KEGG global metabolic pathway, and to find the corresponding reactions in BiGG based on the genome annotation of an organism in the KEGG database. Customized metabolic maps can be created for a set of pathways of interest, for the whole KEGG global map or exclusively for all pathways for which there exists at least one flux carrying reaction. This flexibility in visualization enables BiKEGG to indicate reaction directionality as well as to visualize the reaction fluxes for different static or dynamic conditions in an animated manner. BiKEGG allows the user to export (1) the output visualized metabolic maps to various standard image formats or save them as a video or animated GIF file, and (2) the equivalent reactions for an organism as an Excel spreadsheet.
Coordinated Optimization of Visual Cortical Maps (I) Symmetry-based Analysis
Reichl, Lars; Heide, Dominik; Löwel, Siegrid; Crowley, Justin C.; Kaschube, Matthias; Wolf, Fred
2012-01-01
In the primary visual cortex of primates and carnivores, functional architecture can be characterized by maps of various stimulus features such as orientation preference (OP), ocular dominance (OD), and spatial frequency. It is a long-standing question in theoretical neuroscience whether the observed maps should be interpreted as optima of a specific energy functional that summarizes the design principles of cortical functional architecture. A rigorous evaluation of this optimization hypothesis is particularly demanded by recent evidence that the functional architecture of orientation columns precisely follows species invariant quantitative laws. Because it would be desirable to infer the form of such an optimization principle from the biological data, the optimization approach to explain cortical functional architecture raises the following questions: i) What are the genuine ground states of candidate energy functionals and how can they be calculated with precision and rigor? ii) How do differences in candidate optimization principles impact on the predicted map structure and conversely what can be learned about a hypothetical underlying optimization principle from observations on map structure? iii) Is there a way to analyze the coordinated organization of cortical maps predicted by optimization principles in general? To answer these questions we developed a general dynamical systems approach to the combined optimization of visual cortical maps of OP and another scalar feature such as OD or spatial frequency preference. From basic symmetry assumptions we obtain a comprehensive phenomenological classification of possible inter-map coupling energies and examine representative examples. We show that each individual coupling energy leads to a different class of OP solutions with different correlations among the maps such that inferences about the optimization principle from map layout appear viable. We systematically assess whether quantitative laws resembling experimental observations can result from the coordinated optimization of orientation columns with other feature maps. PMID:23144599
Gogoshin, Grigoriy; Boerwinkle, Eric; Rodin, Andrei S
2017-04-01
Bayesian network (BN) reconstruction is a prototypical systems biology data analysis approach that has been successfully used to reverse engineer and model networks reflecting different layers of biological organization (ranging from genetic to epigenetic to cellular pathway to metabolomic). It is especially relevant in the context of modern (ongoing and prospective) studies that generate heterogeneous high-throughput omics datasets. However, there are both theoretical and practical obstacles to the seamless application of BN modeling to such big data, including computational inefficiency of optimal BN structure search algorithms, ambiguity in data discretization, mixing data types, imputation and validation, and, in general, limited scalability in both reconstruction and visualization of BNs. To overcome these and other obstacles, we present BNOmics, an improved algorithm and software toolkit for inferring and analyzing BNs from omics datasets. BNOmics aims at comprehensive systems biology-type data exploration, including both generating new biological hypotheses and testing and validating the existing ones. Novel aspects of the algorithm center around increasing scalability and applicability to varying data types (with different explicit and implicit distributional assumptions) within the same analysis framework. An output and visualization interface to widely available graph-rendering software is also included. Three diverse applications are detailed. BNOmics was originally developed in the context of genetic epidemiology data and is being continuously optimized to keep pace with the ever-increasing inflow of available large-scale omics datasets. As such, the software scalability and usability on less-than-exotic computer hardware are a priority, as well as the applicability of the algorithm and software to heterogeneous datasets containing many data types: single-nucleotide polymorphisms and other genetic/epigenetic/transcriptome variables, metabolite levels, epidemiological variables, endpoints, and phenotypes, etc.
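A toy sketch of the score-based structure learning that such toolkits automate: greedy parent selection under a BIC score for binary variables, given an assumed node ordering. The variable names, data-generating process, and ordering are illustrative; BNOmics's actual search and scoring are more general.

```python
# Sketch: greedy score-based Bayesian-network structure search for discrete
# (binary) variables. For each node, parents are added greedily from the nodes
# earlier in a fixed ordering whenever they improve the BIC score.
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n = 2000
# Toy data with known structure: A -> B, (A, B) -> C   (hypothetical variables)
A = rng.integers(0, 2, n)
B = (A ^ (rng.random(n) < 0.2)).astype(int)           # noisy copy of A
C = ((A & B) ^ (rng.random(n) < 0.1)).astype(int)     # noisy AND of A and B
data = {"A": A, "B": B, "C": C}

def bic(child, parents):
    """BIC of the local model P(child | parents) for binary variables."""
    y = data[child]
    loglik, n_params = 0.0, 0
    for combo in product([0, 1], repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, combo):
            mask &= data[p] == v
        m = mask.sum()
        if m == 0:
            continue
        p1 = (y[mask].sum() + 1) / (m + 2)             # smoothed estimate
        loglik += y[mask].sum() * np.log(p1) + (m - y[mask].sum()) * np.log(1 - p1)
        n_params += 1
    return loglik - 0.5 * n_params * np.log(n)

order = ["A", "B", "C"]                                # assumed node ordering
structure = {}
for i, child in enumerate(order):
    parents, score = [], bic(child, [])
    improved = True
    while improved:
        improved = False
        for cand in order[:i]:
            if cand in parents:
                continue
            s = bic(child, parents + [cand])
            if s > score:
                parents, score, improved = parents + [cand], s, True
    structure[child] = parents

print(structure)   # expected to recover {'A': [], 'B': ['A'], 'C': ['A', 'B']}
```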
Papatheodorou, Irene; Ziehm, Matthias; Wieser, Daniela; Alic, Nazif; Partridge, Linda; Thornton, Janet M.
2012-01-01
A challenge of systems biology is to integrate incomplete knowledge on pathways with existing experimental data sets and relate these to measured phenotypes. Research on ageing often generates such incomplete data, creating difficulties in integrating RNA expression with information about biological processes and the phenotypes of ageing, including longevity. Here, we develop a logic-based method that employs Answer Set Programming, and use it to infer signalling effects of genetic perturbations, based on a model of the insulin signalling pathway. We apply our method to RNA expression data from Drosophila mutants in the insulin pathway that alter lifespan in a foxo-dependent fashion. We use this information to deduce how the pathway influences lifespan in the mutant animals. We also develop a method for inferring the largest common sub-paths within each of our signalling predictions. Our comparisons reveal consistent homeostatic mechanisms across both long- and short-lived mutants. The transcriptional changes observed in each mutation usually provide negative feedback to the signalling predicted for that mutation. We also identify an S6K-mediated feedback in two long-lived mutants that suggests a crosstalk between these pathways in mutants of the insulin pathway, in vivo. By formulating the problem as a logic-based theory in a qualitative fashion, we are able to use the efficient search facilities of Answer Set Programming, allowing us to explore larger pathways, combine molecular changes with pathways and phenotype, and infer effects on signalling in in vivo, whole-organism mutants, where direct signalling stimulation assays are difficult to perform. Our methods are available in the web-service NetEffects: http://www.ebi.ac.uk/thornton-srv/software/NetEffects. PMID:23251396
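A simplified sketch of the qualitative reasoning involved: propagating the sign of a genetic perturbation through a signed pathway graph to predict downstream signalling effects. This plain-Python propagation is only an analogy for the Answer Set Programming encoding used by NetEffects, and the edges shown are invented.

```python
# Sketch: qualitatively propagate a perturbation through a signed pathway graph
# (+1 = activation, -1 = inhibition) to predict downstream signalling effects.
from collections import deque

# Hypothetical signed edges loosely inspired by insulin/TOR signalling.
edges = {
    "InR":  [("PI3K", +1)],
    "PI3K": [("AKT", +1)],
    "AKT":  [("FOXO", -1), ("TOR", +1)],
    "TOR":  [("S6K", +1)],
    "FOXO": [("stress_response", +1)],
}

def propagate(perturbed_node, direction):
    """Predicted sign of each downstream node after perturbing one node."""
    effect = {perturbed_node: direction}
    queue = deque([perturbed_node])
    while queue:
        node = queue.popleft()
        for target, sign in edges.get(node, []):
            predicted = effect[node] * sign
            if target not in effect:        # keep the first prediction reached
                effect[target] = predicted
                queue.append(target)
    return effect

# Example: a loss-of-function InR mutant (direction -1).
for node, sign in propagate("InR", -1).items():
    print(f"{node}: {'up' if sign > 0 else 'down'}")
```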
Algorithms for database-dependent search of MS/MS data.
Matthiesen, Rune
2013-01-01
The frequently used bottom-up strategy for identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take as input MS/MS spectra and a protein sequence database are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent search of MS/MS spectra, and most of the programs have excellent user documentation. The aim here is therefore to outline the algorithmic strategy behind different search engines rather than to provide software user manuals. The process of database-dependent search can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have been put into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if the software implementation has been demonstrated to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed for most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.
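A minimal sketch of the peptide-scoring step: count how many theoretical b/y fragment masses of a candidate peptide fall within a tolerance of the observed peaks. The residue masses are standard monoisotopic values, but the peptides, spectrum, and shared-peak-count score are illustrative; real engines use far more elaborate scoring.

```python
# Sketch: score a candidate peptide against an MS/MS spectrum by counting
# theoretical b/y fragment ions that match observed peaks within a tolerance.
MONO = {  # monoisotopic residue masses (Da)
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406,
    "N": 114.04293, "D": 115.02694, "Q": 128.05858, "K": 128.09496,
    "E": 129.04259, "M": 131.04049, "H": 137.05891, "F": 147.06841,
    "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
PROTON, WATER = 1.00728, 18.01056

def fragment_mzs(peptide):
    """Singly charged b- and y-ion m/z values for a peptide."""
    residues = [MONO[aa] for aa in peptide]
    b = [sum(residues[:i]) + PROTON for i in range(1, len(peptide))]
    y = [sum(residues[i:]) + WATER + PROTON for i in range(1, len(peptide))]
    return b + y

def shared_peak_count(peptide, observed_mzs, tol=0.5):
    """Number of theoretical fragments matching an observed peak within tol Da."""
    return sum(
        any(abs(mz - obs) <= tol for obs in observed_mzs)
        for mz in fragment_mzs(peptide)
    )

# Hypothetical spectrum: peaks consistent with several PEPTIDE fragments plus noise.
spectrum = [98.06, 148.06, 227.10, 263.09, 324.16, 376.17, 500.00]
for candidate in ("PEPTIDE", "LEVELK"):
    print(candidate, shared_peak_count(candidate, spectrum))
```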
Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A
2012-01-01
Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
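A small numpy sketch of the core computation such a library accelerates: Felsenstein's pruning recursion for the likelihood of a single site on a fixed tree under the Jukes-Cantor model. The tree, branch lengths, and observed bases are made up; BEAGLE's API and optimizations are not reproduced here.

```python
# Sketch: single-site phylogenetic likelihood via Felsenstein's pruning
# algorithm under the Jukes-Cantor (JC69) substitution model.
import numpy as np

BASES = "ACGT"

def jc69(t):
    """4x4 transition probability matrix for branch length t (substitutions/site)."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((4, 4), diff) + np.eye(4) * (same - diff)

# Hypothetical rooted tree: ((tip1:0.1, tip2:0.2):0.05, tip3:0.3)
tree = ("root", 0.0, [
    ("internal", 0.05, [("tip1", 0.1, []), ("tip2", 0.2, [])]),
    ("tip3", 0.3, []),
])
site = {"tip1": "A", "tip2": "A", "tip3": "G"}   # observed bases at one column

def partials(node):
    """Conditional likelihoods P(data below node | state at node), length 4."""
    name, _, children = node
    if not children:                       # leaf: indicator of the observed base
        vec = np.zeros(4)
        vec[BASES.index(site[name])] = 1.0
        return vec
    result = np.ones(4)
    for child in children:
        _, branch, _ = child
        result *= jc69(branch) @ partials(child)   # sum over child states
    return result

likelihood = 0.25 * partials(tree).sum()   # uniform JC69 root frequencies
print(f"site likelihood = {likelihood:.6g}")
```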
SimITK: visual programming of the ITK image-processing library within Simulink.
Dickinson, Andrew W L; Abolmaesumi, Purang; Gobbi, David G; Mousavi, Parvin
2014-04-01
The Insight Segmentation and Registration Toolkit (ITK) is a software library used for image analysis, visualization, and image-guided surgery applications. ITK is a collection of C++ classes that poses the challenge of a steep learning curve should the user not have appropriate C++ programming experience. To remove the programming complexities and facilitate rapid prototyping, an implementation of ITK within a higher-level visual programming environment is presented: SimITK. ITK functionalities are automatically wrapped into "blocks" within Simulink, the visual programming environment of MATLAB, where these blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. The heavily templated C++ nature of ITK does not facilitate direct interaction between Simulink and ITK; an intermediary is required to convert respective data types and allow intercommunication. As such, a SimITK "Virtual Block" has been developed that serves as a wrapper around an ITK class which is capable of resolving the ITK data types to native Simulink data types. Part of the challenge surrounding this implementation involves automatically capturing and storing the pertinent class information that need to be refined from an initial state prior to being reflected within the final block representation. The primary result from the SimITK wrapping procedure is multiple Simulink block libraries. From these libraries, blocks are selected and interconnected to demonstrate two examples: a 3D segmentation workflow and a 3D multimodal registration workflow. Compared to their pure-code equivalents, the workflows highlight ITK usability through an alternative visual interpretation of the code that abstracts away potentially confusing technicalities.
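A small conceptual sketch of the "virtual block" idea in plain Python: a generic wrapper exposes a processing step as a block with named input ports so that blocks can be chained like a visual workflow. The block class and the trivial image operations are invented for illustration and are not SimITK or ITK code.

```python
# Sketch: a "virtual block" wrapper that exposes a processing function as a
# block with named ports, so blocks can be wired into a simple workflow.
import numpy as np

class Block:
    """Wraps a function so it can be connected to other blocks by port name."""
    def __init__(self, name, func):
        self.name, self.func, self.inputs = name, func, {}

    def connect(self, port, upstream):
        self.inputs[port] = upstream          # upstream is another Block

    def run(self):
        kwargs = {port: up.run() for port, up in self.inputs.items()}
        return self.func(**kwargs)

# Invented stand-ins for image-processing steps (not ITK filters).
def load_image():
    rng = np.random.default_rng(0)
    return rng.random((64, 64))

def smooth(image):
    # crude box blur via shifted averages
    return (image + np.roll(image, 1, 0) + np.roll(image, 1, 1)) / 3.0

def threshold(image):
    return (image > image.mean()).astype(np.uint8)

# Wire the blocks into a workflow: load -> smooth -> threshold.
reader = Block("reader", load_image)
blur = Block("blur", smooth)
seg = Block("segment", threshold)
blur.connect("image", reader)
seg.connect("image", blur)

mask = seg.run()
print("segmented pixels:", int(mask.sum()))
```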
Functional vision in children with perinatal brain damage.
Alimović, Sonja; Jurić, Nikolina; Bošnjak, Vlatka Mejaški
2014-09-01
Many authors have discussed the effects of visual stimulation on visual functions, but there is no research on its effects on the use of vision in everyday activities (i.e. functional vision). Children with perinatal brain damage can develop cerebral visual impairment with preserved visual functions (e.g. visual acuity, contrast sensitivity) but poor functional vision. Our aim was to discuss the importance of assessing and stimulating functional vision in children with perinatal brain damage. We assessed visual functions (grating visual acuity, contrast sensitivity) and functional vision (the ability to maintain visual attention and to use vision in communication) in 99 children with perinatal brain damage and visual impairment. All children were assessed before and after the visual stimulation program. Our first assessment showed that children with perinatal brain damage had significantly more problems in functional vision than in basic visual functions. During the visual stimulation program both variables of functional vision and contrast sensitivity improved significantly, while grating acuity improved in only 2.7% of children. We also found that improvement of visual attention correlated significantly with improvement in all other functions describing vision. Therefore, functional vision assessment, especially assessment of visual attention, is indispensable in the early monitoring of children with perinatal brain damage.
BlueJ Visual Debugger for Learning the Execution of Object-Oriented Programs?
ERIC Educational Resources Information Center
Bennedsen, Jens; Schulte, Carsten
2010-01-01
This article reports on an experiment undertaken in order to evaluate the effect of a program visualization tool for helping students to better understand the dynamics of object-oriented programs. The concrete tool used was BlueJ's debugger and object inspector. The study was done as a control-group experiment in an introductory programming…
ERIC Educational Resources Information Center
Sengupta, Pratim; Farris, Amy Voss; Wright, Mason
2012-01-01
Novice learners find motion as a continuous process of change challenging to understand. In this paper, we present a pedagogical approach based on agent-based, visual programming to address this issue. Integrating agent-based programming, in particular, Logo programming, with curricular science has been shown to be challenging in previous research…
Self-Study and Evaluation Guide/1979 Edition. Section D-16: Other Service Program.
ERIC Educational Resources Information Center
National Accreditation Council for Agencies Serving the Blind and Visually Handicapped, New York, NY.
The self-evaluation guide is designed for the accreditation of services to blind and visually handicapped students in service programs for which the NAC (National Accreditation Council for Agencies Serving the Blind and Visually Handicapped) does not have specific program standards (such as radio reading services and library services).…
ERIC Educational Resources Information Center
Wielgosz, Meg; Molyneux, Paul
2015-01-01
Students learning English as an additional language (EAL) in Australian schools frequently struggle with the cultural and linguistic demands of the classroom while concurrently grappling with issues of identity and belonging. This article reports on an investigation of the role primary school visual arts programs, distinct programs with a…
Programming Education with a Blocks-Based Visual Language for Mobile Application Development
ERIC Educational Resources Information Center
Mihci, Can; Ozdener, Nesrin
2014-01-01
The aim of this study is to assess the impact upon academic success of the use of a reference block-based visual programming tool, namely the MIT App Inventor for Android, as an educational instrument for teaching object-oriented GUI-application development (CS2) concepts to students; who have previously completed a fundamental programming course…
ERIC Educational Resources Information Center
Vosinakis, Spyros; Anastassakis, George; Koutsabasis, Panayiotis
2018-01-01
Logic Programming (LP) follows the declarative programming paradigm, which novice students often find hard to grasp. The limited availability of visual teaching aids for LP can lead to low motivation for learning. In this paper, we present a platform for teaching and learning Prolog in Virtual Worlds, which enables the visual interpretation and…
The Application of Logic Programming to Communication Education.
ERIC Educational Resources Information Center
Sanford, David L.
Recommending that communication students be required to learn to use computers not merely as number crunchers, word processors, databases, and graphics generators, but also as logical inference makers, this paper examines the recently developed technology of logic programming in computer languages. It presents two syllogisms and shows how they…
Inference and Discovery in an Exploratory Laboratory. Technical Report No. 10.
ERIC Educational Resources Information Center
Shute, Valerie; And Others
This paper describes the results of a study done as part of a research program investigating the use of computer-based laboratories to support self-paced discovery learning related to microeconomics, electricity, and light refraction. Program objectives include maximizing the laboratories' effectiveness in helping students learn content…
Literacy, Language and Social Interaction in Special Schools
ERIC Educational Resources Information Center
Reichenberg, Monica
2015-01-01
The present study is a follow-up to a quantitative intervention study in which two intervention programs, Reciprocal Teaching and Inference Training, were practiced. This study aims at capturing the potential benefits and qualitative aspects of one of the programs evaluated, Reciprocal Teaching. More specifically, I have investigated the video…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glass, Samuel W.; Fifield, Leonard S.; Hartman, Trenton S.
This Pacific Northwest National Laboratory (PNNL) milestone report describes progress to date on the investigation of nondestructive examination (NDE) methods, focusing particularly on local measurements that provide key indicators of cable aging and damage. The work includes a review of relevant literature as well as hands-on experimental verification of inspection capabilities. As nuclear power plants (NPPs) consider applying for a second, or subsequent, license renewal (SLR) to extend their operating period from 60 years to 80 years, it is important to understand how the materials installed in plant systems and components will age during that time and to develop aging management programs (AMPs) to assure continued safe operation under normal conditions and design basis events (DBEs). Normal component and system tests typically confirm that the cables can perform their normal operational function. The focus of the cable test program is directed toward the more demanding challenge of assuring cable function under accident or DBE conditions. Most utilities already have a program associated with their first life extension from 40 to 60 years. However, there is neither a clear guideline nor a single NDE technique that can assure cable function and integrity for all cables. In practice, implementation of a broad range of tests allows utilities to develop a program that assures cable function to a high degree. The industry has adopted 50% elongation at break (EAB), relative to the un-aged cable condition, as the acceptability standard. All tests are benchmarked against the cable EAB test. Because EAB is a destructive test, test programs must apply an array of other NDE tests to assure or infer the integrity of the overall cable system. These cable NDE programs vary in rigor and methodology. As the industry gains experience with the efficacy of these programs, it is expected that implementation practice will converge toward a more common approach. This report addresses the range of local NDE cable tests that are or could be practically implemented in a field test situation. These tests include: visual inspection, infrared thermography, interdigital capacitance, indenter, relaxation time indenter, dynamic mechanical analyzer, infrared/near-infrared spectrometry, ultrasound, and distributed fiber optic temperature measurement.
Efficient reordering of PROLOG programs
NASA Technical Reports Server (NTRS)
Gooley, Markian M.; Wah, Benjamin W.
1989-01-01
PROLOG programs are often inefficient: execution corresponds to a depth-first traversal of an AND/OR graph; traversing subgraphs in another order can be less expensive. It is shown how the reordering of clauses within PROLOG predicates, and especially of goals within clauses, can prevent unnecessary search. The characterization and detection of restrictions on reordering is discussed. A system of calling modes for PROLOG, geared to reordering, is proposed, and ways to infer them automatically are discussed. The information needed for safe reordering is summarized, and which types can be inferred automatically and which must be provided by the user are considered. An improved method for determining a good order for the goals of PROLOG clauses is presented and used as the basis for a reordering system.
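A toy illustration of why reordering matters (not the paper's algorithm; the cost model and numbers below are invented): if per-goal cost and success-probability estimates are available, the expected work of executing goals left to right can be computed and minimized over orderings.

```python
# Toy sketch: pick the goal ordering with the lowest expected cost.
import itertools

def expected_cost(goals):
    """Expected work when goals run left to right; a goal runs only if all
    earlier goals in the ordering succeeded."""
    total, p_reach = 0.0, 1.0
    for cost, p_success in goals:
        total += p_reach * cost
        p_reach *= p_success
    return total

goals = [(100.0, 0.9), (5.0, 0.1), (20.0, 0.5)]   # (estimated cost, success probability)
best = min(itertools.permutations(goals), key=expected_cost)
print(best, expected_cost(best))   # the cheap, highly selective goal runs first
```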
Bayesian analysis of non-homogeneous Markov chains: application to mental health data.
Sung, Minje; Soyer, Refik; Nhan, Nguyen
2007-07-10
In this paper we present a formal treatment of non-homogeneous Markov chains by introducing a hierarchical Bayesian framework. Our work is motivated by the analysis of correlated categorical data which arise in the assessment of psychiatric treatment programs. In our development, we introduce a Markovian structure to describe the non-homogeneity of transition patterns. In doing so, we introduce a logistic regression set-up for Markov chains and incorporate covariates in our model. We present a Bayesian model using Markov chain Monte Carlo methods and develop inference procedures to address issues encountered in the analyses of data from psychiatric treatment programs. Our model and inference procedures are applied to real data from a psychiatric treatment study. Copyright 2006 John Wiley & Sons, Ltd.
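A minimal sketch of the modeling idea, assuming a two-state chain with a logistic link; the coefficients and covariates below are made up for illustration and are not taken from the paper.

```python
# Sketch: a two-state non-homogeneous Markov chain with logistic transitions.
import numpy as np

def transition_matrix(x, beta):
    """2x2 transition matrix whose rows depend on the covariate vector x."""
    P = np.zeros((2, 2))
    for i in range(2):
        p_stay = 1.0 / (1.0 + np.exp(-(beta[i] @ x)))   # row-specific logistic link
        P[i] = [p_stay, 1.0 - p_stay]
    return P

beta = np.array([[0.5, -0.2], [-1.0, 0.3]])   # hypothetical regression coefficients
x_t = np.array([1.0, 2.0])                    # intercept plus one covariate at time t
print(transition_matrix(x_t, beta))
```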
Psychological Education for Visually Impaired Children.
ERIC Educational Resources Information Center
Locke, Don C.; Gerler, Edwin R., Jr.
1979-01-01
The study investigated the effects of two psychological education programs (Developing Understanding of Self and Others--DUSO, and Human Development Program--HDP or Magic Circle) on the affective growth of 42 visually impaired children in grades kindergarten through 3. (Author/SBH)
Gilaie-Dotan, Sharon; Doron, Ravid
2017-06-01
Visual categories are associated with eccentricity biases in high-order visual cortex: Faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common objects perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9 y.o. boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of central visual field and require only small eye movements to be properly processed, common objects typically prevail in mid-peripheral visual field and rely on longer-distance voluntary eye movements as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity orderly manner, we hypothesize that propagation of non-foveal information to mid and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Stimulation of functional vision in children with perinatal brain damage.
Alimović, Sonja; Mejaski-Bosnjak, Vlatka
2011-01-01
Cerebral visual impairment (CVI) is one of the most common causes of bilateral visual loss, and it frequently occurs due to perinatal brain injury. Vision in early life has a great impact on the acquisition of basic comprehensions, which are fundamental for further development. Therefore, early detection of visual problems and early intervention are necessary. The aim of the present study is to determine the specific visual functioning of children with perinatal brain damage and the influence of visual stimulation on the development of functional vision at an early age. We initially assessed 30 children with perinatal brain damage up to 3 years of age, who were referred to our pediatric low vision cabinet in "Little House" by child neurologists and ophthalmologists. The type and degree of visual impairment was determined according to a functional vision assessment of each child. On the basis of those assessments, different kinds of visual stimulation were carried out with children identified as having a visual impairment. Through the visual stimulation program, some of the children were stimulated with light stimuli, some with different materials under ultraviolet (UV) light, and some with bright-color and high-contrast materials. Children were also involved in a program of early stimulation of overall sensory-motor development. Goals and methods of therapy were determined individually, based on observation of each child's possibilities and needs. After one year of the program, reassessment was done. Results for visual functions and functional vision were compared to evaluate the improvement in vision development. These results showed that there was significant improvement in functional vision, especially in visual attention and visual communication.
Kim, Eun Hwi; Suh, Soon Rim
2017-06-01
This study was conducted to verify the effects of a memory and visual-motor integration program for older adults based on self-efficacy theory. A non-equivalent control group pretest-posttest design was implemented in this quasi-experimental study. The participants were 62 older adults from senior centers and older adult welfare facilities in D and G city (Experimental group=30, Control group=32). The experimental group took part in a 12-session memory and visual-motor integration program over 6 weeks. Data regarding memory self-efficacy, memory, visual-motor integration, and depression were collected from July to October of 2014 and analyzed with independent t-test and Mann-Whitney U test using PASW Statistics (SPSS) 18.0 to determine the effects of the interventions. Memory self-efficacy (t=2.20, p=.031), memory (Z=-2.92, p=.004), and visual-motor integration (Z=-2.49, p=.013) increased significantly in the experimental group as compared to the control group. However, depression (Z=-0.90, p=.367) did not decrease significantly. This program is effective for increasing memory, visual-motor integration, and memory self-efficacy in older adults. Therefore, it can be used to improve cognition and prevent dementia in older adults. © 2017 Korean Society of Nursing Science
Tsai, Li-Ting; Hsu, Jung-Lung; Wu, Chien-Te; Chen, Chia-Ching; Su, Yu-Chin
2016-01-01
The purpose of this study was to investigate the effectiveness of visual rehabilitation of a computer-based visual stimulation (VS) program combining checkerboard pattern reversal (passive stimulation) with oddball stimuli (attentional modulation) for improving the visual acuity (VA) of visually impaired (VI) children and children with amblyopia and additional developmental problems. Six children (three females, three males; mean age = 3.9 ± 2.3 years) with impaired VA caused by deficits along the anterior and/or posterior visual pathways were recruited. Participants received eight rounds of VS training (two rounds per week) of at least eight sessions per round. Each session consisted of stimulation with 200 or 300 pattern reversals. Assessments of VA (assessed with the Lea symbol VA test or Teller VA cards), visual evoked potential (VEP), and functional vision (assessed with the Chinese-version Functional Vision Questionnaire, FVQ) were carried out before and after the VS program. Significant gains in VA were found after the VS training [VA = 1.05 logMAR ± 0.80 to 0.61 logMAR ± 0.53, Z = –2.20, asymptotic significance (2-tailed) = 0.028]. No significant changes were observed in the FVQ assessment [92.8 ± 12.6 to 100.8 ± 15.4, Z = –1.46, asymptotic significance (2-tailed) = 0.144]. VEP measurement showed improvement in P100 latency and amplitude or integration of the waveform in two participants. Our results indicate that a computer-based VS program with passive checkerboard stimulation, oddball stimulus design, and interesting auditory feedback could be considered as a potential intervention option to improve the VA of a wide age range of VI children and children with impaired VA combined with other neurological disorders. PMID:27148014
Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias
2011-03-21
Bioinformatics visualization tools are often not robust enough to support biomedical specialists' complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist's exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users "bundles of features" consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
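The optimal-combination baseline such a model builds on can be sketched as reliability-weighted averaging of Gaussian cues; the noise variances below are illustrative values, not those of the study.

```python
# Reliability-weighted fusion of two Gaussian cue estimates (illustrative values).
def fuse(mu_a, var_a, mu_v, var_v):
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)   # precision weight on audition
    mu = w_a * mu_a + (1 - w_a) * mu_v
    var = 1 / (1 / var_a + 1 / var_v)             # fused estimate is more reliable
    return mu, var

print(fuse(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0))   # the less noisy visual cue dominates
```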
Gravity as a Strong Prior: Implications for Perception and Action.
Jörges, Björn; López-Moliner, Joan
2017-01-01
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality), or visually and bodily (space travel). As visually and bodily perceived gravity as well as an interiorized representation of earth gravity are involved in a series of tasks, such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-gravity-discrepant conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". As with other strong priors, the gravity prior has developed through years and years of experience in an earth gravity environment. For this reason, the reliability of this representation is extremely high and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
Saunders, Jeffrey A.
2014-01-01
Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
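The weighting logic in the cue-conflict conditions can be sketched in a few lines (the numbers are hypothetical): with a ±5° optic-flow offset, the inferred visual weight is roughly the observed heading bias divided by the size of the conflict.

```python
# Inferring relative visual weight from a cue-conflict bias (hypothetical numbers).
conflict_deg = 5.0            # optic flow offset from the true heading, in degrees
observed_bias_deg = 1.2       # measured shift in walking direction
visual_weight = observed_bias_deg / conflict_deg
print(f"inferred visual weight: {visual_weight:.0%}")   # about 24%
```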
Bayesian quantification of sensory reweighting in a familial bilateral vestibular disorder (DFNA9).
Alberts, Bart B G T; Selen, Luc P J; Verhagen, Wim I M; Pennings, Ronald J E; Medendorp, W Pieter
2018-03-01
DFNA9 is a rare progressive autosomal dominantly inherited vestibulo-cochlear disorder, resulting in a homogeneous group of patients with hearing impairment and bilateral vestibular function loss. These patients suffer from a deteriorated sense of spatial orientation, leading to balance problems in darkness, especially on irregular surfaces. Both behavioral and functional imaging studies suggest that the remaining sensory cues could compensate for the loss of vestibular information. A thorough model-based quantification of this reweighting in individual patients is, however, missing. Here we psychometrically examined the individual patient's sensory reweighting of these cues after complete vestibular loss. We asked a group of DFNA9 patients and healthy control subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a rod presented within an oriented square frame (rod-in-frame task) in three different head-on-body tilt conditions. Our results show a cyclical frame-induced bias in perceived gravity direction across a 90° range of frame orientations. The magnitude of this bias was significantly increased in the patients compared with the healthy control subjects. Response variability, which increased with head-on-body tilt, was also larger for the patients. Reverse engineering of the underlying signal properties, using Bayesian inference principles, suggests a reweighting of sensory signals, with an increase in visual weight of 20-40% in the patients. Our approach of combining psychophysics and Bayesian reverse engineering is the first to quantify the weights associated with the different sensory modalities at an individual patient level, which could make it possible to develop personal rehabilitation programs based on the patient's sensory weight distribution. NEW & NOTEWORTHY It has been suggested that patients with vestibular deficits can compensate for this loss by increasing reliance on other sensory cues, although an actual quantification of this reweighting is lacking. We combine experimental psychophysics with a reverse engineering approach based on Bayesian inference principles to quantify sensory reweighting in individual vestibular patients. We discuss the suitability of this approach for developing personal rehabilitation programs based on the patient's sensory weight distribution.
Negro, Juan J.; Finlayson, Clive; Galván, Ismael
2018-01-01
Paleo-colour scientists have recently made the transition from describing melanin-based colouration in fossil specimens to inferring life-history traits of the species involved. Two such cases correspond to counter-shaded dinosaurs: dark-coloured due to melanins dorsally, and light-coloured ventrally. We believe that colour reconstruction of fossils based on the shape of preserved microstructures—the majority of paleo-colour studies involve melanin granules—is not without risks. In addition, animals with contrasting dorso-ventral colouration may be under different selection pressures beyond the need for camouflage, including, for instance, visual communication or ultraviolet (UV) protection. Melanin production is costly, and animals may invest less in areas of the integument where pigments are less needed. In addition, melanocytes exposed to UV radiation produce more melanin than unexposed melanocytes. Pigment economization may thus explain the colour pattern of some counter-shaded animals, including extinct species. Even in well-studied extant species, their diversity of hues and patterns is far from being understood; inferring colours and their functions in species only known from one or few specimens from the fossil record should be exerted with special prudence. PMID:29360744
Personalized microbial network inference via co-regularized spectral clustering.
Imangaliyev, Sultan; Keijser, Bart; Crielaard, Wim; Tsivtsivadze, Evgeni
2015-07-15
We use the Human Microbiome Project (HMP) cohort (Peterson et al., 2009) to infer personalized oral microbial networks of healthy individuals. To determine clustering of individuals with similar microbial profiles, a co-regularized spectral clustering algorithm is applied to the dataset. For each cluster discovered, we compute co-occurrence relationships among the microbial species that determine the microbial network per cluster of individuals. The results of our study suggest that there are several differences in microbial interactions at the personalized network level in healthy oral samples acquired from various niches. Based on the results of co-regularized spectral clustering, we discover two groups of individuals with different topology of their microbial interaction network. The results of microbial network inference suggest that niche-wise interactions are different in these two groups. Our study shows that healthy individuals have different microbial clusters according to their oral microbiota. Such personalized microbial networks open a better understanding of the microbial ecology of healthy oral cavities and new possibilities for future targeted medication. The scripts, written in scientific Python and in Matlab, which were used for network visualization, are provided for download on the website http://learning-machines.com/. Copyright © 2015 Elsevier Inc. All rights reserved.
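As a point of reference, a plain single-view spectral clustering of abundance profiles can be sketched as below; the co-regularized, multi-view variant used in the paper is not shown, and the data here are random stand-ins for microbial profiles.

```python
# Single-view spectral clustering of (random stand-in) microbial profiles.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
profiles = rng.random((50, 200))              # 50 individuals x 200 taxa
model = SpectralClustering(n_clusters=2, affinity="rbf", random_state=0)
labels = model.fit_predict(profiles)
print(np.bincount(labels))                    # sizes of the two groups
```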
Strick, Madelijn; de Bruin, Hanka L; de Ruiter, Linde C; Jonkers, Wouter
2015-03-01
Three experiments among university students (N = 372) investigated the persuasive power of moving (i.e., intensely emotional and "chills"-evoking) music in audio-visual advertising. Although advertisers typically aim to increase elaborate processing of the message, these studies illustrate that the persuasive effect of moving music is based on increased narrative transportation ("getting lost" in the ad's story), which reduces critical processing. In Experiment 1, moving music increased transportation and some behavioral intentions (e.g., to donate money). Experiment 2 experimentally increased the salience of manipulative intent of the advertiser, and showed that moving music reduces inferences of manipulative intent, leading in turn to increased behavioral intentions. Experiment 3 tested boundary effects, and showed that moving music fails to increase behavioral intentions when the salience of manipulative intent is either extremely high (which precludes transportation) or extremely low (which precludes reduction of inferences of manipulative intent). Moving music did not increase memory performance, beliefs, and explicit attitudes, suggesting that the influence is affect-based rather than cognition-based. Together, these studies illustrate that moving music reduces inferences of manipulation and increases behavioral intentions by transporting viewers into the story of the ad. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Distribution of Plasmoids in Post-Coronal Mass Ejection Current Sheets
NASA Astrophysics Data System (ADS)
Bhattacharjee, A.; Guo, L.; Huang, Y.
2013-12-01
Recently, the fragmentation of a current sheet in the high-Lundquist-number regime caused by the plasmoid instability has been proposed as a possible mechanism for fast reconnection. In this work, we investigate this scenario by comparing the distribution of plasmoids obtained from Large Angle and Spectrometric Coronagraph (LASCO) observational data of a coronal mass ejection event with a resistive magnetohydrodynamic simulation of a similar event. The LASCO/C2 data are analyzed using visual inspection, whereas the numerical data are analyzed using both visual inspection and a more precise topological method. Contrasting the observational data with numerical data analyzed with both methods, we identify a major limitation of the visual inspection method, due to the difficulty in resolving smaller plasmoids. This result raises questions about reports of log-normal distributions of plasmoids and other coherent features in the recent literature. Based on nonlinear scaling relations of the plasmoid instability, we infer a lower bound on the current sheet width, assuming the underlying mechanism of current sheet broadening is resistive diffusion.
Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing
2018-04-26
One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.
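One simple way to quantify the population sparseness reported here is the fraction of neurons whose response to a given image exceeds a "strong response" criterion; the sketch below uses random placeholder responses and an assumed threshold, not the study's data or its exact definition.

```python
# Fraction of neurons responding "strongly" to each image (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
responses = rng.exponential(scale=1.0, size=(1000, 50))   # neurons x images
threshold = responses.mean() + 3 * responses.std()        # assumed "strong" criterion
fraction_strong = (responses > threshold).mean(axis=0)    # per-image population sparseness
print(fraction_strong.mean())
```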
Sequential sensory and decision processing in posterior parietal cortex
Ibos, Guilhem; Freedman, David J
2017-01-01
Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). DOI: http://dx.doi.org/10.7554/eLife.23743.001 PMID:28418332
GAC: Gene Associations with Clinical, a web based application.
Zhang, Xinyan; Rupji, Manali; Kowalski, Jeanne
2017-01-01
We present GAC, a Shiny R-based tool for interactive visualization of clinical associations based on high-dimensional data. The tool provides a web-based suite to perform supervised principal component analysis (SuperPC), an approach that uses high-dimensional data, such as gene expression, combined with clinical data to infer clinical associations. We extended the approach to address binary outcomes, in addition to continuous and time-to-event data, in our package, thereby increasing the use and flexibility of SuperPC. Additionally, the tool provides an interactive visualization for summarizing results based on a forest plot for both binary and time-to-event data. In summary, the GAC suite of tools provides a one-stop shop for conducting statistical analysis to identify and visualize the association between a clinical outcome of interest and high-dimensional data types, such as genomic data. Our GAC package has been implemented in R and is available via http://shinygispa.winship.emory.edu/GAC/. The developmental repository is available at https://github.com/manalirupji/GAC.
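The SuperPC idea underlying GAC can be sketched roughly as a univariate screening step followed by a principal component of the retained features; the synthetic data and the correlation cutoff below are assumptions for illustration (GAC itself is an R/Shiny application, not this Python sketch).

```python
# Rough supervised principal component sketch (synthetic data, assumed cutoff).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))                       # samples x genes
y = X[:, :10].sum(axis=1) + rng.normal(size=100)      # outcome driven by 10 genes

corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.abs(corr) > 0.3                             # univariate screening step
Xc = X[:, keep] - X[:, keep].mean(axis=0)
U, s, _ = np.linalg.svd(Xc, full_matrices=False)
super_pc = U[:, 0] * s[0]                             # first supervised principal component
print(np.corrcoef(super_pc, y)[0, 1])                 # association with the outcome
```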
Etchemendy, Pablo E; Spiousas, Ignacio; Vergara, Ramiro
2018-01-01
In a recently published work by our group [ Scientific Reports, 7, 7189 (2017)], we performed experiments of visual distance perception in two dark rooms with extremely different reverberation times: one anechoic ( T ∼ 0.12 s) and the other reverberant ( T ∼ 4 s). The perceived distance of the targets was systematically greater in the reverberant room when contrasted to the anechoic chamber. Participants also provided auditorily perceived room-size ratings which were greater for the reverberant room. Our hypothesis was that distance estimates are affected by room size, resulting in farther responses for the room perceived larger. Of much importance to the task was the subjects' ability to infer room size from reverberation. In this article, we report a postanalysis showing that participants having musical expertise were better able to extract and translate reverberation cues into room-size information than nonmusicians. However, the degree to which musical expertise affects visual distance estimates remains unclear.
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.
Endert, A; Fiaux, P; North, C
2012-12-01
Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
Interactive entity resolution in relational data: a visual analytic tool and its evaluation.
Kang, Hyunmo; Getoor, Lise; Shneiderman, Ben; Bilgic, Mustafa; Licamele, Louis
2008-01-01
Databases often contain uncertain and imprecise references to real-world entities. Entity resolution, the process of reconciling multiple references to underlying real-world entities, is an important data cleaning process required before accurate visualization or analysis of the data is possible. In many cases, in addition to noisy data describing entities, there is data describing the relationships among the entities. This relational data is important during the entity resolution process; it is useful both for the algorithms which determine likely database references to be resolved and for visual analytic tools which support the entity resolution process. In this paper, we introduce a novel user interface, D-Dupe, for interactive entity resolution in relational data. D-Dupe effectively combines relational entity resolution algorithms with a novel network visualization that enables users to make use of an entity's relational context for making resolution decisions. Since resolution decisions often are interdependent, D-Dupe facilitates understanding this complex process through animations which highlight combined inferences and a history mechanism which allows users to inspect chains of resolution decisions. An empirical study with 12 users confirmed the benefits of the relational context visualization on the performance of entity resolution tasks in relational data in terms of time as well as users' confidence and satisfaction.
Audio-Visual Perception System for a Humanoid Robotic Head
Viciana-Abad, Raquel; Marfil, Rebeca; Perez-Lorenzo, Jose M.; Bandera, Juan P.; Romero-Garces, Adrian; Reche-Lopez, Pedro
2014-01-01
One of the main issues within the field of social robotics is to endow robots with the ability to direct attention to people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those used in video-conference rooms, and thus, they may incur difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, there is a lack in terms of evaluating the benefits of audio-visual attention mechanisms, compared to only audio or visual approaches, in real scenarios. Most of the tests conducted have been within controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information with a Bayes inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared via considering the technical limitations of unimodal systems. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. PMID:24878593
Zhao, Jian; Glueck, Michael; Breslav, Simon; Chevalier, Fanny; Khan, Azam
2017-01-01
User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.
Think spatial: the representation in mental rotation is nonvisual.
Liesefeld, Heinrich R; Zimmer, Hubert D
2013-01-01
For mental rotation, introspection, theories, and interpretations of experimental results imply a certain type of mental representation, namely, visual mental images. Characteristics of the rotated representation can be examined by measuring the influence of stimulus characteristics on rotational speed. If the amount of a given type of information influences rotational speed, one can infer that it was contained in the rotated representation. In Experiment 1, rotational speed of university students (10 men, 11 women) was found to be influenced exclusively by the amount of represented orientation-dependent spatial-relational information but not by orientation-independent spatial-relational information, visual complexity, or the number of stimulus parts. As information in mental-rotation tasks is initially presented visually, this finding implies that at some point during each trial, orientation-dependent information is extracted from visual information. Searching for more direct evidence for this extraction, we recorded the EEG of another sample of university students (12 men, 12 women) during mental rotation of the same stimuli. In an early time window, the observed working memory load-dependent slow potentials were sensitive to the stimuli's visual complexity. Later, in contrast, slow potentials were sensitive to the amount of orientation-dependent information only. We conclude that only orientation-dependent information is contained in the rotated representation. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Visual Impacts of Prescribed Burning on Mixed Conifer and Giant Sequoia Forests
Lin Cotton; Joe R. McBride
1987-01-01
Prescribed burning programs have evolved with little concern for the visual impact of burning and the potential prescribed burning can have in managing the forest scene. Recent criticisms by the public of the prescribed burning program at Sequoia National Park resulted in an outside review of the National Park fire management programs in Sequoia, Kings Canyon, and...
ERIC Educational Resources Information Center
Bruce, Susan; Ferrell, Kay; Luckner, John L.
2016-01-01
This paper presents the essential programming components resulting from a systematic review of research studies, legislation, and policy documents on the topic of administration issues in educational programming for students who are deaf/hard of hearing, visually impaired, or deafblind. It is recommended that educational teams should include a…
JCoDA: a tool for detecting evolutionary selection.
Steinway, Steven N; Dannenfelser, Ruth; Laucius, Christopher D; Hayes, James E; Nayak, Sudhir
2010-05-27
The incorporation of annotated sequence information from multiple related species in commonly used databases (Ensembl, Flybase, Saccharomyces Genome Database, Wormbase, etc.) has increased dramatically over the last few years. This influx of information has provided a considerable amount of raw material for evaluation of evolutionary relationships. To aid in the process, we have developed JCoDA (Java Codon Delimited Alignment) as a simple-to-use visualization tool for the detection of site specific and regional positive/negative evolutionary selection amongst homologous coding sequences. JCoDA accepts user-inputted unaligned or pre-aligned coding sequences, performs a codon-delimited alignment using ClustalW, and determines the dN/dS calculations using PAML (Phylogenetic Analysis Using Maximum Likelihood, yn00 and codeml) in order to identify regions and sites under evolutionary selection. The JCoDA package includes a graphical interface for Phylip (Phylogeny Inference Package) to generate phylogenetic trees, manages formatting of all required file types, and streamlines passage of information between underlying programs. The raw data are output to user configurable graphs with sliding window options for straightforward visualization of pairwise or gene family comparisons. Additionally, codon-delimited alignments are output in a variety of common formats and all dN/dS calculations can be output in comma-separated value (CSV) format for downstream analysis. To illustrate the types of analyses that are facilitated by JCoDA, we have taken advantage of the well studied sex determination pathway in nematodes as well as the extensive sequence information available to identify genes under positive selection, examples of regional positive selection, and differences in selection based on the role of genes in the sex determination pathway. JCoDA is a configurable, open source, user-friendly visualization tool for performing evolutionary analysis on homologous coding sequences. JCoDA can be used to rapidly screen for genes and regions of genes under selection using PAML. It can be freely downloaded at http://www.tcnj.edu/~nayaklab/jcoda.
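The sliding-window visualization JCoDA produces can be approximated as a moving average over per-codon dN/dS values; the values below are random placeholders standing in for PAML output.

```python
# Sliding-window average of per-codon dN/dS values (placeholders for PAML output).
import numpy as np

rng = np.random.default_rng(0)
dnds = rng.gamma(shape=1.0, scale=0.5, size=300)          # one value per codon
window = 20
smoothed = np.convolve(dnds, np.ones(window) / window, mode="valid")
print(smoothed.max())   # windows well above 1 would suggest regional positive selection
```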
Visualization of JPEG Metadata
NASA Astrophysics Data System (ADS)
Malik Mohamad, Kamaruddin; Deris, Mustafa Mat
There is a lot more information embedded in a JPEG image than just the graphics. Visualization of its metadata would benefit digital forensic investigators by letting them view embedded data, including in corrupted images where no graphics can be displayed, in order to assist in evidence collection for cases such as child pornography or steganography. Tools such as metadata readers, editors, and extraction tools are already available, but they mostly focus on visualizing the attribute information of the JPEG Exif segment. However, none visualize metadata by consolidating a marker summary, the header structure, the Huffman tables, and the quantization tables in a single program. In this paper, metadata visualization is done by developing a program that is able to summarize all existing markers, the header structure, the Huffman tables, and the quantization tables in a JPEG file. The result shows that visualization of metadata makes it easier to view the hidden information within a JPEG.
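The marker summary such a tool consolidates rests on walking the JPEG segment structure: each segment starts with a 0xFF marker byte and, for most markers, carries a two-byte big-endian length. The sketch below is a minimal scanner of this kind; the file name is a placeholder.

```python
# Minimal JPEG marker scanner (illustrative sketch, not the paper's program).
import struct

STANDALONE = {0xD8, 0xD9} | set(range(0xD0, 0xD8))   # SOI, EOI, RSTn carry no length

def list_markers(path):
    with open(path, "rb") as f:
        data = f.read()
    i = 0
    while i < len(data) - 1:
        if data[i] != 0xFF or data[i + 1] in (0x00, 0xFF):
            i += 1                                    # skip stuffed/fill bytes
            continue
        marker = data[i + 1]
        if marker in STANDALONE:
            print(f"0xFF{marker:02X} at offset {i}")
            i += 2
        else:
            (length,) = struct.unpack(">H", data[i + 2:i + 4])
            print(f"0xFF{marker:02X} at offset {i}, segment length {length}")
            i += 2 + length
        if marker == 0xDA:                            # start of scan: entropy-coded data follows
            break

list_markers("example.jpg")
```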
Inferring on the Intentions of Others by Hierarchical Bayesian Learning
Diaconescu, Andreea O.; Mathys, Christoph; Weber, Lilian A. E.; Daunizeau, Jean; Kasper, Lars; Lomakina, Ekaterina I.; Fehr, Ernst; Stephan, Klaas E.
2014-01-01
Inferring on others' (potentially time-varying) intentions is a fundamental problem during many social transactions. To investigate the underlying mechanisms, we applied computational modeling to behavioral data from an economic game in which 16 pairs of volunteers (randomly assigned to “player” or “adviser” roles) interacted. The player performed a probabilistic reinforcement learning task, receiving information about a binary lottery from a visual pie chart. The adviser, who received more predictive information, issued an additional recommendation. Critically, the game was structured such that the adviser's incentives to provide helpful or misleading information varied in time. Using a meta-Bayesian modeling framework, we found that the players' behavior was best explained by the deployment of hierarchical learning: they inferred upon the volatility of the advisers' intentions in order to optimize their predictions about the validity of their advice. Beyond learning, volatility estimates also affected the trial-by-trial variability of decisions: participants were more likely to rely on their estimates of advice accuracy for making choices when they believed that the adviser's intentions were presently stable. Finally, our model of the players' inference predicted the players' interpersonal reactivity index (IRI) scores, explicit ratings of the advisers' helpfulness and the advisers' self-reports on their chosen strategy. Overall, our results suggest that humans (i) employ hierarchical generative models to infer on the changing intentions of others, (ii) use volatility estimates to inform decision-making in social interactions, and (iii) integrate estimates of advice accuracy with non-social sources of information. The Bayesian framework presented here can quantify individual differences in these mechanisms from simple behavioral readouts and may prove useful in future clinical studies of maladaptive social cognition. PMID:25187943
2015-09-02
human behavior. In this project, we hypothesized that visual memory of past motion trajectories may be used for selecting future behavior. In other... "Decoding sequence of actions using fMRI", Society for Neuroscience Annual Meeting, San Diego, CA, USA, Nov 9-13 2013 (abstract only); 3. Hansol Choi, Dae-Shik Kim, "Planning as inference in a Hierarchical Predictive Memory", Proceedings of International Conference on Neural Information Processing
Visualizing and Writing Video Programs.
ERIC Educational Resources Information Center
Floyd, Steve
1979-01-01
Reviews 10 steps which serve as guidelines to simplify the creative process of producing a video training program: (1) audience analysis, (2) task analysis, (3) definition of objective, (4) conceptualization, (5) visualization, (6) storyboard, (7) video storyboard, (8) evaluation, (9) revision, and (10) production. (LRA)
Neurolinguistic Programming Examined: Imagery, Sensory Mode, and Communication.
ERIC Educational Resources Information Center
Fromme, Donald K.; Daniell, Jennifer
1984-01-01
Tested Neurolinguistic Programming (NLP) assumptions by examining intercorrelations among response times of students (N=64) for extracting visual, auditory, and kinesthetic information from alphabetic images. Large positive intercorrelations were obtained, the only outcome not compatible with NLP. Good visualizers were significantly better in…
A Catalog of Quasar Properties from the Baryon Oscillation Spectroscopic Survey
NASA Astrophysics Data System (ADS)
Chen, Zhi-Fu; Pan, Da-Sheng; Pang, Ting-Ting; Huang, Yong
2018-01-01
Using the quasars with z_em < 0.9 from the Baryon Oscillation Spectroscopic Survey, we measure the spectral characteristics, including continuum and emission lines, around the Hβ and Hα spectral regions, which are lacking in Quasar Data Release 12 (DR12Q). We estimate the virial black hole mass from broad Hα and/or Hβ, and infer quasar redshifts from the [O III] λ5007 emission lines. All the measurements and derived quantities are publicly available. A comparison between the [O III] λ5007 redshifts and the visual inspection redshifts included in DR12Q indicates that the visual inspection redshifts are robust. We find that the full widths at half maximum of the broad Hα are consistent with those of the broad Hβ, while both the equivalent widths and line luminosities of the broad Hα are obviously larger than the corresponding quantities of the broad Hβ. We also find that there is an obvious systematic offset between the Hβ- and Hα-based masses if they are inferred from the empirical relationships in the literature. Using our large quasar sample, we have improved the Hβ and Hα based mass estimators by minimizing the difference between the Hβ- and Hα-based masses. For the black hole mass estimator (Equation (1)), we find that the coefficients (a, b) = (7.00, 0.50) for Hα and (a, b) = (6.96, 0.50) for Hβ are the best choices.
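The abstract cites its "Equation (1)" without reproducing it. A plausible generic form, assuming the standard single-epoch virial estimator used in this literature (the exact luminosity term and normalizations are assumptions here, not taken from the paper), is:

```latex
\log\!\left(\frac{M_{\mathrm{BH}}}{M_\odot}\right)
  = a + b\,\log\!\left(\frac{L}{10^{44}\,\mathrm{erg\,s^{-1}}}\right)
  + 2\,\log\!\left(\frac{\mathrm{FWHM}}{1000\,\mathrm{km\,s^{-1}}}\right)
```

where L is the relevant continuum or broad-line luminosity and FWHM is that of the broad Hβ or Hα line; the fitted coefficients (a, b) quoted above would then slot into a relation of this form.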
CyberMedVPS: visual programming for development of simulators.
Morais, Aline M; Machado, Liliane S
2011-01-01
Computer applications based on Virtual Reality (VR) have become prominent in medical training and teaching because of their ability to simulate realistic situations in which users can practice skills and decision making. Frameworks for developing such simulators are available, but their use demands programming knowledge, which makes interaction difficult for non-programmer users. To address this problem, we present CyberMedVPS, a graphical module for the CyberMed framework that implements Visual Programming concepts to allow non-programmer professionals of the medical field to develop simulators.
Teaching Students with Visual Impairments. Programming for Students with Special Needs. No. 5.
ERIC Educational Resources Information Center
Alberta Dept. of Education, Edmonton. Special Education Branch.
This resource guide offers suggestions and resources to help provide successful school experiences for students who are blind or visually impaired. Individual sections address: (1) the nature of visual impairment, the specific needs and expectations of students with visual impairment, and the educational implications of visual impairment; (2)…
2013-08-20
The Department of Veterans Affairs (VA) is amending its VA Health Professional Scholarship Program (HPSP) regulations. VA is also establishing regulations for a new program, the Visual Impairment and Orientation and Mobility Professional Scholarship Program (VIOMPSP). These regulations comply with and implement sections 302 and 603 of the Caregivers and Veterans Omnibus Health Services Act of 2010 (the 2010 Act). Section 302 of the 2010 Act established the VIOMPSP, which authorizes VA to provide financial assistance to certain students seeking a degree in visual impairment or orientation or mobility, in order to increase the supply of qualified blind rehabilitation specialists for VA and the United States. Section 603 of the 2010 Act reauthorized and modified HPSP, a program that provides scholarships for education or training in certain health care occupations.
GPU Computing in Bayesian Inference of Realized Stochastic Volatility Model
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2015-01-01
The realized stochastic volatility (RSV) model, which utilizes the realized volatility as additional information, has been proposed to infer the volatility of financial time series. We consider the Bayesian inference of the RSV model by the Hybrid Monte Carlo (HMC) algorithm. The HMC algorithm can be parallelized and thus performed on the GPU for speedup. The GPU code is developed with CUDA Fortran. We compare the computational time in performing the HMC algorithm on the GPU (GTX 760) and the CPU (Intel i7-4770, 3.4 GHz) and find that the GPU can be up to 17 times faster than the CPU. We also code the program with OpenACC and find that appropriate coding can achieve a similar speedup to CUDA Fortran.
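As a language-agnostic reminder of what an HMC kernel computes, a minimal Python sketch for a toy target density follows; it is not the authors' CUDA Fortran or OpenACC implementation of the RSV model, and the step size, trajectory length, and toy Gaussian target are illustrative assumptions.

import numpy as np

def hmc_step(x, logp, grad_logp, step=0.1, n_leapfrog=20, rng=None):
    """One Hamiltonian Monte Carlo update for a generic log-density (toy sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    p = rng.standard_normal(x.shape)                 # resample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new += 0.5 * step * grad_logp(x_new)           # leapfrog: half momentum step
    for _ in range(n_leapfrog - 1):
        x_new += step * p_new                        # full position step
        p_new += step * grad_logp(x_new)             # full momentum step
    x_new += step * p_new
    p_new += 0.5 * step * grad_logp(x_new)           # final half momentum step
    h_old = -logp(x) + 0.5 * np.dot(p, p)            # Hamiltonian before the trajectory
    h_new = -logp(x_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(min(0.0, h_old - h_new)):   # Metropolis accept/reject
        return x_new
    return x

# toy target: a standard normal in 2 dimensions
logp = lambda x: -0.5 * np.dot(x, x)
grad_logp = lambda x: -x
rng = np.random.default_rng(1)
x = np.zeros(2)
samples = []
for _ in range(2000):
    x = hmc_step(x, logp, grad_logp, rng=rng)
    samples.append(x)
print(np.mean(samples, axis=0).round(2), np.std(samples, axis=0).round(2))

In the RSV application the same accept/reject structure applies, but logp and its gradient come from the realized stochastic volatility posterior, and the arithmetic inside the leapfrog loop is presumably what is mapped onto GPU threads.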
Lemmens, Karen; De Bie, Tijl; Dhollander, Thomas; De Keersmaecker, Sigrid C; Thijs, Inge M; Schoofs, Geert; De Weerdt, Ami; De Moor, Bart; Vanderleyden, Jos; Collado-Vides, Julio; Engelen, Kristof; Marchal, Kathleen
2009-01-01
We present DISTILLER, a data integration framework for the inference of transcriptional module networks. Experimental validation of predicted targets for the well-studied fumarate nitrate reductase regulator showed the effectiveness of our approach in Escherichia coli. In addition, the condition dependency and modularity of the inferred transcriptional network was studied. Surprisingly, the level of regulatory complexity seemed lower than that which would be expected from RegulonDB, indicating that complex regulatory programs tend to decrease the degree of modularity.
A general methodology for maximum likelihood inference from band-recovery data
Conroy, M.J.; Williams, B.K.
1984-01-01
A numerical procedure is described for obtaining maximum likelihood estimates and associated maximum likelihood inference from band-recovery data. The method is used to illustrate previously developed one-age-class band-recovery models, and is extended to new models, including the analysis with a covariate for survival rates and variable-time-period recovery models. Extensions to R-age-class band-recovery, mark-recapture models, and twice-yearly marking are discussed. A FORTRAN program provides computations for these models.
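The FORTRAN program itself is not shown. The Python sketch below illustrates the general recipe: write the cell probabilities of a recovery matrix as functions of survival and recovery parameters, then maximize the resulting likelihood numerically. The constant-parameter model, the logit transform, and the small recovery matrix are illustrative assumptions, not the models or data of the paper.

import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, recoveries, releases):
    """Constant-parameter band-recovery model: a band released in year i is
    recovered in year j >= i with probability S**(j - i) * f (sketch only)."""
    s, f = 1.0 / (1.0 + np.exp(-params))             # logit scale -> (0, 1)
    n_years = recoveries.shape[1]
    ll = 0.0
    for i, n_released in enumerate(releases):
        probs = np.array([s ** (j - i) * f if j >= i else 0.0 for j in range(n_years)])
        p_never = max(1.0 - probs.sum(), 1e-12)      # probability of never being recovered
        counts = recoveries[i]
        ll += np.sum(counts * np.log(probs + 1e-12))
        ll += (n_released - counts.sum()) * np.log(p_never)
    return -ll

# hypothetical recovery matrix: rows = release year, columns = recovery year
recoveries = np.array([[12, 7, 4], [0, 10, 6], [0, 0, 11]])
releases = np.array([200, 180, 190])
fit = minimize(neg_log_lik, x0=np.zeros(2), args=(recoveries, releases))
s_hat, f_hat = 1.0 / (1.0 + np.exp(-fit.x))
print(f"survival ~ {s_hat:.2f}, recovery rate ~ {f_hat:.2f}")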
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balakumar, B J; Chavez - Alarcon, Ramiro; Shu, Fangjun
The aerodynamics of a flight-worthy, radio controlled ornithopter is investigated using a combination of Particle-Image Velocimetry (PIV), load cell measurements, and high-speed photography of smoke visualizations. The lift and thrust forces of the ornithopter are measured at various flow speeds, flapping frequencies and angles of attack to characterize the flight performance. These direct force measurements are then compared with forces estimated using control volume analysis on PIV data. High-speed photography of smoke streaks is used to visualize the evolution of leading edge vortices, and to qualitatively infer the effect of wing deformation on the net downwash. Vortical structures in the wake are compared to previous studies on root flapping, and direct measurements of flapping efficiency are used to argue that the current ornithopter operates sub-optimally in converting the input energy into propulsive work.
Chang, Rong; Little, Todd D
2018-06-01
In this article, we review three innovative methods: multiform protocols, visual analog scaling, and the retrospective pretest-posttest design that can be used in evaluation research. These three techniques have been proposed for decades, but unfortunately, they are still not utilized readily in evaluation research. Our goal is to familiarize researchers with these underutilized research techniques that could reduce personnel effort and costs for data collection while producing better inferences for a study. We begin by discussing their applications and unique features. We then discuss each technique's strengths and limitations and offer practical tips on how to better implement these methods in evaluation research. We then showcase two recent empirical studies that implement these methods in real-world evaluation research applications.
Visual implant elastomer mark retention through metamorphosis in amphibian larvae
Campbell Grant, Evan H.
2008-01-01
Questions in population ecology require the study of marked animals, and marks are assumed to be permanent and not overlooked by observers. I evaluated retention through metamorphosis of visual implant elastomer marks in larval salamanders and frogs and assessed error in observer identification of these marks. I found 1) individual marks were not retained in larval wood frogs (Rana sylvatica), whereas only small marks were likely to be retained in larval salamanders (Eurycea bislineata), and 2) observers did not always correctly identify marked animals. Evaluating the assumptions of marking protocols is important in the design phase of a study so that correct inference can be made about the population processes of interest. This guidance should be generally useful to the design of mark–recapture studies, with particular application to studies of larval amphibians.
High-Speed Atomic Force Microscopy
NASA Astrophysics Data System (ADS)
Ando, Toshio; Uchihashi, Takayuki; Kodera, Noriyuki
2012-08-01
The technology of high-speed atomic force microscopy (HS-AFM) has reached maturity. HS-AFM enables us to directly visualize the structure and dynamics of biological molecules in physiological solutions at subsecond to sub-100 ms temporal resolution. By this microscopy, dynamically acting molecules such as myosin V walking on an actin filament and bacteriorhodopsin in response to light are successfully visualized. High-resolution molecular movies reveal the dynamic behavior of molecules in action in great detail. Inferences no longer have to be made from static snapshots of molecular structures and from the dynamic behavior of optical markers attached to biomolecules. In this review, we first describe theoretical considerations for the highest possible imaging rate, then summarize techniques involved in HS-AFM and highlight recent imaging studies. Finally, we briefly discuss future challenges to explore.
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
Sapey-Triomphe, Laurie-Anne; Sonié, Sandrine; Hénaff, Marie-Anne; Mattout, Jérémie; Schmitz, Christina
2018-04-13
The learning-style theory of Autism Spectrum Disorders (ASD) (Qian, Lipkin, Frontiers in Human Neuroscience 5:77, 2011) states that ASD individuals differ from neurotypics in the way they learn and store information about the environment and its structure. ASD would rather adopt a lookup-table strategy (LUT: memorizing each experience), while neurotypics would favor an interpolation style (INT: extracting regularities to generalize). In a series of visual behavioral tasks, we tested this hypothesis in 20 neurotypical and 20 ASD adults. ASD participants had difficulties using the INT style when instructions were hidden but not when instructions were revealed. Rather than an inability to use rules, ASD would be characterized by a disinclination to generalize and infer such rules.
Rhetorical Consequences of the Computer Society: Expert Systems and Human Communication.
ERIC Educational Resources Information Center
Skopec, Eric Wm.
Expert systems are computer programs that solve selected problems by modelling domain-specific behaviors of human experts. These computer programs typically consist of an input/output system that feeds data into the computer and retrieves advice, an inference system using the reasoning and heuristic processes of human experts, and a knowledge…
An Optimal Algorithm towards Successive Location Privacy in Sensor Networks with Dynamic Programming
NASA Astrophysics Data System (ADS)
Zhao, Baokang; Wang, Dan; Shao, Zili; Cao, Jiannong; Chan, Keith C. C.; Su, Jinshu
In wireless sensor networks, preserving location privacy under successive inference attacks is extremely critical. Although this problem is NP-complete in general cases, we propose a dynamic programming based algorithm and prove it is optimal in special cases where the correlation only exists between p immediately adjacent observations.
The Effect of the Government-Subsidized Student Loan Program on College Students in China
ERIC Educational Resources Information Center
Cheng, Baoyan
2011-01-01
Using an original dataset collected at a Chinese university and adopting a difference-in-differences research design, this study draws causal inferences regarding the effect of the Government-Subsidized Student Loan Program (GSSLP) on financially needy students at Chinese higher education institutions. Specifically, this study finds that the…
Yang, Kuang-Tao; Yang, Jen-Hung
2013-10-25
The effect of visual arts interventions on the development of empathy has not been quantitatively investigated. A study was conducted on the effect of a visual arts-based program on the scores of the Jefferson Scale for Physician Empathy (JSPE). A total of 110 clerks (n = 92) and first-year postgraduate residents (PGY1s) (n = 18) participating in the program were recruited into this study. The 4-hr program covered learning to interpret paintings, interpreting paintings relating to medicine, illness and human suffering, the related topics of humanitarianism and other humanities fields, and values and meaning. The JSPE was completed at the beginning (pretest) and the end (posttest) of the program. There was no significant difference between the pretest and posttest JSPE scores. The average pretest score was lower in the subgroup of PGY1s than in the subgroup of clerks (p = 0.0358). An increased but not significantly higher mean posttest JSPE score was noted for the subgroup of PGY1s. Neither the females nor the males had higher posttest JSPE scores than pretest scores. Although using a structured visual arts-based program as an intervention may be useful for enhancing medical students' empathy, our results failed to show a positive effect on the JSPE scores for a group of clerks and PGY1s. This suggests that further experimental studies are needed if quantitative evaluation of the effectiveness of visual arts-based programs on empathy is to be investigated.
Software Analyzes Complex Systems in Real Time
NASA Technical Reports Server (NTRS)
2008-01-01
Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.
Improve Problem Solving Skills through Adapting Programming Tools
NASA Technical Reports Server (NTRS)
Shaykhian, Linda H.; Shaykhian, Gholam Ali
2007-01-01
There are numerous ways for engineers and students to become better problem-solvers. The use of command line and visual programming tools can help to model a problem and formulate a solution through visualization. The analysis of problem attributes and constraints provides insight into the scope and complexity of the problem. The visualization aspect of the problem-solving approach tends to make students and engineers more systematic in their thought process and helps them catch errors before proceeding too far in the wrong direction. The problem-solver identifies and defines important terms, variables, rules, and procedures required for solving a problem. Every step required to construct the problem solution can be defined in program commands that produce intermediate output. This paper advocates improved problem solving skills through using a programming tool. MatLab, created by MathWorks, is an interactive numerical computing environment and programming language. It is a matrix-based system that easily lends itself to matrix manipulation and plotting of functions and data. MatLab can be used as an interactive command line or as a sequence of commands that can be saved in a file as a script or named functions. Prior programming experience is not required to use MatLab commands. GNU Octave, part of the GNU project and a free computer program for performing numerical computations, is comparable to MatLab. MatLab visual and command programming are presented here.
Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays
NASA Astrophysics Data System (ADS)
Baek, Sangwook; Lee, Chulhee
2015-03-01
In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
Kim, Sung-Min
2018-01-01
Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
SimITK: rapid ITK prototyping using the Simulink visual programming environment
NASA Astrophysics Data System (ADS)
Dickinson, A. W. L.; Mousavi, P.; Gobbi, D. G.; Abolmaesumi, P.
2011-03-01
The Insight Segmentation and Registration Toolkit (ITK) is a long-established software package used for image analysis, visualization, and image-guided surgery applications. This package is a collection of C++ libraries that can pose usability problems for users without C++ programming experience. To bridge the gap between the programming complexities and the required learning curve of ITK, we present a higher-level visual programming environment that represents ITK methods and classes by wrapping them into "blocks" within MATLAB's visual programming environment, Simulink. These blocks can be connected to form workflows: visual schematics that closely represent the structure of a C++ program. Due to the heavily C++ templated nature of ITK, direct interaction between Simulink and ITK requires an intermediary to convert their respective datatypes and allow intercommunication. We have developed a "Virtual Block" that serves as an intermediate wrapper around the ITK class and is responsible for resolving the templated datatypes used by ITK to native types used by Simulink. Presently, the wrapping procedure for SimITK is semi-automatic in that it requires XML descriptions of the ITK classes as a starting point, as this data is used to create all other necessary integration files. The generation of all source code and object code from the XML is done automatically by a CMake build script that yields Simulink blocks as the final result. An example 3D segmentation workflow using cranial-CT data as well as a 3D MR-to-CT registration workflow are presented as a proof-of-concept.
DEVELOPMENTAL PALEOBIOLOGY OF THE VERTEBRATE SKELETON.
Rücklin, Martin; Donoghue, Philip C J; Cunningham, John A; Marone, Federica; Stampanoni, Marco
2014-07-01
Studies of the development of organisms can reveal crucial information on homology of structures. Developmental data are not peculiar to living organisms, and they are routinely preserved in the mineralized tissues that comprise the vertebrate skeleton, allowing us to obtain direct insight into the developmental evolution of this most formative of vertebrate innovations. The pattern of developmental processes is recorded in fossils as successive stages inferred from the gross morphology of multiple specimens and, more reliably and routinely, through the ontogenetic stages of development seen in the skeletal histology of individuals. Traditional techniques are destructive and restricted to a 2-D plane with the third dimension inferred. Effective non-invasive methods of visualizing paleohistology to reconstruct developmental stages of the skeleton are necessary. In a brief survey of paleohistological techniques we discuss the pros and cons of these methods. The use of tomographic methods to reconstruct development of organs is exemplified by the study of the placoderm dentition. Testing evidence for the presence of teeth in placoderms, the first jawed vertebrates, we compare the methods that have been used. These include inferring the development from morphology, and using serial sectioning, microCT or synchrotron X-ray tomographic microscopy (SRXTM) to reconstruct growth stages and directions of growth. The ensuing developmental interpretations are biased by the methods and degree of inference. The most direct and reliable method is using SRXTM data to trace sclerochronology. The resulting developmental data can be used to resolve homology and test hypotheses on the origin of evolutionary novelties.
Computational Neuropsychology and Bayesian Inference.
Parr, Thomas; Rees, Geraint; Friston, Karl J
2018-01-01
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
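A worked toy example may help make "optimal inference with suboptimal priors" concrete. The Python sketch below applies Bayes' rule to a single weakly informative observation under a neutral prior and under an aberrantly strong prior; the specific numbers are illustrative assumptions.

def posterior(prior_h, like_obs_given_h, like_obs_given_not_h):
    """Posterior probability of hypothesis H after one observation (Bayes' rule)."""
    num = prior_h * like_obs_given_h
    return num / (num + (1.0 - prior_h) * like_obs_given_not_h)

# A weakly informative observation: P(obs | H) = 0.6, P(obs | not H) = 0.4
for prior in (0.5, 0.95):          # neutral prior vs. an aberrantly strong prior
    post = posterior(prior, 0.6, 0.4)
    print(f"prior = {prior:.2f}  ->  posterior = {post:.2f}")

The inferential machinery is identical in both cases; only the prior differs, yet the resulting beliefs diverge sharply, which is the sense in which a deficit can be framed as Bayes-optimal inference from a pathological prior.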
Ohl, Alisha M; Graze, Hollie; Weber, Karen; Kenny, Sabrina; Salvatore, Christie; Wagreich, Sarah
2013-01-01
This study examined the efficacy of a 10-wk Tier 1 Response to Intervention (RtI) program developed in collaboration with classroom teachers to improve the fine motor and visual-motor skills of general education kindergarten students. We recruited 113 students in six elementary schools. Two general education kindergarten classrooms at each school participated in the study. Classrooms were randomly assigned to the intervention and control groups. Fine motor skills, pencil grip, and visual-motor integration were measured at the beginning of the school year and after the 10-wk intervention. The intervention group demonstrated a statistically significant increase in fine motor and visual-motor skills, whereas the control group demonstrated a slight decline in both areas. Neither group demonstrated a change in pencil grip. This study provides preliminary evidence that a Tier 1 RtI program can improve fine motor and visual-motor skills in kindergarten students. Copyright © 2013 by the American Occupational Therapy Association, Inc.
Analogy Mapping Development for Learning Programming
NASA Astrophysics Data System (ADS)
Sukamto, R. A.; Prabawa, H. W.; Kurniawati, S.
2017-02-01
Programming skill is an important skill for computer science students; however, many computer science students in Indonesia currently lack programming skills and information technology knowledge. This is at odds with the implementation of the ASEAN Economic Community (AEC) at the end of 2015, which requires qualified workers. This study supports the development of programming skills by mapping program code to visual analogies as learning media. The developed media was based on state machine and compiler principles and was implemented for the C programming language. The states for every basic programming construct were successfully represented as analogy visualizations.
Brady, Timothy F; Konkle, Talia; Oliva, Aude; Alvarez, George A
2009-01-01
A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear.1,2 These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low.1 However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item.3 In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).
Hunger and satiety in anorexia nervosa: fMRI during cognitive processing of food pictures.
Santel, Stephanie; Baving, Lioba; Krauel, Kerstin; Münte, Thomas F; Rotte, Michael
2006-10-09
Neuroimaging studies of visually presented food stimuli in patients with anorexia nervosa have demonstrated decreased activations in inferior parietal and visual occipital areas, and increased frontal activations relative to healthy persons, but so far no inferences could be drawn with respect to the influence of hunger or satiety. Thirteen patients with AN and 10 healthy control subjects (aged 13-21) rated visual food and non-food stimuli for pleasantness during functional magnetic resonance imaging (fMRI) in a hungry and a satiated state. AN patients rated food as less pleasant than controls. When satiated, AN patients showed decreased activation in left inferior parietal cortex relative to controls. When hungry, AN patients displayed weaker activation of the right visual occipital cortex than healthy controls. Food stimuli during satiety compared with hunger were associated with stronger right occipital activation in patients and with stronger activation in left lateral orbitofrontal cortex, the middle portion of the right anterior cingulate, and left middle temporal gyrus in controls. The observed group differences in the fMRI activation to food pictures point to decreased food-related somatosensory processing in AN during satiety and to attentional mechanisms during hunger that might facilitate restricted eating in AN.
The footprints of visual attention in the Posner cueing paradigm revealed by classification images
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.
2002-01-01
In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
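The abstract does not give the observer equations. For a cue with validity v (e.g., 0.8) and responses r_c and r_u at the cued and uncued locations, the standard Bayesian (weighted-likelihood) observer for this kind of task, stated here as background rather than as the specific model fitted in the study, decides "target present" when

\mathrm{LR}(r_c, r_u) \;=\; v\,\frac{p(r_c \mid \text{signal})}{p(r_c \mid \text{noise})} \;+\; (1-v)\,\frac{p(r_u \mid \text{signal})}{p(r_u \mid \text{noise})}

exceeds a criterion; the cueing effect then follows from the prior weights v and 1-v rather than from any change in the perceptual filters themselves, which is the interpretation the classification-image results support.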
Visualization of RNA structure models within the Integrative Genomics Viewer.
Busan, Steven; Weeks, Kevin M
2017-07-01
Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Direct visualization of hemolymph flow in the heart of a grasshopper (Schistocerca americana)
Lee, Wah-Keat; Socha, John J
2009-01-01
Background Hemolymph flow patterns in opaque insects have never been directly visualized due to the lack of an appropriate imaging technique. The required spatial and temporal resolutions, together with the lack of contrast between the hemolymph and the surrounding soft tissue, are major challenges. Previously, indirect techniques have been used to infer insect heart motion and hemolymph flow, but such methods fail to reveal fine-scale kinematics of heartbeat and details of intra-heart flow patterns. Results With the use of microbubbles as high contrast tracer particles, we directly visualized hemolymph flow in a grasshopper (Schistocerca americana) using synchrotron x-ray phase-contrast imaging. In-vivo intra-heart flow patterns and the relationship between respiratory (tracheae and air sacs) and circulatory (heart) systems were directly observed for the first time. Conclusion Synchrotron x-ray phase contrast imaging is the only generally applicable technique that has the necessary spatial, temporal resolutions and sensitivity to directly visualize heart dynamics and flow patterns inside opaque animals. This technique has the potential to illuminate many long-standing questions regarding small animal circulation, encompassing topics such as retrograde heart flow in some insects and the development of flow in embryonic vertebrates. PMID:19272159
VisANT 3.0: new modules for pathway visualization, editing, prediction and construction.
Hu, Zhenjun; Ng, David M; Yamada, Takuji; Chen, Chunnuan; Kawashima, Shuichi; Mellor, Joe; Linghu, Bolan; Kanehisa, Minoru; Stuart, Joshua M; DeLisi, Charles
2007-07-01
With the integration of the KEGG and Predictome databases as well as two search engines for coexpressed genes/proteins using data sets obtained from the Stanford Microarray Database (SMD) and Gene Expression Omnibus (GEO) database, VisANT 3.0 supports exploratory pathway analysis, which includes multi-scale visualization of multiple pathways, editing and annotating pathways using a KEGG compatible visual notation and visualization of expression data in the context of pathways. Expression levels are represented either by color intensity or by nodes with an embedded expression profile. Multiple experiments can be navigated or animated. Known KEGG pathways can be enriched by querying either coexpressed components of known pathway members or proteins with known physical interactions. Predicted pathways for genes/proteins with unknown functions can be inferred from coexpression or physical interaction data. Pathways produced in VisANT can be saved as computer-readable XML format (VisML), graphic images or high-resolution Scalable Vector Graphics (SVG). Pathways in the format of VisML can be securely shared within an interested group or published online using a simple Web link. VisANT is freely available at http://visant.bu.edu.
Venter, Jan A; Prins, Herbert H T; Mashanova, Alla; Slotow, Rob
2017-01-01
Finding suitable forage patches in a heterogeneous landscape, where patches change dynamically both spatially and temporally, could be challenging to large herbivores, especially if they have no a priori knowledge of the location of the patches. We tested whether three large grazing herbivores with a variety of different traits improve their efficiency when foraging at a heterogeneous habitat patch scale by using visual cues to gain a priori knowledge about potential higher value foraging patches. For each species (zebra (Equus burchelli), red hartebeest (Alcelaphus buselaphus subspecies caama) and eland (Tragelaphus oryx)), we used step lengths and directionality of movement to infer whether they were using visual cues to find suitable forage patches at a habitat patch scale. Step lengths were significantly longer for all species when moving to non-visible patches than to visible patches, but all movements showed little directionality. Of the three species, zebra movements were the most directional. Red hartebeest had the shortest step lengths and zebra the longest. We conclude that these large grazing herbivores may not exclusively use visual cues when foraging at a habitat patch scale, but would rather adapt their movement behaviour, mainly step length, to the heterogeneity of the specific landscape.
Software attribute visualization for high integrity software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollock, G.M.
1998-03-01
This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.
Learning sorting algorithms through visualization construction
NASA Astrophysics Data System (ADS)
Cetin, Ibrahim; Andrews-Larson, Christine
2016-01-01
Recent increased interest in computational thinking poses an important question to researchers: What are the best ways to teach fundamental computing concepts to students? Visualization is suggested as one way of supporting student learning. This mixed-method study aimed to (i) examine the effect of instruction in which students constructed visualizations on students' programming achievement and students' attitudes toward computer programming, and (ii) explore how this kind of instruction supports students' learning according to their self-reported experiences in the course. The study was conducted with 58 pre-service teachers who were enrolled in their second programming class. They expect to teach information technology and computing-related courses at the primary and secondary levels. An embedded experimental model was utilized as a research design. Students in the experimental group were given instruction that required students to construct visualizations related to sorting, whereas students in the control group viewed pre-made visualizations. After the instructional intervention, eight students from each group were selected for semi-structured interviews. The results showed that the intervention based on visualization construction resulted in significantly better acquisition of sorting concepts. However, there was no significant difference between the groups with respect to students' attitudes toward computer programming. Qualitative data analysis indicated that students in the experimental group constructed necessary abstractions through their engagement in visualization construction activities. The authors of this study argue that the students' active engagement in the visualization construction activities explains only one side of students' success. The other side can be explained through the instructional approach, constructionism in this case, used to design instruction. The conclusions and implications of this study can be used by researchers and instructors dealing with computational thinking.
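As a concrete illustration of the kind of artifact students in the experimental group might build (not the actual course materials, which are not described at code level in the abstract), the Python sketch below records every intermediate state of an insertion sort and renders the states as crude text bars.

def insertion_sort_with_trace(values):
    """Insertion sort that records the list after every insertion so the
    intermediate states can be rendered as a simple visualization."""
    states = [list(values)]
    for i in range(1, len(values)):
        key, j = values[i], i - 1
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]        # shift larger elements to the right
            j -= 1
        values[j + 1] = key
        states.append(list(values))
    return states

for step, state in enumerate(insertion_sort_with_trace([5, 2, 4, 1, 3])):
    print(step, " ".join("#" * v for v in state))    # bar-chart style text trace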
Exploring the Engagement Effects of Visual Programming Language for Data Structure Courses
ERIC Educational Resources Information Center
Chang, Chih-Kai; Yang, Ya-Fei; Tsai, Yu-Tzu
2017-01-01
Previous research indicates that understanding the state of learning motivation enables researchers to deeply understand students' learning processes. Studies have shown that visual programming languages use graphical code, enabling learners to learn effectively, improve learning effectiveness, increase learning fun, and offering various other…
Multitask visual learning using genetic programming.
Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz
2008-01-01
We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.
Surgical simulation tasks challenge visual working memory and visual-spatial ability differently.
Schlickum, Marcus; Hedman, Leif; Enochsson, Lars; Henningsohn, Lars; Kjellin, Ann; Felländer-Tsai, Li
2011-04-01
New strategies for selection and training of physicians are emerging. Previous studies have demonstrated a correlation between visual-spatial ability and visual working memory with surgical simulator performance. The aim of this study was to perform a detailed analysis on how these abilities are associated with metrics in simulator performance with different task content. The hypothesis is that the importance of visual-spatial ability and visual working memory varies with different task contents. Twenty-five medical students participated in the study that involved testing visual-spatial ability using the MRT-A test and visual working memory using the RoboMemo computer program. Subjects were also trained and tested for performance in three different surgical simulators. The scores from the psychometric tests and the performance metrics were then correlated using multivariate analysis. MRT-A score correlated significantly with the performance metrics Efficiency of screening (p = 0.006) and Total time (p = 0.01) in the GI Mentor II task and Total score (p = 0.02) in the MIST-VR simulator task. In the Uro Mentor task, both the MRT-A score and the visual working memory 3-D cube test score as presented in the RoboMemo program (p = 0.02) correlated with Total score (p = 0.004). In this study we have shown that some differences exist regarding the impact of visual abilities and task content on simulator performance. When designing future cognitive training programs and testing regimes, one might have to consider that the design must be adjusted in accordance with the specific surgical task to be trained in mind.
Sampling design trade-offs in occupancy studies with imperfect detection: examples and software
Bailey, L.L.; Hines, J.E.; Nichols, J.D.
2007-01-01
Researchers have used occupancy, or probability of occupancy, as a response or state variable in a variety of studies (e.g., habitat modeling), and occupancy is increasingly favored by numerous state, federal, and international agencies engaged in monitoring programs. Recent advances in estimation methods have emphasized that reliable inferences can be made from these types of studies if detection and occupancy probabilities are simultaneously estimated. The need for temporal replication at sampled sites to estimate detection probability creates a trade-off between spatial replication (number of sample sites distributed within the area of interest/inference) and temporal replication (number of repeated surveys at each site). Here, we discuss a suite of questions commonly encountered during the design phase of occupancy studies, and we describe software (program GENPRES) developed to allow investigators to easily explore design trade-offs focused on particularities of their study system and sampling limitations. We illustrate the utility of program GENPRES using an amphibian example from Greater Yellowstone National Park, USA.
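Program GENPRES is not described at the formula level here, but the core quantities behind the spatial-versus-temporal replication trade-off in a standard single-season occupancy model are simple. The Python sketch below (with assumed values for occupancy psi and per-survey detection p) shows how additional repeat surveys K raise the chance of confirming occupancy at a site.

def prob_never_detected(psi, p, k):
    """Probability of an all-zero detection history at a site surveyed k times:
    either the site is occupied but always missed, or it is simply unoccupied."""
    return psi * (1.0 - p) ** k + (1.0 - psi)

def prob_detected_at_least_once(psi, p, k):
    """Probability that occupancy is confirmed at least once in k surveys."""
    return psi * (1.0 - (1.0 - p) ** k)

# More repeat surveys per site raise the chance of confirming occupancy,
# at the cost of spreading effort over fewer sites.
for k in (2, 3, 5):
    print(k, round(prob_detected_at_least_once(psi=0.6, p=0.3, k=k), 3))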
Interactive Problem Solving Tutorials Through Visual Programming
NASA Astrophysics Data System (ADS)
Undreiu, Lucian; Schuster, David; Undreiu, Adriana
2008-10-01
We have used LabVIEW visual programming to build an interactive tutorial to promote conceptual understanding in physics problem solving. This programming environment is able to offer a web-accessible problem solving experience that enables students to work at their own pace and receive feedback. Intuitive graphical symbols, modular structures and the ability to create templates are just a few of the advantages this software has to offer. The architecture of an application can be designed in a way that allows instructors with little knowledge of LabVIEW to easily personalize it. Both the physics solution and the interactive pedagogy can be visually programmed in LabVIEW. Our physics pedagogy approach is that of cognitive apprenticeship, in that the tutorial guides students to develop conceptual understanding and physical insight into phenomena, rather than purely formula-based solutions. We demonstrate how this model is reflected in the design and programming of the interactive tutorials.
The Efficiency of a Visual Skills Training Program on Visual Search Performance
Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech
2015-01-01
In this study, we conducted an experiment in which we analyzed the possibilities to develop visual skills by specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from the Szczecin University who were divided into two groups: experimental (12) and control (12). In addition to regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in 8-week long training with visual functions, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after first 4 weeks of the experiment, immediately after its completion and 4 weeks after the study terminated. The results of this experiment proved that an 8-week long perceptual training program significantly differentiated the plot of visual detecting time. For the visual detecting time changes, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05) as well as the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) of perceptual training was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of a Group factor (F(1,22)=23.40, p<0.001), a main effect of a Training factor (F(3,66)=11.60, p<0.001) and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance. PMID:26240666
Graphical programming interface: A development environment for MRI methods.
Zwart, Nicholas R; Pipe, James G
2015-11-01
To introduce a multiplatform, Python language-based, development environment called graphical programming interface for prototyping MRI techniques. The interface allows developers to interact with their scientific algorithm prototypes visually in an event-driven environment making tasks such as parameterization, algorithm testing, data manipulation, and visualization an integrated part of the work-flow. Algorithm developers extend the built-in functionality through simple code interfaces designed to facilitate rapid implementation. This article shows several examples of algorithms developed in graphical programming interface including the non-Cartesian MR reconstruction algorithms for PROPELLER and spiral as well as spin simulation and trajectory visualization of a FLORET example. The graphical programming interface framework is shown to be a versatile prototyping environment for developing numeric algorithms used in the latest MR techniques. © 2014 Wiley Periodicals, Inc.
SEM (Symmetry Equivalent Molecules): a web-based GUI to generate and visualize the macromolecules
Hussain, A. S. Z.; Kumar, Ch. Kiran; Rajesh, C. K.; Sheik, S. S.; Sekar, K.
2003-01-01
SEM, Symmetry Equivalent Molecules, is a web-based graphical user interface to generate and visualize the symmetry equivalent molecules (proteins and nucleic acids). In addition, the program allows the users to save the three-dimensional atomic coordinates of the symmetry equivalent molecules in the local machine. The widely recognized graphics program RasMol has been deployed to visualize the reference (input atomic coordinates) and the symmetry equivalent molecules. This program is written using CGI/Perl scripts and has been interfaced with all the three-dimensional structures (solved using X-ray crystallography) available in the Protein Data Bank. The program, SEM, can be accessed over the World Wide Web interface at http://dicsoft2.physics.iisc.ernet.in/sem/ or http://144.16.71.11/sem/. PMID:12824326
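As a minimal sketch of what generating a symmetry equivalent molecule involves computationally (fractionalize the coordinates, apply a symmetry operator x' = Rx + t, convert back), the Python below uses a hypothetical orthogonal unit cell and a two-fold screw operation; it illustrates the underlying arithmetic only and is not the SEM program's code.

import numpy as np

def apply_symmetry(coords, rotation, translation, cell):
    """Apply one crystallographic symmetry operation (fractional rotation R and
    translation t) to Cartesian coordinates for an orthogonal unit cell:
    x' = A (R A^-1 x + t), with A the (diagonal) orthogonalization matrix."""
    a_mat = np.diag(cell)
    frac = coords @ np.linalg.inv(a_mat).T           # Cartesian -> fractional
    frac_sym = frac @ np.asarray(rotation).T + np.asarray(translation)
    return frac_sym @ a_mat.T                        # fractional -> Cartesian

# hypothetical two-fold screw axis along b: (-x, y + 1/2, -z)
rotation = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
translation = [0.0, 0.5, 0.0]
atoms = np.array([[1.2, 3.4, 5.6], [2.0, 1.0, 0.5]])
print(apply_symmetry(atoms, rotation, translation, cell=[20.0, 30.0, 40.0]))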
Crowley, D Max; Coffman, Donna L; Feinberg, Mark E; Greenberg, Mark T; Spoth, Richard L
2014-04-01
Despite growing recognition of the important role implementation plays in successful prevention efforts, relatively little work has sought to demonstrate a causal relationship between implementation factors and participant outcomes. In turn, failure to explore the implementation-to-outcome link limits our understanding of the mechanisms essential to successful programming. This gap is partially due to the inability of current methodological procedures within prevention science to account for the multitude of confounders responsible for variation in implementation factors (i.e., selection bias). The current paper illustrates how propensity and marginal structural models can be used to improve causal inferences involving implementation factors not easily randomized (e.g., participant attendance). We first present analytic steps for simultaneously evaluating the impact of multiple implementation factors on prevention program outcome. Then, we demonstrate this approach for evaluating the impact of enrollment and attendance in a family program, over and above the impact of a school-based program, within PROSPER, a large-scale real-world prevention trial. Findings illustrate the capacity of this approach to successfully account for confounders that influence enrollment and attendance, thereby more accurately representing true causal relations. For instance, after accounting for selection bias, we observed a 5% reduction in the prevalence of 11th grade underage drinking for those who chose to receive a family program and school program compared to those who received only the school program. Further, we detected a 7% reduction in underage drinking for those with high attendance in the family program.
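As a sketch of the kind of propensity-based adjustment described here (inverse-probability weighting with a logistic propensity model), the Python below estimates an attendance effect from simulated data. The covariates, outcome model, and effect size are simulated purely for illustration; the actual PROSPER analysis also used marginal structural models and a much richer confounder set.

import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_attendance_effect(covariates, attended, outcome):
    """Inverse-probability-weighted difference in outcomes between attenders and
    non-attenders, using a logistic-regression propensity model (sketch only)."""
    ps = LogisticRegression(max_iter=1000).fit(covariates, attended).predict_proba(covariates)[:, 1]
    weights = np.where(attended == 1, 1.0 / ps, 1.0 / (1.0 - ps))
    mean_attended = np.average(outcome[attended == 1], weights=weights[attended == 1])
    mean_not = np.average(outcome[attended == 0], weights=weights[attended == 0])
    return mean_attended - mean_not

rng = np.random.default_rng(1)
covariates = rng.normal(size=(500, 3))                                  # hypothetical baseline covariates
attended = rng.binomial(1, 1.0 / (1.0 + np.exp(-covariates[:, 0])))     # attendance depends on covariates
outcome = rng.binomial(1, 0.30 - 0.05 * attended + 0.05 * (covariates[:, 0] > 0))
print(round(ipw_attendance_effect(covariates, attended, outcome), 3))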
2015-08-28
for the scene, and effectively isolates the points on buildings. We are now able to accurately filter in buildings, and filter out the ground, but... brushing hair and hugging. [Figure 3 caption: Our work distinguishes intentional action of an unknown agent (the kids in this example) from various other motions, such as the rolling ball, the crashing waves and the background motion.]