Sample records for sequence feature visualization

  1. Visualization of protein sequence features using JavaScript and SVG with pViz.js.

    PubMed

    Mukhyala, Kiran; Masselot, Alexandre

    2014-12-01

    pViz.js is a visualization library for displaying protein sequence features in a Web browser. By simply providing a sequence and the locations of its features, this lightweight, yet versatile, JavaScript library renders an interactive view of the protein features. Interactive exploration of protein sequence features over the Web is a common need in Bioinformatics. Although many Web sites have developed viewers to display these features, their implementations are usually focused on data from a specific source or use case. Some of these viewers can be adapted to fit other use cases but are not designed to be reusable. pViz makes it easy to display features as boxes aligned to a protein sequence with zooming functionality but also includes predefined renderings for secondary structure and post-translational modifications. The library is designed to allow further customization of this view. We demonstrate such applications of pViz using two examples: a proteomic data visualization tool with an embedded viewer for displaying features on protein structure, and a tool to visualize the results of the variant_effect_predictor tool from Ensembl. pViz.js is a JavaScript library, available on GitHub at https://github.com/Genentech/pviz. This site includes examples and functional applications, installation instructions and usage documentation. A Readme file, which explains how to use pViz with examples, is available as Supplementary Material A. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Coding visual features extracted from video sequences.

    PubMed

    Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2014-05-01

    Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
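
    For illustration, the mode decision described above minimizes a Lagrangian cost J = D + λR over the candidate coding modes. A minimal sketch of that selection rule follows; the mode names and the distortion/rate numbers are invented for illustration, not taken from the paper.

    ```python
    # Hedged sketch of a rate-distortion-optimized mode decision: pick the coding
    # mode minimizing the Lagrangian cost J = D + lambda * R. Mode names and the
    # distortion/rate numbers are invented for illustration.

    def mode_decision(candidates, lam):
        """candidates: iterable of (mode_name, distortion, rate_bits)."""
        return min(candidates, key=lambda c: c[1] + lam * c[2])[0]

    # Inter-frame coding wins when temporal redundancy makes its rate low enough
    # to offset its slightly higher distortion.
    modes = [("intra", 10.0, 800), ("inter", 12.0, 250)]
    print(mode_decision(modes, lam=0.02))  # -> inter (J = 17.0 vs. 26.0)
    ```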

  3. Preattentive binding of auditory and visual stimulus features.

    PubMed

    Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo

    2005-02-01

    We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.

  4. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector) are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing advantages and inconveniences of each method as well as preferred domain of application.
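
    As a rough illustration of the comparison this paper makes, the sketch below derives a motion conspicuity map either from the speed magnitude alone or from the two components of the speed vector. The random flow field and the crude center-surround operator are placeholder assumptions, not the authors' model.

    ```python
    import numpy as np

    # Toy contrast of the two motion components compared in the paper:
    # conspicuity from the speed magnitude alone vs. from the full speed vector.

    def center_surround(feat):
        # Crude conspicuity: absolute deviation from the global mean.
        return np.abs(feat - feat.mean())

    u = np.random.randn(64, 64)  # horizontal flow component
    v = np.random.randn(64, 64)  # vertical flow component

    # Magnitude model: direction information is discarded.
    sal_magnitude = center_surround(np.hypot(u, v))

    # Vector model: per-component contrast, so coherent motion in a distinct
    # direction stands out even at the same speed.
    sal_vector = center_surround(u) + center_surround(v)
    ```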

  5. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  6. BioSAVE: display of scored annotation within a sequence context.

    PubMed

    Pollock, Richard F; Adryan, Boris

    2008-03-20

    Visualization of sequence annotation is a common feature in many bioinformatics tools. For many applications it is desirable to restrict the display of such annotation according to a score cutoff, as biological interpretation can be difficult in the presence of the entire data. Unfortunately, many visualisation solutions are somewhat static in the way they handle such score cutoffs. We present BioSAVE, a sequence annotation viewer with on-the-fly selection of visualisation thresholds for each feature. BioSAVE is a versatile OS X program for visual display of scored features (annotation) within a sequence context. The program reads sequence and additional supplementary annotation data (e.g., position weight matrix matches, conservation scores, structural domains) from a variety of commonly used file formats and displays them graphically. Onscreen controls then allow for live customisation of these graphics, including on-the-fly selection of visualisation thresholds for each feature. Possible applications of the program include display of transcription factor binding sites in a genomic context or the visualisation of structural domain assignments in protein sequences and many more. The dynamic visualisation of these annotations is useful, e.g., for the determination of cutoff values of predicted features to match experimental data. Program, source code and exemplary files are freely available at the BioSAVE homepage.
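
    The core interaction BioSAVE makes dynamic, hiding scored features below a user-chosen cutoff, reduces to a simple filter. A toy sketch with invented feature records and thresholds:

    ```python
    # Toy sketch of a per-feature-type score cutoff. The feature records and
    # threshold values below are invented for illustration.

    features = [
        {"type": "pwm_match", "start": 12, "end": 22, "score": 8.1},
        {"type": "pwm_match", "start": 87, "end": 97, "score": 3.4},
        {"type": "conservation", "start": 5, "end": 60, "score": 0.92},
    ]
    thresholds = {"pwm_match": 5.0, "conservation": 0.8}  # set via onscreen controls

    visible = [f for f in features if f["score"] >= thresholds[f["type"]]]
    print(visible)  # only features passing their cutoff would be drawn
    ```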

  7. BioSAVE: Display of scored annotation within a sequence context

    PubMed Central

    Pollock, Richard F; Adryan, Boris

    2008-01-01

    Background Visualization of sequence annotation is a common feature in many bioinformatics tools. For many applications it is desirable to restrict the display of such annotation according to a score cutoff, as biological interpretation can be difficult in the presence of the entire data. Unfortunately, many visualisation solutions are somewhat static in the way they handle such score cutoffs. Results We present BioSAVE, a sequence annotation viewer with on-the-fly selection of visualisation thresholds for each feature. BioSAVE is a versatile OS X program for visual display of scored features (annotation) within a sequence context. The program reads sequence and additional supplementary annotation data (e.g., position weight matrix matches, conservation scores, structural domains) from a variety of commonly used file formats and displays them graphically. Onscreen controls then allow for live customisation of these graphics, including on-the-fly selection of visualisation thresholds for each feature. Conclusion Possible applications of the program include display of transcription factor binding sites in a genomic context or the visualisation of structural domain assignments in protein sequences and many more. The dynamic visualisation of these annotations is useful, e.g., for the determination of cutoff values of predicted features to match experimental data. Program, source code and exemplary files are freely available at the BioSAVE homepage. PMID:18366701

  8. Using Tablet for visual exploration of second-generation sequencing data.

    PubMed

    Milne, Iain; Stephen, Gordon; Bayer, Micha; Cock, Peter J A; Pritchard, Leighton; Cardle, Linda; Shaw, Paul D; Marshall, David

    2013-03-01

    The advent of second-generation sequencing (2GS) has provided a range of significant new challenges for the visualization of sequence assemblies. These include the large volume of data being generated, short-read lengths and different data types and data formats associated with the diversity of new sequencing technologies. This article illustrates how Tablet-a high-performance graphical viewer for visualization of 2GS assemblies and read mappings-plays an important role in the analysis of these data. We present Tablet, and through a selection of use cases, demonstrate its value in quality assurance and scientific discovery, through features such as whole-reference coverage overviews, variant highlighting, paired-end read mark-up, GFF3-based feature tracks and protein translations. We discuss the computing and visualization techniques utilized to provide a rich and responsive graphical environment that enables users to view a range of file formats with ease. Tablet installers can be freely downloaded from http://bioinf.hutton.ac.uk/tablet in 32 or 64-bit versions for Windows, OS X, Linux or Solaris. For further details on the Tablet, contact tablet@hutton.ac.uk.
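
    One of the features named above, the whole-reference coverage overview, amounts to accumulating per-base read depth. A toy sketch with reads as (start, length) pairs; a real implementation would parse the assembly and mapping formats Tablet supports.

    ```python
    # Toy whole-reference coverage overview: accumulate per-base read depth.
    # Reads are invented (start, length) pairs, not parsed from a real assembly.

    ref_len = 50
    reads = [(0, 10), (5, 10), (5, 10), (30, 15)]

    coverage = [0] * ref_len
    for start, length in reads:
        for i in range(start, min(start + length, ref_len)):
            coverage[i] += 1
    print(coverage)  # depth per reference position
    ```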

  9. Picture Detection in Rapid Serial Visual Presentation: Features or Identity?

    ERIC Educational Resources Information Center

    Potter, Mary C.; Wyble, Brad; Pandav, Rijuta; Olejarczyk, Jennifer

    2010-01-01

    A pictured object can be readily detected in a rapid serial visual presentation sequence when the target is specified by a superordinate category name such as "animal" or "vehicle". Are category features the initial basis for detection, with identification of the specific object occurring in a second stage (Evans &…

  10. STAR: an integrated solution to management and visualization of sequencing data.

    PubMed

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W; Ecker, Joseph R; Millar, A Harvey; Ren, Bing; Wang, Wei

    2013-12-15

    Easy visualization of complex data features is a necessary step to conduct studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from providing simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution to match the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser.

  11. Driving on the surface of Mars with the rover sequencing and visualization program

    NASA Technical Reports Server (NTRS)

    Wright, J.; Hartman, F.; Cooper, B.; Maxwell, S.; Yen, J.; Morrison, J.

    2005-01-01

    Operating a rover on Mars is not possible using teleoperation due to the distance involved and the bandwidth limitations. Operating these rovers requires sophisticated tools to make operators knowledgeable of the terrain, hazards, features of interest, and rover state and limitations, and to support building command sequences and rehearsing expected operations. This paper discusses how the Rover Sequencing and Visualization program and a small set of associated tools support this requirement.

  12. FASMA: a service to format and analyze sequences in multiple alignments.

    PubMed

    Costantini, Susan; Colonna, Giovanni; Facchiano, Angelo M

    2007-12-01

    Multiple sequence alignments are successfully applied in many studies for understanding the structural and functional relations among single nucleic acids and protein sequences as well as whole families. Because of the rapid growth of sequence databases, multiple sequence alignments can often be very large and difficult to visualize and analyze. We offer a new service aimed at visualizing and analyzing the multiple alignments obtained with different external algorithms, with new features useful for the comparison of the aligned sequences as well as for the creation of a final image of the alignment. The service is named FASMA and is available at http://bioinformatica.isa.cnr.it/FASMA/.

  13. STAR: an integrated solution to management and visualization of sequencing data

    PubMed Central

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W.; Ecker, Joseph R.; Millar, A. Harvey; Ren, Bing; Wang, Wei

    2013-01-01

    Motivation: Easy visualization of complex data features is a necessary step to conduct studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. Results: STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from providing simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution to match the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. Availability and implementation: STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser. Contact: wei-wang@ucsd.edu PMID:24078702

  14. Sequence Bundles: a novel method for visualising, discovering and exploring sequence motifs

    PubMed Central

    2014-01-01

    Background We introduce Sequence Bundles--a novel data visualisation method for representing multiple sequence alignments (MSAs). We identify and address key limitations of the existing bioinformatics data visualisation methods (i.e. the Sequence Logo) by enabling Sequence Bundles to give salient visual expression to sequence motifs and other data features, which would otherwise remain hidden. Methods For the development of Sequence Bundles we employed research-led information design methodologies. Sequences are encoded as uninterrupted, semi-opaque lines plotted on a 2-dimensional reconfigurable grid. Each line represents a single sequence. The thickness and opacity of the stack at each residue in each position indicate the level of conservation, and the lines' curved paths expose patterns in correlation and functionality. Several MSAs can be visualised in a composite image. The Sequence Bundles method is designed to favour a tangible, continuous and intuitive display of information. Results We have developed a software demonstration application for generating a Sequence Bundles visualisation of MSAs provided for the BioVis 2013 redesign contest. A subsequent exploration of the visualised line patterns allowed for the discovery of a number of interesting features in the dataset. Reported features include the extreme conservation of sequences displaying a specific residue and bifurcations of the consensus sequence. Conclusions Sequence Bundles is a novel method for visualisation of MSAs and the discovery of sequence motifs. It can aid in generating new insight and hypothesis making. Sequence Bundles is well disposed for future implementation as an interactive visual analytics software, which can complement existing visualisation tools. PMID:25237395
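
    A minimal sketch of the drawing idea, each sequence as a semi-opaque polyline over a position-by-residue grid so that conserved columns appear as thick, dark bundles, is given below. The tiny alignment and the alphabetical residue ordering are placeholders; the method's reconfigurable grid is a key design point not reproduced here.

    ```python
    import matplotlib.pyplot as plt

    # Toy Sequence Bundles-style rendering: one semi-opaque line per sequence.
    msa = ["MKVL", "MKIL", "MRVL", "MKVL"]
    residues = sorted(set("".join(msa)))       # y-axis ordering (an assumption)
    y = {r: i for i, r in enumerate(residues)}

    for seq in msa:
        plt.plot(range(len(seq)), [y[r] for r in seq], color="black", alpha=0.25)

    plt.yticks(range(len(residues)), residues)
    plt.xlabel("Alignment position")
    plt.show()
    ```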

  15. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
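
    The final fusion step, combining spatial and temporal saliency under uncertainty weighting, can be sketched as follows. The fixed weights below stand in for the Gestalt-derived uncertainty estimates used in the paper.

    ```python
    import numpy as np

    # Hedged sketch of the fusion step only; the saliency maps are random
    # placeholders and the weights are not the paper's uncertainty estimates.

    def fuse(sal_spatial, sal_temporal, w_spatial, w_temporal):
        fused = w_spatial * sal_spatial + w_temporal * sal_temporal
        return fused / fused.max()  # normalize to [0, 1]

    spatial = np.random.rand(36, 64)   # e.g. from DCT-domain feature contrast
    temporal = np.random.rand(36, 64)  # e.g. from planar and depth motion contrast
    final = fuse(spatial, temporal, w_spatial=0.6, w_temporal=0.4)
    ```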

  16. PrimerMapper: high throughput primer design and graphical assembly for PCR and SNP detection

    PubMed Central

    O’Halloran, Damien M.

    2016-01-01

    Primer design represents a widely employed gambit in diverse molecular applications including PCR, sequencing, and probe hybridization. Variations of PCR, including primer walking, allele-specific PCR, and nested PCR provide specialized validation and detection protocols for molecular analyses that often require screening large numbers of DNA fragments. In these cases, automated sequence retrieval and processing become important features, and furthermore, a graphic that provides the user with a visual guide to the distribution of designed primers across targets is most helpful in quickly ascertaining primer coverage. To this end, I describe here, PrimerMapper, which provides a comprehensive graphical user interface that designs robust primers from any number of inputted sequences while providing the user with both, graphical maps of primer distribution for each inputted sequence, and also a global assembled map of all inputted sequences with designed primers. PrimerMapper also enables the visualization of graphical maps within a browser and allows the user to draw new primers directly onto the webpage. Other features of PrimerMapper include allele-specific design features for SNP genotyping, a remote BLAST window to NCBI databases, and remote sequence retrieval from GenBank and dbSNP. PrimerMapper is hosted at GitHub and freely available without restriction. PMID:26853558

  17. Sequence alignment visualization in HTML5 without Java.

    PubMed

    Gille, Christoph; Weyand, Birgit; Gille, Andreas

    2014-01-01

    Java has been extensively used for the visualization of biological data in the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that for some tasks may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks like (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models and (v) communication with BioDAS-servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms including touch screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS-Word and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision an increased use of HTML and JavaScript for interactive biological software. Under GPL at: http://www.bioinformatics.org/strap/toHTML/.

  18. Object Tracking Using Adaptive Covariance Descriptor and Clustering-Based Model Updating for Visual Surveillance

    PubMed Central

    Qin, Lei; Snoussi, Hichem; Abdallah, Fahed

    2014-01-01

    We propose a novel approach for tracking an arbitrary object in video sequences for visual surveillance. The first contribution of this work is an automatic feature extraction method that is able to extract compact discriminative features from a feature pool before computing the region covariance descriptor. As the feature extraction method is adaptive to a specific object of interest, we refer to the region covariance descriptor computed using the extracted features as the adaptive covariance descriptor. The second contribution is to propose a weakly supervised method for updating the object appearance model during tracking. The method performs a mean-shift clustering procedure among the tracking result samples accumulated during a period of time and selects a group of reliable samples for updating the object appearance model. As such, the object appearance model is kept up-to-date and is prevented from contamination even in case of tracking mistakes. We conducted comparing experiments on real-world video sequences, which confirmed the effectiveness of the proposed approaches. The tracking system that integrates the adaptive covariance descriptor and the clustering-based model updating method accomplished stable object tracking on challenging video sequences. PMID:24865883
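
    For intuition, a region covariance descriptor summarizes the per-pixel feature vectors inside a region by their covariance matrix. The sketch below uses a conventional feature set (position, intensity, gradients) rather than the adaptively extracted feature pool the paper proposes.

    ```python
    import numpy as np

    # Plain region covariance descriptor: covariance of per-pixel feature vectors.
    def region_covariance(patch):
        h, w = patch.shape
        ys, xs = np.mgrid[0:h, 0:w]
        gy, gx = np.gradient(patch.astype(float))
        feats = np.stack([xs.ravel(), ys.ravel(), patch.ravel(),
                          gx.ravel(), gy.ravel()])
        return np.cov(feats)  # 5x5, independent of region size

    patch = np.random.rand(32, 32)
    print(region_covariance(patch).shape)  # (5, 5)
    ```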

  19. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed in the last 2-3 decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, redundancy may lower efficiency and ambiguity may degrade performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Every feature vector is compared to a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" has been avoided while the accuracy of the visual model has been improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked with a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.
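
    The evolutionary treatment of feature subsets can be illustrated with a toy genetic loop over binary feature masks. The fitness function below (reward known informative features, penalize subset size) is an illustrative stand-in for the tracking-driven fitness a real tracker would use.

    ```python
    import random

    # Toy evolutionary feature-subset selection over binary masks.
    random.seed(0)
    N_FEATURES, POP, GENERATIONS = 12, 20, 30
    informative = {0, 3, 7}  # ground truth known only to this toy fitness

    def fitness(mask):
        hits = sum(mask[i] for i in informative)
        return hits - 0.1 * sum(mask)  # same hits with fewer features is better

    def mutate(mask):
        i = random.randrange(N_FEATURES)
        return mask[:i] + [1 - mask[i]] + mask[i + 1:]

    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        pop = pop[:POP // 2] + [mutate(p) for p in pop[:POP // 2]]

    print(max(pop, key=fitness))  # tends toward a mask selecting features 0, 3, 7
    ```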

  20. Analysis and Visualization of ChIP-Seq and RNA-Seq Sequence Alignments Using ngs.plot.

    PubMed

    Loh, Yong-Hwee Eddie; Shen, Li

    2016-01-01

    The continual maturation and increasing applications of next-generation sequencing technology in scientific research have yielded ever-increasing amounts of data that need to be effectively and efficiently analyzed and innovatively mined for new biological insights. We have developed ngs.plot-a quick and easy-to-use bioinformatics tool that performs visualizations of the spatial relationships between sequencing alignment enrichment and specific genomic features or regions. More importantly, ngs.plot is customizable beyond the use of standard genomic feature databases to allow the analysis and visualization of user-specified regions of interest generated by the user's own hypotheses. In this protocol, we demonstrate and explain the use of ngs.plot using command line executions, as well as a web-based workflow on the Galaxy framework. We replicate the underlying commands used in the analysis of a true biological dataset that we had reported and published earlier and demonstrate how ngs.plot can easily generate publication-ready figures. With ngs.plot, users would be able to efficiently and innovatively mine their own datasets without having to be involved in the technical aspects of sequence coverage calculations and genomic databases.

  1. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
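
    One statistic the interface reports, the GC skew of a selected segment, is straightforward to compute as (G - C) / (G + C); the sliding-window scheme below is an assumption.

    ```python
    # Minimal GC-skew sketch; windowing parameters are illustrative.
    def gc_skew(seq, window=1000, step=500):
        skews = []
        for i in range(0, max(1, len(seq) - window + 1), step):
            w = seq[i:i + window].upper()
            g, c = w.count("G"), w.count("C")
            skews.append((g - c) / (g + c) if g + c else 0.0)
        return skews

    print(gc_skew("GGGCATCGCG" * 300))  # G-rich toy sequence, skew > 0
    ```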

  2. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors 0270-6474/17/378486-12$15.00/0.
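
    The underlying measurement, a cross-correlation between the luminance sequence and occipital EEG, can be sketched as follows. The synthetic "EEG" simply embeds a delayed, 10 Hz modulated copy of the stimulus so a late reverberation appears; the sampling rate and mixing weights are assumptions.

    ```python
    import numpy as np

    # Hedged sketch of the perceptual-echo measurement on synthetic data.
    fs = 160                              # sampling rate in Hz (an assumption)
    t = np.arange(fs * 6) / fs            # one 6 s trial
    lum = np.random.randn(t.size)         # random luminance sequence
    echo = np.roll(lum, int(0.3 * fs)) * np.cos(2 * np.pi * 10 * t)
    eeg = 0.2 * echo + np.random.randn(t.size)

    lags = np.arange(0, fs)               # 0 .. 1 s
    xcorr = [np.corrcoef(lum, np.roll(eeg, -lag))[0, 1] for lag in lags]
    # Averaged over many trials and plotted against lag, xcorr would show the
    # periodic reverberation; its amplitude is what grows with repetition.
    ```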

  3. Visualizing bacterial tRNA identity determinants and antideterminants using function logos and inverse function logos

    PubMed Central

    Freyhult, Eva; Moulton, Vincent; Ardell, David H.

    2006-01-01

    Sequence logos are stacked bar graphs that generalize the notion of consensus sequence. They employ entropy statistics very effectively to display variation in a structural alignment of sequences of a common function, while emphasizing its over-represented features. Yet sequence logos cannot display features that distinguish functional subclasses within a structurally related superfamily nor do they display under-represented features. We introduce two extensions to address these needs: function logos and inverse logos. Function logos display subfunctions that are over-represented among sequences carrying a specific feature. Inverse logos generalize both sequence logos and function logos by displaying under-represented, rather than over-represented, features or functions in structural alignments. To make inverse logos, a compositional inverse is applied to the feature or function frequency distributions before logo construction, where a compositional inverse is a mathematical transform that makes common features or functions rare and vice versa. We applied these methods to a database of structurally aligned bacterial tDNAs to create highly condensed, bird's-eye views of potentially all so-called identity determinants and antideterminants that confer specific amino acid charging or initiator function on tRNAs in bacteria. We recovered both known and a few potentially novel identity elements. Function logos and inverse logos are useful tools for exploratory bioinformatic analysis of structure–function relationships in sequence families and superfamilies. PMID:16473848
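
    One plausible form of the "compositional inverse" described above, invert the frequencies so common symbols become rare and renormalize, is sketched below; the authors' exact transform may differ, so this is for intuition only.

    ```python
    import numpy as np

    # A guess at a compositional inverse: reciprocal frequencies, renormalized.
    def compositional_inverse(freqs, eps=1e-9):
        inv = 1.0 / (np.asarray(freqs, dtype=float) + eps)
        return inv / inv.sum()

    column = [0.70, 0.20, 0.05, 0.05]     # toy frequencies of A, C, G, U
    print(compositional_inverse(column))  # under-represented symbols now dominate
    ```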

  4. repRNA: a web server for generating various feature vectors of RNA sequences.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2016-02-01

    With the rapid growth of RNA sequences generated in the postgenomic age, it is highly desired to develop a flexible method that can generate various kinds of vectors to represent these sequences by focusing on their different features. This is because nearly all the existing machine-learning methods, such as SVM (support vector machine) and KNN (k-nearest neighbor), can only handle vectors but not sequences. To meet the increasing demands and speed up the genome analyses, we have developed a new web server, called "representations of RNA sequences" (repRNA). Compared with the existing methods, repRNA is much more comprehensive, flexible and powerful, as reflected by the following facts: (1) it can generate 11 different modes of feature vectors for users to choose according to their investigation purposes; (2) it allows users to select the features from 22 built-in physicochemical properties and even those defined by users' own; (3) the resultant feature vectors and the secondary structures of the corresponding RNA sequences can be visualized. The repRNA web server is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repRNA/ .
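
    The simplest kind of feature vector such a server can emit is composition-style counting. The sketch below computes a dinucleotide (k = 2) composition vector; it illustrates the general idea only, not repRNA's 11 modes or its physicochemical encodings.

    ```python
    from itertools import product

    # Toy k-mer composition vector for an RNA sequence.
    def kmer_vector(seq, k=2, alphabet="ACGU"):
        kmers = ["".join(p) for p in product(alphabet, repeat=k)]
        counts = {km: 0 for km in kmers}
        for i in range(len(seq) - k + 1):
            km = seq[i:i + k]
            if km in counts:
                counts[km] += 1
        total = max(1, len(seq) - k + 1)
        return [counts[km] / total for km in kmers]

    print(kmer_vector("AUGGCUACGUAGCUA"))  # 16-dimensional dinucleotide vector
    ```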

  5. Coding Local and Global Binary Visual Features Extracted From Video Sequences.

    PubMed

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
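
    The benefit of inter-frame coding for binary descriptors can be seen in a small sketch: the XOR residual against the previous frame's descriptor is sparse when few bits flip, so it is cheap to entropy-code. The entropy-based rate proxy below is an assumption for illustration, not the paper's coder.

    ```python
    import numpy as np

    # Toy intra vs. inter rate comparison for a binary descriptor.
    def entropy_bits(bits):
        p = bits.mean()
        if p in (0.0, 1.0):
            return 0.0
        return bits.size * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 2, 512)                 # e.g. a BRISK-like descriptor
    curr = prev.copy()
    curr[rng.choice(512, 20, replace=False)] ^= 1  # small temporal change

    print(entropy_bits(curr))         # intra: code the descriptor directly
    print(entropy_bits(prev ^ curr))  # inter: code only the sparse residual
    ```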

  6. Coding Local and Global Binary Visual Features Extracted From Video Sequences

    NASA Astrophysics Data System (ADS)

    Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano

    2015-11-01

    Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks, while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the Bag-of-Visual-Word (BoVW) model. Several applications, including for example visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget, while attaining a target level of efficiency. In this paper we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can be conveniently adopted to support the Analyze-Then-Compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the Compress-Then-Analyze (CTA) paradigm. In this paper we experimentally compare ATC and CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: homography estimation and content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with CTA, especially in bandwidth limited scenarios.

  7. The course of visual searching to a target in a fixed location: electrophysiological evidence from an emotional flanker task.

    PubMed

    Dong, Guangheng; Yang, Lizhu; Shen, Yue

    2009-08-21

    The present study investigated the course of visual searching to a target in a fixed location, using an emotional flanker task. Event-related potentials (ERPs) were recorded while participants performed the task. Emotional facial expressions were used as emotion-eliciting triggers. The course of visual searching was analyzed through the emotional effects arising from these emotion-eliciting stimuli. The flanker stimuli showed effects at about 150-250 ms following stimulus onset, while the target stimuli showed effects at about 300-400 ms. The visual search sequence in an emotional flanker task moved from a whole overview to a specific target, even if the target always appeared at a known location. The processing sequence was "parallel" in this task. The results supported the feature integration theory of visual search.

  8. JDet: interactive calculation and visualization of function-related conservation patterns in multiple sequence alignments and structures.

    PubMed

    Muth, Thilo; García-Martín, Juan A; Rausell, Antonio; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2012-02-15

    We have implemented in a single package all the features required for extracting, visualizing and manipulating fully conserved positions as well as those with a family-dependent conservation pattern in multiple sequence alignments. The program allows, among other things, to run different methods for extracting these positions, combine the results and visualize them in protein 3D structures and sequence spaces. JDet is a multiplatform application written in Java. It is freely available, including the source code, at http://csbg.cnb.csic.es/JDet. The package includes two of our recently developed programs for detecting functional positions in protein alignments (Xdet and S3Det), and support for other methods can be added as plug-ins. A help file and a guided tutorial for JDet are also available.

  9. The research and application of visual saliency and adaptive support vector machine in target tracking field.

    PubMed

    Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing

    2013-01-01

    Efficient target tracking algorithms have become a focus of intelligent robotics research. The main difficulty in mobile-robot target tracking is environmental uncertainty: target states are hard to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on the mixture saliency of image features, including color, brightness, and motion. These common characteristics are combined to express the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.

  10. D-peaks: a visual tool to display ChIP-seq peaks along the genome.

    PubMed

    Brohée, Sylvain; Bontempi, Gianluca

    2012-01-01

    ChIP-sequencing is a method of choice to localize the positions of protein binding sites on DNA on a whole-genome scale. The deciphering of the sequencing data produced by this novel technique is challenging and is achieved by rigorous interpretation using dedicated tools and adapted visualization programs. Here, we present a bioinformatics tool (D-peaks) that adds several capabilities (including user-friendliness, high-quality output, and display of peak positions relative to genomic features) to the well-known visualization browsers and databases already available. D-peaks is directly available through its web interface http://rsat.ulb.ac.be/dpeaks/ as well as a command line tool.

  11. Using dynamic geometry software for teaching conditional probability with area-proportional Venn diagrams

    NASA Astrophysics Data System (ADS)

    Radakovic, Nenad; McDougall, Douglas

    2012-10-01

    This classroom note illustrates how dynamic visualization can be used to teach conditional probability and Bayes' theorem. There are two features of the visualization that make it an ideal pedagogical tool in probability instruction. The first feature is the use of area-proportional Venn diagrams that, along with showing qualitative relationships, describe the quantitative relationship between two sets. The second feature is the slider and animation component of dynamic geometry software enabling students to observe how the change in the base rate of an event influences conditional probability. A hypothetical instructional sequence using a well-known breast cancer example is described.
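
    The quantity the area-proportional Venn diagram makes visible is the posterior in Bayes' theorem. A worked computation in the spirit of the breast cancer example, with invented rates:

    ```python
    # Worked Bayes computation; the rates below are illustrative, not the note's.
    base_rate = 0.01     # P(disease)
    sensitivity = 0.90   # P(positive | disease)
    false_pos = 0.09     # P(positive | no disease)

    p_pos = sensitivity * base_rate + false_pos * (1 - base_rate)
    posterior = sensitivity * base_rate / p_pos
    print(round(posterior, 3))  # ~0.092: the small base rate dominates
    # Dragging a base-rate slider, as the note suggests, re-scales the diagram's
    # areas and moves this posterior accordingly.
    ```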

  12. SeqDepot: streamlined database of biological sequences and precomputed features.

    PubMed

    Ulrich, Luke E; Zhulin, Igor B

    2014-01-15

    Assembling and/or producing integrated knowledge of sequence features continues to be an onerous and redundant task despite a large number of existing resources. We have developed SeqDepot-a novel database that focuses solely on two primary goals: (i) assimilating known primary sequences with predicted feature data and (ii) providing the most simple and straightforward means to procure and readily use this information. Access to >28.5 million sequences and 300 million features is provided through a well-documented and flexible RESTful interface that supports fetching specific data subsets, bulk queries, visualization and searching by MD5 digests or external database identifiers. We have also developed an HTML5/JavaScript web application exemplifying how to interact with SeqDepot and Perl/Python scripts for use with local processing pipelines. Freely available on the web at http://seqdepot.net/. REST access via http://seqdepot.net/api/v1. Database files and scripts may be downloaded from http://seqdepot.net/download.
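
    A fetch-by-MD5 lookup against the documented REST base URL might look like the sketch below. The resource path is a guess for illustration only and should be checked against the SeqDepot API documentation.

    ```python
    import hashlib
    import requests

    # Hedged sketch of a SeqDepot lookup by MD5 digest.
    seq = "MKVLAAGLLALSSA"
    digest = hashlib.md5(seq.encode()).hexdigest()

    # '/aseqs/<digest>' is a hypothetical route, not confirmed by the abstract.
    resp = requests.get(f"http://seqdepot.net/api/v1/aseqs/{digest}")
    if resp.ok:
        print(resp.json())  # primary sequence plus precomputed features
    ```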

  13. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
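
    The pipeline, per-scanpath shape features stacked into vectors and then clustered, can be sketched as follows. The three features used here (path length, bounding-box aspect, mean saccade direction) are illustrative stand-ins for GazeAppraise's descriptors.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Toy scanpath feature extraction followed by clustering.
    def scanpath_features(points):
        d = np.diff(points, axis=0)                  # saccade vectors
        lengths = np.hypot(d[:, 0], d[:, 1])
        w, h = np.ptp(points, axis=0) + 1e-9
        return [lengths.sum(), w / h, np.arctan2(d[:, 1], d[:, 0]).mean()]

    rng = np.random.default_rng(0)
    scanpaths = [rng.random((20, 2)) * 100 for _ in range(30)]  # synthetic gaze
    X = np.array([scanpath_features(p) for p in scanpaths])
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
    print(labels)  # cluster membership per scanpath
    ```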

  14. An Action Sequence Withheld in Memory Can Delay Execution of Visually Guided Actions: The Generalization of Response Compatibility Interference

    ERIC Educational Resources Information Center

    Wiediger, Matthew D.; Fournier, Lisa R.

    2008-01-01

    Withholding an action plan in memory for later execution can delay execution of another action, if the actions share a similar (compatible) action feature (i.e., response hand). This phenomenon, termed compatibility interference (CI), was found for identity-based actions that do not require visual guidance. The authors examined whether CI can…

  15. ProtVista: visualization of protein sequence annotations.

    PubMed

    Watkins, Xavier; Garcia, Leyla J; Pundir, Sangya; Martin, Maria J

    2017-07-01

    ProtVista is a comprehensive visualization tool for the graphical representation of protein sequence features in the UniProt Knowledgebase, experimental proteomics and variation public datasets. The complexity and relationships in this wealth of data pose a challenge in interpretation. Integrative visualization approaches such as provided by ProtVista are thus essential for researchers to understand the data and, for instance, discover patterns affecting function and disease associations. ProtVista is a JavaScript component released as an open source project under the Apache 2 License. Documentation and source code are available at http://ebi-uniprot.github.io/ProtVista/ . martin@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  16. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as moving-player detection that takes both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.

  17. An insect-inspired model for visual binding II: functional analysis and visual attention.

    PubMed

    Northcutt, Brandon D; Higgins, Charles M

    2017-04-01

    We have developed a neural network model capable of performing visual binding inspired by neuronal circuitry in the optic glomeruli of flies: a brain area that lies just downstream of the optic lobes where early visual processing is performed. This visual binding model is able to detect objects in dynamic image sequences and bind together their respective characteristic visual features-such as color, motion, and orientation-by taking advantage of their common temporal fluctuations. Visual binding is represented in the form of an inhibitory weight matrix which learns over time which features originate from a given visual object. In the present work, we show that information represented implicitly in this weight matrix can be used to explicitly count the number of objects present in the visual image, to enumerate their specific visual characteristics, and even to create an enhanced image in which one particular object is emphasized over others, thus implementing a simple form of visual attention. Further, we present a detailed analysis which reveals the function and theoretical limitations of the visual binding network and in this context describe a novel network learning rule which is optimized for visual binding.
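
    The binding principle here, features that fluctuate together over time belong to the same object, can be illustrated by reading groupings off a temporal correlation matrix. Everything below is synthetic; the paper's learned inhibitory weight matrix is an analogue of this grouping, not reproduced by it.

    ```python
    import numpy as np

    # Toy binding-by-common-temporal-fluctuation demonstration.
    rng = np.random.default_rng(1)
    T = 500
    drive_a, drive_b = rng.standard_normal((2, T))  # two objects' dynamics
    feats = np.stack([drive_a, drive_a, drive_b, drive_b])  # e.g. color, motion
    feats = feats + 0.3 * rng.standard_normal((4, T))       # feature noise

    same_object = np.corrcoef(feats) > 0.5
    print(same_object.astype(int))  # block structure: features 0,1 vs. 2,3
    ```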

  18. LoopX: A Graphical User Interface-Based Database for Comprehensive Analysis and Comparative Evaluation of Loops from Protein Structures.

    PubMed

    Kadumuri, Rajashekar Varma; Vadrevu, Ramakrishna

    2017-10-01

    Due to their crucial role in function, folding, and stability, protein loops are being targeted for grafting/designing to create novel or alter existing functionality and improve stability and foldability. With a view to facilitate a thorough analysis and effectual search options for extracting and comparing loops for sequence and structural compatibility, we developed LoopX, a comprehensively compiled library of sequence and conformational features of ∼700,000 loops from protein structures. The database equipped with a graphical user interface is empowered with diverse query tools and search algorithms, with various rendering options to visualize the sequence- and structural-level information along with hydrogen bonding patterns, backbone φ, ψ dihedral angles of both the target and candidate loops. Two new features (i) conservation of the polar/nonpolar environment and (ii) conservation of sequence and conformation of specific residues within the loops have also been incorporated in the search and retrieval of compatible loops for a chosen target loop. Thus, the LoopX server not only serves as a database and visualization tool for sequence and structural analysis of protein loops but also aids in extracting and comparing candidate loops for a given target loop based on user-defined search options.

  19. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data.

    PubMed

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data.

  20. Evaluating a robust contour tracker on echocardiographic sequences.

    PubMed

    Jacob, G; Noble, J A; Mulet-Parada, M; Blake, A

    1999-03-01

    In this paper we present an evaluation of a robust visual image tracker on echocardiographic image sequences. We show how the tracking framework can be customized to define an appropriate shape space that describes heart shape deformations that can be learnt from a training data set. We also investigate energy-based temporal boundary enhancement methods to improve image feature measurement. Results are presented demonstrating real-time tracking on real normal heart motion data sequences and abnormal synthesized and real heart motion data sequences. We conclude by discussing some of our current research efforts.

  1. Spreadsheet macros for coloring sequence alignments.

    PubMed

    Haygood, M G

    1993-12-01

    This article describes a set of Microsoft Excel macros designed to color amino acid and nucleotide sequence alignments for review and preparation of visual aids. The colored alignments can then be modified to emphasize features of interest. Procedures for importing and coloring sequences are described. The macro file adds a new menu to the menu bar containing sequence-related commands to enable users unfamiliar with Excel to use the macros more readily. The macros were designed for use with Macintosh computers but will also run with the DOS version of Excel.
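
    The original Excel macros are not reproduced in the record, but the coloring step they describe is easy to sketch. The following Python analogue (using openpyxl rather than Excel's macro language; the three-class palette and the output file name are illustrative assumptions) writes one residue per cell and fills cells by residue class:

    ```python
    from openpyxl import Workbook
    from openpyxl.styles import PatternFill

    # Hypothetical three-class palette (not the article's exact scheme):
    # hydrophobic residues gold, basic residues blue, acidic residues red.
    FILLS = {
        "hydrophobic": PatternFill(start_color="FFFFD700", end_color="FFFFD700", fill_type="solid"),
        "basic":       PatternFill(start_color="FF6699FF", end_color="FF6699FF", fill_type="solid"),
        "acidic":      PatternFill(start_color="FFFF6666", end_color="FFFF6666", fill_type="solid"),
    }
    CLASS = {aa: "hydrophobic" for aa in "AVLIMFWYC"}
    CLASS.update({aa: "basic" for aa in "KRH"})
    CLASS.update({aa: "acidic" for aa in "DE"})

    alignment = ["MKV-LDEA", "MRV-LDKA", "MKVQLDEA"]   # toy alignment

    wb = Workbook()
    ws = wb.active
    for r, seq in enumerate(alignment, start=1):
        for c, aa in enumerate(seq, start=1):
            cell = ws.cell(row=r, column=c, value=aa)  # one residue per cell
            if aa in CLASS:
                cell.fill = FILLS[CLASS[aa]]
    wb.save("alignment_colored.xlsx")                  # illustrative output name
    ```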

  2. Deterministic object tracking using Gaussian ringlet and directional edge features

    NASA Astrophysics Data System (ADS)

    Krieger, Evan W.; Sidike, Paheding; Aspiras, Theus; Asari, Vijayan K.

    2017-10-01

    Challenges currently facing intensity-based histogram feature tracking methods in wide area motion imagery (WAMI) data include object structural information distortions, background variations, and object scale change. These issues are caused by different pavement or ground types and by changes in sensor or altitude. All of these challenges need to be overcome in order to have a robust object tracker while attaining a computation time appropriate for real-time processing. To achieve this, we present a novel method, the Directional Ringlet Intensity Feature Transform (DRIFT), which employs Kirsch kernel filtering for edge features and a ringlet feature mapping for rotational invariance. The method also includes an automatic scale change component to obtain accurate object boundaries and improvements for lowering computation times. We evaluated the DRIFT algorithm on two challenging WAMI datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness and efficiency. Additional evaluations on general tracking video sequences are performed using the Visual Tracker Benchmark and Visual Object Tracking 2014 databases to demonstrate the algorithm's ability to handle additional challenges in long, complex sequences, including scale change. Experimental results show that the proposed approach yields competitive results compared to state-of-the-art object tracking methods on the testing datasets.
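
    As a rough illustration of the Kirsch edge-feature stage named above (only that stage; the ringlet mapping, scale adaptation, and histogram machinery of DRIFT are not reproduced), a Python sketch with the standard eight compass kernels might look as follows:

    ```python
    import numpy as np
    from scipy.ndimage import convolve

    def kirsch_edges(img):
        """Maximum response over the eight Kirsch compass kernels."""
        k = np.array([[ 5.0,  5.0,  5.0],
                      [-3.0,  0.0, -3.0],
                      [-3.0, -3.0, -3.0]])          # north-facing kernel
        responses = []
        for _ in range(8):
            responses.append(convolve(img, k, mode="nearest"))
            # rotate the compass 45 degrees by cycling the kernel's border
            b = [k[0, 0], k[0, 1], k[0, 2], k[1, 2],
                 k[2, 2], k[2, 1], k[2, 0], k[1, 0]]
            b = b[-1:] + b[:-1]
            (k[0, 0], k[0, 1], k[0, 2], k[1, 2],
             k[2, 2], k[2, 1], k[2, 0], k[1, 0]) = b
        return np.max(responses, axis=0)

    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0                         # toy square "object"
    edges = kirsch_edges(img)                       # strong values on the boundary
    ```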

  3. NABIC: A New Access Portal to Search, Visualize, and Share Agricultural Genomics Data

    PubMed Central

    Seol, Young-Joo; Lee, Tae-Ho; Park, Dong-Suk; Kim, Chang-Kug

    2016-01-01

    The National Agricultural Biotechnology Information Center developed an access portal to search, visualize, and share agricultural genomics data with a focus on South Korean information and resources. The portal features an agricultural biotechnology database containing a wide range of omics data from public and proprietary sources. We collected 28.4 TB of data from 162 agricultural organisms, with 10 types of omics data comprising next-generation sequencing sequence read archive, genome, gene, nucleotide, DNA chip, expressed sequence tag, interactome, protein structure, molecular marker, and single-nucleotide polymorphism datasets. Our genomic resources contain information on five animals, seven plants, and one fungus, which is accessed through a genome browser. We also developed a data submission and analysis system as a web service, with easy-to-use functions and cutting-edge algorithms, including those for handling next-generation sequencing data. PMID:26848255

  4. The perception of visual images encoded in musical form: a study in cross-modality information transfer.

    PubMed Central

    Cronly-Dillon, J; Persaud, K; Gregory, R P

    1999-01-01

    This study demonstrates the ability of blind (previously sighted) and blindfolded (sighted) subjects to reconstruct and identify a number of visual targets transformed into equivalent musical representations. Visual images are deconstructed through a process which selectively segregates different features of the image into separate packages. These are then encoded in sound and presented as a polyphonic musical melody which resembles a Baroque fugue with many voices, allowing subjects to analyse the component voices selectively in combination, or separately in sequence, in a manner which allows a subject to patch together and bind the different features of the object into a mental percept of a single recognizable entity. The visual targets used in this study included a variety of geometrical figures, simple high-contrast line drawings of man-made objects, natural and urban scenes, etc., translated into sound and presented to the subject in polyphonic musical form. PMID:10643086

  5. Using VMD - An Introductory Tutorial

    PubMed Central

    Hsin, Jen; Arkhipov, Anton; Yin, Ying; Stone, John E.; Schulten, Klaus

    2010-01-01

    VMD (Visual Molecular Dynamics) is a molecular visualization and analysis program designed for biological systems such as proteins, nucleic acids, lipid bilayer assemblies, etc. This unit will serve as an introductory VMD tutorial. We will present several step-by-step examples of some of VMD’s most popular features, including visualizing molecules in three dimensions with different drawing and coloring methods, rendering publication-quality figures, animating and analyzing the trajectory of a molecular dynamics simulation, scripting in the text-based Tcl/Tk interface, and analyzing both sequence and structure data for proteins. PMID:19085979

  6. Kablammo: an interactive, web-based BLAST results visualizer.

    PubMed

    Wintersinger, Jeff A; Wasmuth, James D

    2015-04-15

    Kablammo is a web-based application that produces interactive, vector-based visualizations of sequence alignments generated by BLAST. These visualizations can illustrate many features, including shared protein domains, chromosome structural modifications and genome misassembly. Kablammo can be used at http://kablammo.wasmuthlab.org. For a local installation, the source code and instructions are available under the MIT license at http://github.com/jwintersinger/kablammo. Contact: jeff@wintersinger.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
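
    The essential drawing idea, parsing BLAST's tabular output and joining each HSP's query and subject intervals, can be sketched outside the browser. The following Python fragment (with made-up sample rows; Kablammo itself is JavaScript and SVG-based) shades one quadrilateral per HSP:

    ```python
    import matplotlib.pyplot as plt

    # Sample HSPs in BLAST tabular (-outfmt 6) column order: qseqid sseqid
    # pident length mismatch gapopen qstart qend sstart send evalue bitscore.
    rows = [
        "q1\ts1\t98.5\t400\t6\t0\t100\t500\t150\t550\t1e-180\t640",
        "q1\ts1\t91.2\t250\t22\t1\t700\t950\t800\t1050\t1e-90\t320",
        "q1\ts1\t88.0\t120\t14\t2\t1200\t1320\t60\t180\t1e-30\t110",
    ]
    hsps = [line.split("\t") for line in rows]

    fig, ax = plt.subplots(figsize=(8, 2.5))
    for f in hsps:
        qs, qe, ss, se = int(f[6]), int(f[7]), int(f[8]), int(f[9])
        bits = float(f[11])
        # Query on the top track, subject on the bottom; each HSP becomes a
        # shaded quadrilateral joining its two intervals, darker with score.
        ax.fill([qs, qe, se, ss], [1, 1, 0, 0], color="steelblue",
                alpha=min(1.0, bits / 640.0))
    ax.set_yticks([0, 1])
    ax.set_yticklabels(["subject", "query"])
    ax.set_xlabel("sequence coordinate")
    fig.savefig("hsps.png")                     # illustrative output name
    ```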

  7. CRAWview: for viewing splicing variation, gene families, and polymorphism in clusters of ESTs and full-length sequences.

    PubMed

    Chou, A; Burke, J

    1999-05-01

    DNA sequence clustering has become a valuable method in support of gene discovery and gene expression analysis. Our interest lies in leveraging the sequence diversity within clusters of expressed sequence tags (ESTs) to model gene structure for the study of gene variants that arise from, among other things, alternative mRNA splicing, polymorphism, and divergence after gene duplication, fusion, and translocation events. In previous work, CRAW was developed to discover gene variants from assembled clusters of ESTs. Most importantly, novel gene features (the differing units between gene variants, for example alternative exons, polymorphisms, transposable elements, etc.) that are specialized to tissue, disease, population, or developmental states can be identified when these tools collate DNA source information with gene variant discrimination. While the goal is complete automation of novel feature and gene variant detection, current methods are far from perfect, and hence the development of effective tools for visualization and exploratory data analysis is of paramount importance in the process of sifting through candidate genes and validating targets. We present CRAWview, a Java-based visualization extension to CRAW. Features that vary between gene forms are displayed using an automatically generated color-coded index. The reporting format of CRAWview gives a brief, high-level summary report to display overlap and divergence within clusters of sequences as well as the ability to 'drill down' and see detailed information concerning regions of interest. Additionally, the alignment viewing and editing capabilities of CRAWview make it possible to interactively correct frame-shifts and otherwise edit cluster assemblies. We have implemented CRAWview as a Java application across Windows NT/95 and UNIX platforms. A beta version of CRAWview will be freely available to academic users from Pangea Systems (http://www.pangeasystems.com).

  8. Precision of working memory for visual motion sequences and transparent motion surfaces

    PubMed Central

    Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud

    2012-01-01

    Recent studies investigating working memory for location, colour and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (Zhang & Luck, 2008 vs. Bays et al., 2009). The latter provided a significantly better fit for the data, revealing that decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence, rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features. PMID:22135378

  9. Online tracking of outdoor lighting variations for augmented reality with moving cameras.

    PubMed

    Liu, Yanli; Granier, Xavier

    2012-04-01

    In augmented reality, one of the key tasks needed to achieve a convincing visual appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable points. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
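
    Stripped of the feature tracking and the coherence constraints, the per-frame estimation reduces to a small linear inverse problem. A toy Python sketch under a two-term sun/sky intensity model (the coefficients a and b are assumed known here, which the paper does not assume) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy two-light model: the observed intensity of planar feature point i
    # is a[i] * I_sun + b[i] * I_sky, where a bundles albedo, sun visibility
    # and foreshortening, and b the sky term. Here a and b are assumed known;
    # the paper estimates everything from the video itself.
    n = 200
    a = rng.random(n)                     # sunlight coefficients per point
    b = rng.random(n)                     # skylight coefficients per point
    I_sun_true, I_sky_true = 3.0, 1.2
    obs = a * I_sun_true + b * I_sky_true + 0.05 * rng.standard_normal(n)

    # Linear least squares for this frame's relative sun/sky intensities.
    A = np.column_stack([a, b])
    (I_sun, I_sky), *_ = np.linalg.lstsq(A, obs, rcond=None)
    print(f"estimated sun {I_sun:.2f}, sky {I_sky:.2f}")   # close to 3.0 / 1.2
    ```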

  10. ReadXplorer—visualization and analysis of mapped sequences

    PubMed Central

    Hilker, Rolf; Stadermann, Kai Bernd; Doppmeier, Daniel; Kalinowski, Jörn; Stoye, Jens; Straube, Jasmin; Winnebald, Jörn; Goesmann, Alexander

    2014-01-01

    Motivation: Fast algorithms and well-arranged visualizations are required for the comprehensive analysis of the ever-growing size of genomic and transcriptomic next-generation sequencing data. Results: ReadXplorer is a software offering straightforward visualization and extensive analysis functions for genomic and transcriptomic DNA sequences mapped on a reference. A unique specialty of ReadXplorer is the quality classification of the read mappings. It is incorporated in all analysis functions and displayed in ReadXplorer's various synchronized data viewers for (i) the reference sequence, its base coverage as (ii) normalizable plot and (iii) histogram, (iv) read alignments and (v) read pairs. ReadXplorer's analysis capability covers RNA secondary structure prediction, single nucleotide polymorphism and deletion–insertion polymorphism detection, genomic feature and general coverage analysis. Especially for RNA-Seq data, it offers differential gene expression analysis, transcription start site and operon detection as well as RPKM value and read count calculations. Furthermore, ReadXplorer can combine or superimpose coverage of different datasets. Availability and implementation: ReadXplorer is available as open-source software at http://www.readxplorer.org along with a detailed manual. Contact: rhilker@mikrobio.med.uni-giessen.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24790157

  11. The grammar of visual narrative: Neural evidence for constituent structure in sequential image comprehension.

    PubMed

    Cohn, Neil; Jackendoff, Ray; Holcomb, Phillip J; Kuperberg, Gina R

    2014-11-01

    Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives. We recorded neural responses as participants viewed sequences of visual images (comics strips) in which blank images either disrupted individual narrative constituents or fell at natural constituent boundaries. A disruption of either the first or the second narrative constituent produced a left-lateralized anterior negativity effect between 500 and 700ms. Disruption of the second constituent also elicited a posteriorly-distributed positivity (P600) effect. These neural responses are similar to those associated with structural violations in language and music. These findings provide evidence that comprehenders use a narrative structure to comprehend visual sequences and that the brain engages similar neurocognitive mechanisms to build structure across multiple domains. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. The grammar of visual narrative: Neural evidence for constituent structure in sequential image comprehension

    PubMed Central

    Cohn, Neil; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.

    2014-01-01

    Constituent structure has long been established as a central feature of human language. Analogous to how syntax organizes words in sentences, a narrative grammar organizes sequential images into hierarchic constituents. Here we show that the brain draws upon this constituent structure to comprehend wordless visual narratives. We recorded neural responses as participants viewed sequences of visual images (comics strips) in which blank images either disrupted individual narrative constituents or fell at natural constituent boundaries. A disruption of either the first or the second narrative constituent produced a left-lateralized anterior negativity effect between 500-700ms. Disruption of the second constituent also elicited a posteriorly-distributed positivity (P600) effect. These neural responses are similar to those associated with structural violations in language and music. These findings provide evidence that comprehenders use a narrative structure to comprehend visual sequences and that the brain engages similar neurocognitive mechanisms to build structure across multiple domains. PMID:25241329

  13. chimeraviz: a tool for visualizing chimeric RNA.

    PubMed

    Lågstad, Stian; Zhao, Sen; Hoff, Andreas M; Johannessen, Bjarne; Lingjærde, Ole Christian; Skotheim, Rolf I

    2017-09-15

    Advances in high-throughput RNA sequencing have enabled more efficient detection of fusion transcripts, but the technology and associated software used for fusion detection from sequencing data often yield a high false discovery rate. Good prioritization of the results is important, and this can be helped by a visualization framework that automatically integrates RNA data with known genomic features. Here we present chimeraviz, a Bioconductor package that automates the creation of chimeric RNA visualizations. The package supports input from nine different fusion-finder tools: deFuse, EricScript, InFusion, JAFFA, FusionCatcher, FusionMap, PRADA, SOAPfuse and STAR-FUSION. chimeraviz is an R package available via Bioconductor (https://bioconductor.org/packages/release/bioc/html/chimeraviz.html) under Artistic-2.0. Source code and support are available at GitHub (https://github.com/stianlagstad/chimeraviz). Contact: rolf.i.skotheim@rr-research.no. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  14. Personal sleep pattern visualization using sequence-based kernel self-organizing map on sound data.

    PubMed

    Wu, Hongle; Kato, Takafumi; Yamada, Tomomi; Numao, Masayuki; Fukui, Ken-Ichi

    2017-07-01

    We propose a method to discover sleep patterns via clustering of sound events recorded during sleep. The proposed method extends the conventional self-organizing map algorithm by kernelization and sequence-based technologies to obtain a fine-grained map that visualizes the distribution and changes of sleep-related events. We introduced features widely applied in sound processing and popular kernel functions to the proposed method to evaluate and compare performance. The proposed method provides a new aspect of sleep monitoring because the results demonstrate that sound events can be directly correlated to an individual's sleep patterns. In addition, by visualizing the transition of cluster dynamics, sleep-related sound events were found to relate to the various stages of sleep. Therefore, these results empirically warrant future study into the assessment of personal sleep quality using sound data. Copyright © 2017 Elsevier B.V. All rights reserved.
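
    For readers unfamiliar with the underlying machinery, a plain self-organizing map is easy to sketch; the paper's kernelized, sequence-based extension is not reproduced here. A minimal numpy version, with hypothetical sound-event feature vectors standing in for the real acoustic features:

    ```python
    import numpy as np

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Plain SOM; the paper's kernelized, sequence-aware variant differs."""
        rng = np.random.default_rng(seed)
        h, w = grid
        weights = rng.random((h, w, data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in rng.permutation(data):
                dist = np.linalg.norm(weights - x, axis=2)       # map distances
                bmu = np.unravel_index(np.argmin(dist), (h, w))  # best-matching unit
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)                          # decaying rate
                sigma = sigma0 * (1.0 - frac) + 0.5              # shrinking radius
                g = np.exp(-((coords - bmu) ** 2).sum(2) / (2.0 * sigma ** 2))
                weights += lr * g[..., None] * (x - weights)     # pull neighborhood
                step += 1
        return weights

    # Hypothetical sound-event features (e.g. MFCC-like vectors per event).
    events = np.random.default_rng(2).random((300, 13))
    som = train_som(events)           # an 8x8 map organizing the events
    ```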

  15. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

    The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At the low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of the current input, consistent with earlier studies. Our finding that for slower image sequences V1 no longer reports actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889

  16. Stabilized display of coronary x-ray image sequences

    NASA Astrophysics Data System (ADS)

    Close, Robert A.; Whiting, James S.; Da, Xiaolin; Eigler, Neal L.

    2004-05-01

    Display stabilization is a technique by which a feature of interest in a cine image sequence is tracked and then shifted to remain approximately stationary on the display device. Prior simulations indicate that display stabilization with high playback rates (30 f/s) can significantly improve detectability of low-contrast features in coronary angiograms. Display stabilization may also help to improve the accuracy of intra-coronary device placement. We validated our automated tracking algorithm by comparing the inter-frame difference (jitter) between manual and automated tracking of 150 coronary x-ray image sequences acquired on a digital cardiovascular X-ray imaging system with CsI/a-Si flat panel detector. We find that the median (50%) inter-frame jitter between manual and automatic tracking is 1.41 pixels or less, indicating a jump no further than an adjacent pixel. This small jitter implies that automated tracking and manual tracking should yield similar improvements in the performance of most visual tasks. We hypothesize that cardiologists would perceive a benefit in viewing the stabilized display as an addition to the standard playback of cine recordings. A benefit of display stabilization was identified in 87 of 101 sequences (86%). The most common tasks cited were evaluation of stenosis and determination of stent and balloon positions. We conclude that display stabilization offers perceptible improvements in the performance of visual tasks by cardiologists.
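
    The display-shift step itself is simple once per-frame feature positions are available from a tracker (manual or automatic). A Python sketch of that step alone, on a synthetic drifting blob, assuming the positions are given:

    ```python
    import numpy as np
    from scipy.ndimage import shift

    def stabilize(frames, tracked_rc, anchor=None):
        """Shift each frame so a tracked feature stays put on the display.

        frames: (T, H, W) array; tracked_rc: (T, 2) per-frame (row, col)
        positions from any tracker. Only the display-shift step is shown;
        the tracking itself is assumed given.
        """
        anchor = tracked_rc[0] if anchor is None else np.asarray(anchor, float)
        out = np.empty_like(frames)
        for t, frame in enumerate(frames):
            dr, dc = anchor - tracked_rc[t]
            out[t] = shift(frame, (dr, dc), order=1, mode="nearest")
        return out

    # Toy sequence: a bright blob drifting diagonally across the frame.
    T, H, W = 30, 64, 64
    frames = np.zeros((T, H, W))
    pos = np.stack([np.linspace(20, 40, T), np.linspace(20, 40, T)], axis=1)
    for t, (r, c) in enumerate(pos):
        frames[t, int(r) - 2:int(r) + 3, int(c) - 2:int(c) + 3] = 1.0
    stabilized = stabilize(frames, pos)   # the blob now stays near (20, 20)
    ```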

  17. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. Visual information is also important for recognizing speech automatically: people understand speech not only from audio information but also from visual cues such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information alone, without any audio input. First, an ASM (Active Shape Model) is used to track and detect the face and lips in a video sequence. Second, shape, optical flow, and spatial-frequency features are extracted from the lip region detected by the ASM. The extracted multimodal features are then ordered chronologically, and a Support Vector Machine is trained to classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
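
    A skeleton of the classification stage, chronologically ordering per-frame features and training a support vector machine, can be sketched as follows. The random per-frame vectors stand in for the ASM shape, optical flow, and spatial-frequency features; everything here is an illustrative assumption rather than the authors' pipeline:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)

    # Hypothetical per-frame lip features: word-specific random prototypes
    # stand in for the real ASM/optical-flow/spatial-frequency descriptors.
    n_words, n_samples, n_frames, n_feat = 5, 40, 12, 20
    X, y = [], []
    for word in range(n_words):
        proto = rng.random((n_frames, n_feat))             # word-specific pattern
        for _ in range(n_samples):
            seq = proto + 0.1 * rng.standard_normal((n_frames, n_feat))
            X.append(seq.ravel())                          # chronological concat
            y.append(word)
    X, y = np.array(X), np.array(y)

    # Hold out a quarter of the utterances and classify with an SVM.
    idx = rng.permutation(len(X))
    tr, te = idx[:150], idx[150:]
    clf = SVC(kernel="rbf").fit(X[tr], y[tr])
    print("held-out accuracy:", clf.score(X[te], y[te]))
    ```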

  18. 3DScapeCS: application of three dimensional, parallel, dynamic network visualization in Cytoscape

    PubMed Central

    2013-01-01

    Background The exponential growth of gigantic biological data from various sources, such as protein-protein interaction (PPI), genome sequence scaffolding, mass spectrometry (MS) molecular networking, and metabolic flux, demands an efficient way to achieve better visualization and interpretation beyond conventional, two-dimensional visualization tools. Results We developed a 3D Cytoscape Client/Server (3DScapeCS) plugin, which adopts Cytoscape for interpreting different types of data and UbiGraph for three-dimensional visualization. The extra dimension is useful for accommodating, visualizing, and distinguishing large-scale networks with multiple crossed connections in five case studies. Conclusions Evaluation on several experimental datasets using 3DScapeCS and its special features, including multilevel graph layout, time-course data animation, and parallel visualization, has proven its usefulness in visualizing complex data and helping to draw insightful conclusions. PMID:24225050

  19. Visualising Learning Design in LAMS: A Historical View

    ERIC Educational Resources Information Center

    Dalziel, James

    2011-01-01

    The Learning Activity Management System (LAMS) provides a web-based environment for the creation, sharing, running and monitoring of Learning Designs. A central feature of LAMS is the visual authoring environment, where educators use a drag-and-drop environment to create sequences of learning activities. The visualisation is based on boxes…

  20. Learning and Visualizing Modulation Discriminative Radio Signal Features

    DTIC Science & Technology

    2016-09-01

    implemented as a mapping of a sequence of in-phase quadrature (IQ) measurements generated by a software-defined radio to a probability distribution over modulation classes. Training CNNs on RF data raises the unique question of determining an optimal training SNR.

  1. MSAViewer: interactive JavaScript visualization of multiple sequence alignments.

    PubMed

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana

    2016-11-15

    The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.

  2. MSAViewer: interactive JavaScript visualization of multiple sequence alignments

    PubMed Central

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E.; Rost, Burkhard; Goldberg, Tatyana

    2016-01-01

    Summary: The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is ‘web ready’: written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. Availability and Implementation: The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh PMID:27412096

  3. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualizing biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a set of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes and applying optimal space-time evolution rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features, originally hidden in a long and complicated biological sequence, can be clearly revealed through its cellular automata image. With the number of biological sequences entering databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigating their key features, identifying their functions, and revealing their "fingerprints". It is also anticipated that, by using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach can be used to improve the quality of predicting protein attributes, such as structural class and subcellular location.
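
    A minimal version of the transformation is easy to sketch: map each symbol to digits, then evolve a one-dimensional cellular automaton so the sequence unfolds into a two-dimensional image. The Python fragment below uses elementary rule 90 on a DNA alphabet purely for illustration; the paper searches for optimal evolution rules rather than fixing one:

    ```python
    import numpy as np

    RULE = 90                                   # illustrative elementary CA rule
    RULE_BITS = [(RULE >> i) & 1 for i in range(8)]

    def ca_image(seq, steps=64):
        """Unfold a DNA sequence into a 2-D cellular automata image."""
        code = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}
        row = np.array([b for s in seq for b in code[s]], dtype=np.uint8)
        img = [row]
        for _ in range(steps - 1):
            left, right = np.roll(row, 1), np.roll(row, -1)
            row = np.array([RULE_BITS[(l << 2) | (c << 1) | r]
                            for l, c, r in zip(left, row, right)], dtype=np.uint8)
            img.append(row)
        return np.stack(img)                    # each row is one evolution step

    img = ca_image("ATGGCGTACGCTAGCTTACGGATC")  # toy sequence -> 64 x 48 image
    ```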

  4. A Simple Model-Based Approach to Inferring and Visualizing Cancer Mutation Signatures

    PubMed Central

    Shiraishi, Yuichi; Tremmel, Georg; Miyano, Satoru; Stephens, Matthew

    2015-01-01

    Recent advances in sequencing technologies have enabled the production of massive amounts of data on somatic mutations from cancer genomes. These data have led to the detection of characteristic patterns of somatic mutations or “mutation signatures” at an unprecedented resolution, with the potential for new insights into the causes and mechanisms of tumorigenesis. Here we present new methods for modelling, identifying and visualizing such mutation signatures. Our methods greatly simplify mutation signature models compared with existing approaches, reducing the number of parameters by orders of magnitude even while increasing the contextual factors (e.g. the number of flanking bases) that are accounted for. This improves both sensitivity and robustness of inferred signatures. We also provide a new intuitive way to visualize the signatures, analogous to the use of sequence logos to visualize transcription factor binding sites. We illustrate our new method on somatic mutation data from urothelial carcinoma of the upper urinary tract, and a larger dataset from 30 diverse cancer types. The results illustrate several important features of our methods, including the ability of our new visualization tool to clearly highlight the key features of each signature, the improved robustness of signature inferences from small sample sizes, and more detailed inference of signature characteristics such as strand biases and sequence context effects at the base two positions 5′ to the mutated site. The overall framework of our work is based on probabilistic models that are closely connected with “mixed-membership models” which are widely used in population genetic admixture analysis, and in machine learning for document clustering. We argue that recognizing these relationships should help improve understanding of mutation signature extraction problems, and suggests ways to further improve the statistical methods. Our methods are implemented in an R package pmsignature (https://github.com/friend1ws/pmsignature) and a web application available at https://friend1ws.shinyapps.io/pmsignature_shiny/. PMID:26630308

  5. Enhancing links between visual short term memory, visual attention and cognitive control processes through practice: An electrophysiological insight.

    PubMed

    Fuggetta, Giorgio; Duke, Philip A

    2017-05-01

    The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discussed these three components as representing different neurocognitive systems modulated with practice within which the input selection process operates. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  6. A powerful graphical pulse sequence programming tool for magnetic resonance imaging.

    PubMed

    Jie, Shen; Ying, Liu; Jianqi, Li; Gengying, Li

    2005-12-01

    A powerful graphical pulse sequence programming tool has been designed for creating magnetic resonance imaging (MRI) applications. It allows rapid development of pulse sequences in graphical mode (allowing for the visualization of sequences) and consists of three modules: a graphical sequence editor, a parameter management module, and a sequence compiler. Its key features are ease of use, flexibility, and hardware independence. When graphic elements are combined with certain text expressions, graphical pulse sequence programming is as flexible as a text-based programming tool. In addition, a hardware-independent design is implemented by using a two-step compilation strategy. To demonstrate the flexibility and capability of this graphical sequence programming tool, a multi-slice fast spin echo experiment was performed on our home-made 0.3 T permanent magnet MRI system.

  7. GATA: A graphic alignment tool for comparative sequence analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nix, David A.; Eisen, Michael B.

    2005-01-01

    Several problems exist with current methods used to align DNA sequences for comparative sequence analysis. Most dynamic programming algorithms assume that conserved sequence elements are collinear. This assumption appears valid when comparing orthologous protein coding sequences. Functional constraints on proteins provide strong selective pressure against sequence inversions, and minimize sequence duplications and feature shuffling. For non-coding sequences this collinearity assumption is often invalid. For example, enhancers contain clusters of transcription factor binding sites that change in number, orientation, and spacing during evolution, yet the enhancer retains its activity. Dotplot analysis is often used to estimate non-coding sequence relatedness. Yet dot plots do not actually align sequences and thus cannot account well for base insertions or deletions. Moreover, they lack an adequate statistical framework for comparing sequence relatedness and are limited to pairwise comparisons. Lastly, dot plots and dynamic programming text outputs fail to provide an intuitive means for visualizing DNA alignments.
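
    For concreteness, the dotplot representation being criticized can be built in a few lines: a dot wherever a k-mer of one sequence exactly matches a k-mer of the other, with no alignment or statistics. A minimal Python sketch:

    ```python
    import numpy as np

    def dotplot(seq1, seq2, k=6):
        """Dot wherever a k-mer of seq1 exactly matches a k-mer of seq2."""
        index = {}
        for j in range(len(seq2) - k + 1):
            index.setdefault(seq2[j:j + k], []).append(j)
        mat = np.zeros((len(seq1) - k + 1, len(seq2) - k + 1), dtype=bool)
        for i in range(len(seq1) - k + 1):
            for j in index.get(seq1[i:i + k], []):
                mat[i, j] = True
        return mat

    m = dotplot("ACGTACGTTTGACCA" * 3, "TTGACCAACGTACGT" * 3)
    print(m.sum(), "matching word pairs")   # diagonals mark shared segments
    ```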

  8. Precision of working memory for visual motion sequences and transparent motion surfaces.

    PubMed

    Zokaei, Nahid; Gorgoraptis, Nikos; Bahrami, Bahador; Bays, Paul M; Husain, Masud

    2011-12-01

    Recent studies investigating working memory for location, color, and orientation support a dynamic resource model. We examined whether this might also apply to motion, using random dot kinematograms (RDKs) presented sequentially or simultaneously. Mean precision for motion direction declined as sequence length increased, with precision being lower for earlier RDKs. Two alternative models of working memory were compared specifically to distinguish between the contributions of different sources of error that corrupt memory (W. Zhang & S. J. Luck, 2008 vs. P. M. Bays, R. F. G. Catalao, & M. Husain, 2009). The latter provided a significantly better fit for the data, revealing that decrease in memory precision for earlier items is explained by an increase in interference from other items in a sequence rather than random guessing or a temporal decay of information. Misbinding feature attributes is an important source of error in working memory. Precision of memory for motion direction decreased when two RDKs were presented simultaneously as transparent surfaces, compared to sequential RDKs. However, precision was enhanced when one motion surface was prioritized, demonstrating that selective attention can improve recall precision. These results are consistent with a resource model that can be used as a general conceptual framework for understanding working memory across a range of visual features.

  9. A visual tracking method based on deep learning without online model updating

    NASA Astrophysics Data System (ADS)

    Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei

    2018-02-01

    The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradients) feature are combined to select the tracking object. In the process of tracking, a multi-scale object searching map is built to improve the detection performance of the deep detection model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, the proposed method is compared with six state-of-the-art methods and shows better robustness to challenging tracking factors such as deformation, scale variation, rotation variation, illumination variation, and background clutter; moreover, its overall performance is better than that of the other six tracking methods.
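
    The candidate-selection idea, scoring patches by a combination of color-histogram and HOG similarity, can be sketched independently of the SSD detector. The weighting and the grayscale simplification below are assumptions, not the paper's exact formulation:

    ```python
    import numpy as np
    from skimage.feature import hog

    def appearance_score(patch, ref_hist, ref_hog):
        # Intensity histogram intersection (a grayscale stand-in for the
        # paper's color histogram) plus cosine similarity of HOG vectors;
        # the 50/50 weighting is an assumption.
        hist, _ = np.histogram(patch, bins=16, range=(0.0, 1.0), density=True)
        hist_sim = np.minimum(hist, ref_hist).sum() / (ref_hist.sum() + 1e-9)
        h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        hog_sim = h @ ref_hog / (np.linalg.norm(h) * np.linalg.norm(ref_hog) + 1e-9)
        return 0.5 * hist_sim + 0.5 * hog_sim

    rng = np.random.default_rng(4)
    ref = rng.random((32, 32))                          # reference object patch
    ref_hist, _ = np.histogram(ref, bins=16, range=(0.0, 1.0), density=True)
    ref_hog = hog(ref, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

    # A perturbed copy of the reference should outscore an unrelated patch.
    candidates = [np.clip(ref + 0.05 * rng.standard_normal((32, 32)), 0, 1),
                  rng.random((32, 32))]
    print([appearance_score(c, ref_hist, ref_hog) for c in candidates])
    ```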

  10. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  11. Predicting DNA binding proteins using support vector machine with hybrid fractal features.

    PubMed

    Niu, Xiao-Hui; Hu, Xue-Hai; Shi, Feng; Xia, Jing-Bo

    2014-02-21

    DNA-binding proteins play a vitally important role in many biological processes. Prediction of DNA-binding proteins from amino acid sequence is a significant but not yet fully resolved scientific problem. Chaos game representation (CGR) investigates the patterns hidden in protein sequences and visually reveals previously unknown structure. Fractal dimensions (FD) are good tools for measuring the sizes of complex, highly irregular geometric objects. In order to extract the intrinsic correlation with DNA-binding properties from protein sequences, the CGR algorithm, fractal dimension, and amino acid composition are applied to formulate the numerical features of protein samples in this paper. Seven groups of features are extracted, all of which can be computed directly from the primary sequence, and each group is evaluated by the 10-fold cross-validation test and the jackknife test. Comparing the results of the numerical experiments, the group combining amino acid composition and fractal dimension (a 21-dimensional vector) gives the best result: the average accuracy is 81.82% and the average Matthews correlation coefficient (MCC) is 0.6017. The resulting predictor is also compared with the existing method DNA-Prot and shows better performance. © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
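
    A compact sketch of the two feature ingredients is given below: a chaos game representation in its classic four-vertex form (illustrated on a nucleotide alphabet; the paper adapts CGR to amino-acid sequences) and a box-counting estimate of fractal dimension over the resulting point set:

    ```python
    import numpy as np

    CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

    def cgr_points(seq):
        """Chaos game: walk halfway toward the corner of each successive symbol."""
        p = np.array([0.5, 0.5])
        pts = np.empty((len(seq), 2))
        for i, s in enumerate(seq):
            p = (p + np.asarray(CORNERS[s])) / 2.0
            pts[i] = p
        return pts

    def box_counting_dimension(pts, scales=(2, 4, 8, 16, 32)):
        """Slope of log(occupied boxes) against log(scale)."""
        counts = [len({tuple(b) for b in np.floor(pts * s).astype(int)})
                  for s in scales]
        slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
        return slope

    seq = "".join(np.random.default_rng(5).choice(list("ACGT"), 5000))
    fd = box_counting_dimension(cgr_points(seq))   # near 2 for a random sequence
    ```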

  12. In Vivo Visualization of Alzheimer’s Amyloid Plaques by MRI in Transgenic Mice Without a Contrast Agent

    PubMed Central

    Jack, Clifford R.; Garwood, Michael; Wengenack, Thomas M.; Borowski, Bret; Curran, Geoffrey L.; Lin, Joseph; Adriany, Gregor; Grohn, Olli H.J.; Grimm, Roger; Poduslo, Joseph F.

    2009-01-01

    One of the cardinal pathologic features of Alzheimer’s disease (AD) is the formation of senile, or amyloid, plaques. Transgenic mice have been developed that express one or more of the genes responsible for familial AD in humans. Doubly transgenic mice develop “human-like” plaques, providing a mechanism to study amyloid plaque biology in a controlled manner. Imaging of labeled plaques has been accomplished with other modalities, but only MRI has sufficient spatial and contrast resolution to visualize individual plaques non-invasively. Methods to optimize visualization of plaques in vivo in transgenic mice at 9.4 T using a spin echo sequence based on adiabatic pulses are described. Preliminary results indicate that a spin echo acquisition more accurately reflects plaque size, while a T2*-weighted gradient echo sequence reflects plaque iron content, not plaque size. Correlations among in vivo MRI, ex vivo MRI, and in vitro histology are provided. Histologically verified plaques as small as 50 μm in diameter were visualized in the living animal. To our knowledge this work represents the first demonstration of non-invasive in vivo visualization of individual AD plaques without the use of a contrast agent. PMID:15562496

  13. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

    Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either in the typical English word order sequence Subject-Verb-Object (SVO) or in the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions by using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  14. GeneWiz browser: An Interactive Tool for Visualizing Sequenced Chromosomes.

    PubMed

    Hallin, Peter F; Stærfeldt, Hans-Henrik; Rotenberg, Eva; Binnewies, Tim T; Benham, Craig J; Ussery, David W

    2009-09-25

    We present an interactive web application for visualizing genomic data of prokaryotic chromosomes. The tool (GeneWiz browser) allows users to carry out various analyses such as mapping alignments of homologous genes to other genomes, mapping of short sequencing reads to a reference chromosome, and calculating DNA properties such as curvature or stacking energy along the chromosome. The GeneWiz browser produces an interactive graphic that enables zooming from a global scale down to single nucleotides, without changing the size of the plot. Its ability to zoom disproportionally provides optimal readability and increased functionality compared to other browsers. The tool allows the user to select the display of various genomic features, color settings, and data ranges. Custom numerical data can be added to the plot allowing, for example, visualization of gene expression and regulation data. Further, standard atlases are pre-generated for all prokaryotic genomes available in GenBank, providing a fast overview of all available genomes, including recently deposited genome sequences. The tool is available online from http://www.cbs.dtu.dk/services/gwBrowser. Supplemental material including interactive atlases is available online at http://www.cbs.dtu.dk/services/gwBrowser/suppl/.

  15. Similarity as an organising principle in short-term memory.

    PubMed

    LeCompte, D C; Watkins, M J

    1993-03-01

    The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the 'target' item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.

  16. The Emergence of Visual Awareness: Temporal Dynamics in Relation to Task and Mask Type

    PubMed Central

    Kiefer, Markus; Kammer, Thomas

    2017-01-01

    One aspect of consciousness phenomena, the temporal emergence of visual awareness, has been the subject of a controversial debate. How can visual awareness, that is, the experiential quality of visual stimuli, be characterized best? Is there a sharp discontinuous or dichotomous transition between unaware and fully aware states, or does awareness emerge gradually, encompassing intermediate states? Previous studies yielded conflicting results and supported both dichotomous and gradual views. It is well conceivable that these conflicting results are more than noise, but reflect the dynamic nature of the temporal emergence of visual awareness. Using a psychophysical approach, the present research tested whether the emergence of visual awareness is context-dependent with a temporal two-alternative forced choice task. During backward masking of word targets, it was assessed whether the relative temporal sequence of stimulus thresholds is modulated by the task (stimulus presence, letter case, lexical decision, and semantic category) and by mask type. Four masks with different similarity to the target features were created. Psychophysical functions were then fitted to the accuracy data in the different task conditions as a function of the stimulus-mask SOA in order to determine the inflection point (conscious threshold of each feature) and slope of the psychophysical function (transition from unaware to aware within each feature). Depending on feature-mask similarity, thresholds in the different tasks were either highly dispersed, suggesting a graded transition from unawareness to awareness, or less differentiated, indicating that clusters of features probed by the tasks contribute to the percept quite simultaneously. The latter observation, although not compatible with the notion of a sharp all-or-none transition between unaware and aware states, suggests a less gradual or more discontinuous emergence of awareness. Analyses of the slopes of the fitted psychophysical functions also indicated that the emergence of awareness of single features is variable and might be influenced by the continuity of the feature dimensions. The present work thus suggests that the emergence of awareness is neither purely gradual nor dichotomous, but highly dynamic, depending on the task and mask type. PMID:28316583

  17. Gee Fu: a sequence version and web-services database tool for genomic assembly, genome feature and NGS data.

    PubMed

    Ramirez-Gonzalez, Ricardo; Caccamo, Mario; MacLean, Daniel

    2011-10-01

    Scientists now use high-throughput sequencing technologies and short-read assembly methods to create draft genome assemblies in just days. Tools such as assemblers and workflow management environments make it easy for a non-specialist to implement complicated pipelines to produce genome assemblies and annotations very quickly. Such accessibility results in a proliferation of assemblies and associated files, often for many organisms. These assemblies are used as a working reference by many different workers, from a bioinformatician doing gene prediction to a bench scientist designing primers for PCR. Here we describe Gee Fu, a database tool for genomic assembly and feature data, including next-generation sequence alignments. Gee Fu is a Ruby-on-Rails web application on a feature database that provides web and console interfaces for input, visualization of feature data via AnnoJ, access to data through a web-service interface, an API for direct data access by Ruby scripts, and access to feature data stored in BAM files. Gee Fu provides a platform for storing and sharing different versions of an assembly and associated features that can be accessed and updated by bench biologists and bioinformaticians in ways that are easy and useful for each. Available at http://tinyurl.com/geefu. Contact: dan.maclean@tsl.ac.uk.

  18. Using GBrowse 2.0 to visualize and share next-generation sequence data

    PubMed Central

    2013-01-01

    GBrowse is a mature web-based genome browser that is suitable for deployment on both public and private web sites. It supports most standard genome browser features, including qualitative and quantitative (wiggle) tracks, track uploading, track sharing, interactive track configuration, semantic zooming and limited smooth track panning. As of version 2.0, GBrowse supports next-generation sequencing (NGS) data by providing for the direct display of SAM and BAM sequence alignment files. SAM/BAM tracks provide semantic zooming and support both local and remote data sources. This article provides step-by-step instructions for configuring GBrowse to display NGS data. PMID:23376193

  19. SNP-VISTA: An interactive SNP visualization tool

    PubMed Central

    Shah, Nameeta; Teplitsky, Michael V; Minovitsky, Simon; Pennacchio, Len A; Hugenholtz, Philip; Hamann, Bernd; Dubchak, Inna L

    2005-01-01

    Background Recent advances in sequencing technologies promise to provide a better understanding of the genetics of human disease as well as the evolution of microbial populations. Single Nucleotide Polymorphisms (SNPs) are established genetic markers that aid in the identification of loci affecting quantitative traits and/or disease in a wide variety of eukaryotic species. With today's technological capabilities, it has become possible to re-sequence a large set of appropriate candidate genes in individuals with a given disease in an attempt to identify causative mutations. In addition, SNPs have been used extensively in efforts to study the evolution of microbial populations, and the recent application of random shotgun sequencing to environmental samples enables more extensive SNP analysis of co-occurring and co-evolving microbial populations. The program is available at [1]. Results We have developed and present two modifications of an interactive visualization tool, SNP-VISTA, to aid in the analyses of the following types of data: A. Large-scale re-sequence data of disease-related genes for discovery of associated and/or causative alleles (GeneSNP-VISTA). B. Massive amounts of ecogenomics data for studying homologous recombination in microbial populations (EcoSNP-VISTA). The main features and capabilities of SNP-VISTA are: 1) mapping of SNPs to gene structure; 2) classification of SNPs, based on their location in the gene, frequency of occurrence in samples and allele composition; 3) clustering, based on user-defined subsets of SNPs, highlighting haplotypes as well as recombinant sequences; 4) integration of protein evolutionary conservation visualization; and 5) display of automatically calculated recombination points that are user-editable. Conclusion The main strength of SNP-VISTA is its graphical interface and use of visual representations, which support interactive exploration and hence better understanding of large-scale SNP data by the user. PMID:16336665

  20. Eye movements and attention: The role of pre-saccadic shifts of attention in perception, memory and the control of saccades

    PubMed Central

    Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen

    2012-01-01

    Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798

  1. dictyExpress: a web-based platform for sequence data management and analytics in Dictyostelium and beyond.

    PubMed

    Stajdohar, Miha; Rosengarten, Rafael D; Kokosar, Janez; Jeran, Luka; Blenkus, Domen; Shaulsky, Gad; Zupan, Blaz

    2017-06-02

    Dictyostelium discoideum, a soil-dwelling social amoeba, is a model for the study of numerous biological processes. Research in the field has benefited mightily from the adoption of next-generation sequencing for genomics and transcriptomics. Dictyostelium biologists now face the widespread challenge of analyzing and exploring high-dimensional data sets to generate hypotheses and discover novel insights. We present dictyExpress (2.0), a web application designed for exploratory analysis of gene expression data, as well as data from related experiments such as Chromatin Immunoprecipitation sequencing (ChIP-Seq). The application features visualization modules that include time course expression profiles, clustering, gene ontology enrichment analysis, differential expression analysis and comparison of experiments. All visualizations are interactive and interconnected, such that the selection of genes in one module propagates instantly to visualizations in other modules. dictyExpress currently stores the data from over 800 Dictyostelium experiments and is embedded within a general-purpose software framework for management of next-generation sequencing data. dictyExpress allows users to explore their data in a broader context by reciprocal linking with dictyBase, a repository of Dictyostelium genomic data. In addition, we introduce a companion application called GenBoard, an intuitive graphical user interface for data management and bioinformatics analysis. dictyExpress and GenBoard enable broad adoption of next-generation sequencing based inquiries by the Dictyostelium research community. Labs without the means to undertake deep sequencing projects can mine the data available to the public. The entire information flow, from raw sequence data to hypothesis testing, can be accomplished in an efficient workspace. The software framework is generalizable and represents a useful approach for any research community. To encourage wider usage, the backend is open-source, available for extension and further development by bioinformaticians and data scientists.

  2. TRX-LOGOS - a graphical tool to demonstrate DNA information content dependent upon backbone dynamics in addition to base sequence.

    PubMed

    Fortin, Connor H; Schulze, Katharina V; Babbitt, Gregory A

    2015-01-01

    It is now widely accepted that DNA sequences defining DNA-protein interactions functionally depend upon local biophysical features of the DNA backbone that are important in defining sites of binding interaction in the genome (e.g., DNA shape, charge and intrinsic dynamics). However, these physical features of the DNA polymer are not directly apparent when analyzing and viewing Shannon information content calculated at single nucleobases in a traditional sequence logo plot. Thus, sequence logo plots are severely limited in that they convey no explicit information regarding the structural dynamics of the DNA backbone, a feature often critical to binding specificity. We present TRX-LOGOS, an R software package and Perl wrapper code that interfaces with the JASPAR database for computational regulatory genomics. TRX-LOGOS extends the traditional sequence logo plot to include Shannon information content calculated with regard to the dinucleotide-based BI-BII conformation shifts in phosphate linkages on the DNA backbone, thereby adding a visual measure of intrinsic DNA flexibility that can be critical for many DNA-protein interactions. TRX-LOGOS is available as an R graphics module offered both at SourceForge and as a download supplement at this journal. To demonstrate the general utility of TRX logo plots, we first calculated the information content for 416 Saccharomyces cerevisiae transcription factor binding sites functionally confirmed in the Yeastract database and matched to previously published yeast genomic alignments. We discovered that flanking regions contain significantly more information content at phosphate linkages than can be observed at nucleobases. We also examined broader transcription factor classifications defined by the JASPAR database, and discovered that many general signatures of transcription factor binding are locally more information-rich at the level of DNA backbone dynamics than nucleobase sequence. We used TRX-LOGOS in combination with MEGA 6.0 software for molecular evolutionary genetics analysis to visually compare human Forkhead box/FOX protein evolution to its binding site evolution. We also compared the DNA binding signatures of the human TP53 tumor suppressor determined by two different laboratory methods (SELEX and ChIP-seq). Further analysis of the entire yeast genome, center-aligned at the start codon, also revealed a distinct sequence-independent 3 bp periodic pattern in information content, present only in the coding region, and perhaps indicative of the non-random organization of the genetic code. TRX-LOGOS is useful in any situation in which important information content in DNA can be better visualized at the positions of phosphate linkages (i.e., dinucleotides) where the dynamic properties of the DNA backbone function to facilitate DNA-protein interaction.
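    To make the computation concrete, here is a minimal Python sketch (ours, not the authors' R code) of per-position Shannon information content, first over single nucleobases as in a classic logo and then over overlapping dinucleotides, the 16-letter alphabet at which TRX-LOGOS scores phosphate linkages; the aligned sites are hypothetical toy data.

        import math
        from collections import Counter

        def information_content(column, alphabet_size=4):
            # Shannon information content (bits) of one alignment column:
            # maximum entropy for the alphabet minus the observed entropy.
            counts = Counter(column)
            total = sum(counts.values())
            entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
            return math.log2(alphabet_size) - entropy

        # Hypothetical aligned binding sites.
        sites = ["GAATTC", "GACTTC", "GAATTA"]

        # Classic logo: per-nucleobase information content.
        base_ic = [information_content(col) for col in zip(*sites)]

        # TRX-LOGOS-style addition: score overlapping dinucleotides (one per
        # phosphate linkage) against a 16-letter alphabet.
        dinuc_sites = [[s[i:i + 2] for i in range(len(s) - 1)] for s in sites]
        dinuc_ic = [information_content(col, alphabet_size=16) for col in zip(*dinuc_sites)]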

  3. iSeq: Web-Based RNA-seq Data Analysis and Visualization.

    PubMed

    Zhang, Chao; Fan, Caoqi; Gan, Jingbo; Zhu, Ping; Kong, Lei; Li, Cheng

    2018-01-01

    Transcriptome sequencing (RNA-seq) is becoming a standard experimental methodology for genome-wide characterization and quantification of transcripts at single base-pair resolution. However, downstream analysis of massive amounts of sequencing data can be prohibitively technical for wet-lab researchers. A functionally integrated and user-friendly platform is required to meet this demand. Here, we present iSeq, an R-based Web server for RNA-seq data analysis and visualization. iSeq is a streamlined Web-based R application under the Shiny framework, featuring a simple user interface and multiple data analysis modules. Users without programming and statistical skills can analyze their RNA-seq data and construct publication-level graphs through a standardized yet customizable analytical pipeline. iSeq is accessible via Web browsers on any operating system at http://iseq.cbi.pku.edu.cn.

  4. The UCSC genome browser and associated tools

    PubMed Central

    Haussler, David; Kent, W. James

    2013-01-01

    The UCSC Genome Browser (http://genome.ucsc.edu) is a graphical viewer for genomic data now in its 13th year. Since the early days of the Human Genome Project, it has presented an integrated view of genomic data of many kinds. Now home to assemblies for 58 organisms, the Browser presents visualization of annotations mapped to genomic coordinates. The ability to juxtapose annotations of many types facilitates inquiry-driven data mining. Gene predictions, mRNA alignments, epigenomic data from the ENCODE project, conservation scores from vertebrate whole-genome alignments and variation data may be viewed at any scale from a single base to an entire chromosome. The Browser also includes many other widely used tools, including BLAT, which is useful for alignments from high-throughput sequencing experiments. Private data uploaded as Custom Tracks and Data Hubs in many formats may be displayed alongside the rich compendium of precomputed data in the UCSC database. The Table Browser is a full-featured graphical interface, which allows querying, filtering and intersection of data tables. The Saved Session feature allows users to store and share customized views, enhancing the utility of the system for organizing multiple trains of thought. Binary Alignment/Map (BAM), Variant Call Format and the Personal Genome Single Nucleotide Polymorphisms (SNPs) data formats are useful for visualizing a large sequencing experiment (whole-genome or whole-exome), where the differences between the data set and the reference assembly may be displayed graphically. Support for high-throughput sequencing extends to compact, indexed data formats, such as BAM, bigBed and bigWig, allowing rapid visualization of large datasets from RNA-seq and ChIP-seq experiments via local hosting. PMID:22908213

  5. The UCSC genome browser and associated tools.

    PubMed

    Kuhn, Robert M; Haussler, David; Kent, W James

    2013-03-01

    The UCSC Genome Browser (http://genome.ucsc.edu) is a graphical viewer for genomic data now in its 13th year. Since the early days of the Human Genome Project, it has presented an integrated view of genomic data of many kinds. Now home to assemblies for 58 organisms, the Browser presents visualization of annotations mapped to genomic coordinates. The ability to juxtapose annotations of many types facilitates inquiry-driven data mining. Gene predictions, mRNA alignments, epigenomic data from the ENCODE project, conservation scores from vertebrate whole-genome alignments and variation data may be viewed at any scale from a single base to an entire chromosome. The Browser also includes many other widely used tools, including BLAT, which is useful for alignments from high-throughput sequencing experiments. Private data uploaded as Custom Tracks and Data Hubs in many formats may be displayed alongside the rich compendium of precomputed data in the UCSC database. The Table Browser is a full-featured graphical interface, which allows querying, filtering and intersection of data tables. The Saved Session feature allows users to store and share customized views, enhancing the utility of the system for organizing multiple trains of thought. Binary Alignment/Map (BAM), Variant Call Format and the Personal Genome Single Nucleotide Polymorphisms (SNPs) data formats are useful for visualizing a large sequencing experiment (whole-genome or whole-exome), where the differences between the data set and the reference assembly may be displayed graphically. Support for high-throughput sequencing extends to compact, indexed data formats, such as BAM, bigBed and bigWig, allowing rapid visualization of large datasets from RNA-seq and ChIP-seq experiments via local hosting.

  6. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
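    As a rough illustration of the method's core step (our sketch, not the study's code, which trains on retinal-wave movies and uses a nonlinear expansion), linear Slow Feature Analysis reduces to whitening followed by an eigendecomposition of the time-derivative covariance; the toy input below is a hypothetical slow sine mixed into fast noise.

        import numpy as np

        def linear_sfa(X, n_components=2):
            # Minimal linear SFA. X: (T, d) multivariate time series.
            X = X - X.mean(axis=0)
            # Whiten the data so every direction has unit variance.
            cov = np.cov(X, rowvar=False)
            evals, evecs = np.linalg.eigh(cov)
            W = evecs / np.sqrt(evals)                 # whitening matrix (d, d)
            Z = X @ W
            # Slowness: directions minimizing the variance of the time derivative.
            dZ = np.diff(Z, axis=0)
            devals, devecs = np.linalg.eigh(np.cov(dZ, rowvar=False))
            return Z @ devecs[:, :n_components]        # eigh sorts ascending: slowest first

        t = np.linspace(0, 4 * np.pi, 2000)
        X = np.column_stack([np.sin(t), np.random.randn(t.size), np.random.randn(t.size)])
        slow = linear_sfa(X @ np.random.randn(3, 3), n_components=1)

    The complex-cell-like units reported in the study arise from applying this same procedure to a quadratic expansion of the input patches rather than to the raw signal.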

  7. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. First, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied to each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets annotated by cell biologists; our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
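    The Gaussian-fitting stage can be pictured with a short sketch (ours, with hypothetical helper names; the paper's mixture model with outlier rejection is more elaborate): fit an isotropic 2-D Gaussian to each TIRFM patch and track its amplitude across the patch sequence as an intensity-change feature.

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, amp, x0, y0, sigma, offset):
            # Isotropic 2-D Gaussian evaluated on flattened pixel coordinates.
            x, y = coords
            return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2)) + offset

        def fit_patch(patch):
            # Fit one patch; returns (amp, x0, y0, sigma, offset).
            h, w = patch.shape
            y, x = np.mgrid[0:h, 0:w]
            p0 = (patch.max() - patch.min(), w / 2, h / 2, 2.0, patch.min())
            popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), patch.ravel(), p0=p0)
            return popt

        # Intensity-change feature: Gaussian amplitude over the patch sequence,
        # e.g. amplitudes = [fit_patch(p)[0] for p in patch_sequence]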

  8. The Generalization of Visuomotor Learning to Untrained Movements and Movement Sequences Based on Movement Vector and Goal Location Remapping

    PubMed Central

    Wu, Howard G.

    2013-01-01

    The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099

  9. Enabling interspecies epigenomic comparison with CEpBrowser.

    PubMed

    Cao, Xiaoyi; Zhong, Sheng

    2013-05-01

    We developed the Comparative Epigenome Browser (CEpBrowser) to allow the public to perform multi-species epigenomic analysis. The web-based CEpBrowser integrates, manages and visualizes sequencing-based epigenomic datasets. Five key features were developed to maximize the efficiency of interspecies epigenomic comparisons. CEpBrowser is a web application implemented with PHP, MySQL, C and Apache. URL: http://www.cepbrowser.org/.

  10. Learning invariance from natural images inspired by observations in the primary visual cortex.

    PubMed

    Teichmann, Michael; Wiltschut, Jan; Hamker, Fred

    2012-05-01

    The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates signal processing of the visual cortex. In part, this is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies cover the issue of biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulations of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex to learn so-called complex cells, based on a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.
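    A minimal sketch of the general idea, assuming a plain Hebbian update with divisive homeostatic normalization (the paper's calcium-dynamics-based rules are considerably richer, and the random input below stands in for image-sequence patches):

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out, lr = 64, 16, 0.01
        W = rng.normal(scale=0.1, size=(n_out, n_in))

        def hebbian_step(W, x):
            # One Hebbian update with homeostatic weight normalization.
            y = np.maximum(W @ x, 0.0)                      # rectified postsynaptic response
            W += lr * np.outer(y, x)                        # Hebb: co-active pre/post strengthen
            W /= np.linalg.norm(W, axis=1, keepdims=True)   # homeostasis keeps weights bounded
            return W, y

        for _ in range(1000):
            x = rng.random(n_in)
            W, y = hebbian_step(W, x)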

  11. A novel brain-computer interface based on the rapid serial visual presentation paradigm.

    PubMed

    Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin

    2010-01-01

    Most present-day visual brain computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.

  12. High-speed imaging of submerged jet: visualization analysis using proper orthogonality decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, a submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was performed to determine the region in which fluorescence intensity depends linearly on dye concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The animation sequence of the visualized jet flow field was supplied to a snapshot proper orthogonal decomposition (POD) analysis. Spatio-temporally varying structures superimposed on the unsteady fluid flow, e.g., the axisymmetric mode and the helical mode, were identified from the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and selected POD modes was conducted to reveal the convective motion of the buried vortical structures.
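    The snapshot POD itself is compactly expressed as an SVD of mean-subtracted snapshots; the sketch below (ours, with hypothetical array names) returns spatial modes, temporal coefficients, and the fraction of fluctuation energy captured by each mode.

        import numpy as np

        def snapshot_pod(frames, n_modes=4):
            # Snapshot POD of a visualized flow; frames has shape (T, ny, nx).
            T = frames.shape[0]
            X = frames.reshape(T, -1).T                    # columns = snapshots (pixels, T)
            mean = X.mean(axis=1, keepdims=True)
            U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
            modes = U[:, :n_modes]                         # spatial POD modes
            coeffs = (s[:n_modes, None] * Vt[:n_modes]).T  # temporal coefficients (T, n_modes)
            energy = s ** 2 / np.sum(s ** 2)               # energy fraction per mode
            return modes, coeffs, energy

        # Low-order reconstruction from the mean field and the first modes:
        # X_rec = mean + modes @ coeffs.T  (reshape columns back to (ny, nx))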

  13. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
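    The disparity-map half of the pipeline can be illustrated with a small sketch; note that the flight software computes cross-correlation on a DSP, whereas this toy version (ours) uses a sum-of-squared-differences block match at a single pixel of two hypothetical rectified grayscale arrays.

        import numpy as np

        def disparity_ssd(left, right, y, x, max_d=64, win=5):
            # Disparity at (y, x) by block matching along the epipolar scanline.
            h = win // 2
            ref = left[y - h:y + h + 1, x - h:x + h + 1].astype(float)
            best_d, best_cost = 0, np.inf
            for d in range(0, min(max_d, x - h) + 1):
                cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
                cost = np.sum((ref - cand) ** 2)           # SSD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            return best_d   # depth = focal_length * baseline / disparity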

  14. SPAR: small RNA-seq portal for analysis of sequencing experiments.

    PubMed

    Kuksa, Pavel P; Amlie-Wolf, Alexandre; Katanic, Živadin; Valladares, Otto; Wang, Li-San; Leung, Yuk Yee

    2018-05-04

    The introduction of new high-throughput small RNA sequencing protocols that generate large-scale genomics datasets along with increasing evidence of the significant regulatory roles of small non-coding RNAs (sncRNAs) have highlighted the urgent need for tools to analyze and interpret large amounts of small RNA sequencing data. However, it remains challenging to systematically and comprehensively discover and characterize sncRNA genes and specifically-processed sncRNA products from these datasets. To fill this gap, we present Small RNA-seq Portal for Analysis of sequencing expeRiments (SPAR), a user-friendly web server for interactive processing, analysis, annotation and visualization of small RNA sequencing data. SPAR supports sequencing data generated from various experimental protocols, including smRNA-seq, short total RNA sequencing, microRNA-seq, and single-cell small RNA-seq. Additionally, SPAR includes publicly available reference sncRNA datasets from our DASHR database and from ENCODE across 185 human tissues and cell types to produce highly informative small RNA annotations across all major small RNA types and other features such as co-localization with various genomic features, precursor transcript cleavage patterns, and conservation. SPAR allows the user to compare the input experiment against reference ENCODE/DASHR datasets. SPAR currently supports analyses of human (hg19, hg38) and mouse (mm10) sequencing data. SPAR is freely available at https://www.lisanwanglab.org/SPAR.

  15. PredictProtein—an open resource for online prediction of protein structural and functional features

    PubMed Central

    Yachdav, Guy; Kloppmann, Edda; Kajan, Laszlo; Hecht, Maximilian; Goldberg, Tatyana; Hamp, Tobias; Hönigschmid, Peter; Schafferhans, Andrea; Roos, Manfred; Bernhofer, Michael; Richter, Lothar; Ashkenazy, Haim; Punta, Marco; Schlessinger, Avner; Bromberg, Yana; Schneider, Reinhard; Vriend, Gerrit; Sander, Chris; Ben-Tal, Nir; Rost, Burkhard

    2014-01-01

    PredictProtein is a meta-service for sequence analysis that has been predicting structural and functional features of proteins since 1992. Queried with a protein sequence it returns: multiple sequence alignments, predicted aspects of structure (secondary structure, solvent accessibility, transmembrane helices (TMSEG) and strands, coiled-coil regions, disulfide bonds and disordered regions) and function. The service incorporates analysis methods for the identification of functional regions (ConSurf), homology-based inference of Gene Ontology terms (metastudent), comprehensive subcellular localization prediction (LocTree3), protein–protein binding sites (ISIS2), protein–polynucleotide binding sites (SomeNA) and predictions of the effect of point mutations (non-synonymous SNPs) on protein function (SNAP2). Our goal has always been to develop a system optimized to meet the demands of experimentalists not highly experienced in bioinformatics. To this end, the PredictProtein results are presented as both text and a series of intuitive, interactive and visually appealing figures. The web server and sources are available at http://ppopen.rostlab.org. PMID:24799431

  16. Family genome browser: visualizing genomes with pedigree information.

    PubMed

    Juan, Liran; Liu, Yongzhuang; Wang, Yongtian; Teng, Mingxiang; Zang, Tianyi; Wang, Yadong

    2015-07-15

    Families with inherited diseases are widely used in Mendelian/complex disease studies. Owing to advances in high-throughput sequencing technologies, family genome sequencing is becoming increasingly prevalent. Visualizing family genomes can greatly facilitate human genetics studies and personalized medicine. However, due to the complex genetic relationships and high similarities among the genomes of consanguineous family members, family genomes are difficult to visualize in a traditional genome visualization framework. How to visualize family genome variants and their functions with integrated pedigree information remains a critical challenge. We developed the Family Genome Browser (FGB) to provide comprehensive analysis and visualization for family genomes. The FGB can visualize family genomes effectively at both the individual level and the variant level by integrating genome data with pedigree information. Family genome analysis, including determination of the parental origin of variants, detection of de novo mutations, identification of potential recombination events and identity-by-descent segments, etc., can be performed flexibly. Diverse annotations for family genome variants, such as dbSNP memberships, linkage disequilibrium, genes, variant effects, potential phenotypes, etc., are illustrated as well. Moreover, the FGB can automatically search de novo mutations and compound heterozygous variants for a selected individual, and guide investigators to find high-risk genes with flexible navigation options. These features enable users to investigate and understand family genomes intuitively and systematically. The FGB is available at http://mlg.hit.edu.cn/FGB/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Reaction schemes visualized in network form: the syntheses of strychnine as an example.

    PubMed

    Proudfoot, John R

    2013-05-24

    Representation of synthesis sequences in network form provides an effective method for the comparison of multiple reaction schemes and an opportunity to emphasize features, such as reaction scale, that are often relegated to experimental sections. An example of data formatting that allows construction of network maps in Cytoscape is presented, along with maps that illustrate the comparison of multiple reaction sequences, comparison of scaffold changes within sequences, and consolidation to highlight common key intermediates used across sequences. The 17 different synthetic routes reported for strychnine are used as an example basis set. The reaction maps presented required significant data extraction and curation; a standardized tabular format for reporting reaction information, if applied in a consistent way, could allow the automated combination of reaction information across different sources.

  18. CisSERS: Customizable in silico sequence evaluation for restriction sites

    DOE PAGES

    Sharpe, Richard M.; Koepke, Tyson; Harper, Artemus; ...

    2016-04-12

    High-throughput sequencing continues to produce an immense volume of information that is processed and assembled into mature sequence data. Data analysis tools are urgently needed that leverage the embedded DNA sequence polymorphisms and consequent changes to restriction sites or sequence motifs in a high-throughput manner to enable biological experimentation. CisSERS was developed as a standalone open-source tool to analyze sequence datasets and provide biologists with individual or comparative genome organization information in terms of the presence and frequency of patterns or motifs such as restriction enzyme sites. Predicted agarose gel visualization of the custom analysis results was also integrated to enhance the usefulness of the software. CisSERS offers several novel functionalities, such as handling of large and multiple datasets in parallel, multiple restriction enzyme site detection and custom motif detection features, which are seamlessly integrated with real-time agarose gel visualization. Using a simple FASTA-formatted file as input, CisSERS utilizes the REBASE enzyme database. Results from CisSERS enable the user to make decisions for designing genotyping-by-sequencing experiments, reduced representation sequencing, 3'UTR sequencing, and cleaved amplified polymorphic sequence (CAPS) molecular markers for large sample sets. CisSERS is a Java-based graphical user interface built around a Perl backbone. Several of the applications of CisSERS, including CAPS molecular marker development, were successfully validated using wet-lab experimentation. Here, we present the tool CisSERS and results from in silico and corresponding wet-lab analyses, demonstrating that CisSERS is a technology platform solution that facilitates efficient data utilization in genomics and genetics studies.
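    The core operation is easy to picture: scan a sequence for a recognition motif and report the implied digest fragments. The sketch below is ours, not the CisSERS code; it uses a two-enzyme stand-in for REBASE and ignores the offset of the actual cut position within each recognition site.

        import re

        enzymes = {"EcoRI": "GAATTC", "HindIII": "AAGCTT"}   # tiny stand-in for REBASE

        def digest(seq, motif):
            # Return motif start positions and the implied fragment lengths.
            sites = [m.start() for m in re.finditer(motif, seq.upper())]
            bounds = [0] + sites + [len(seq)]
            fragments = [b - a for a, b in zip(bounds, bounds[1:])]
            return sites, fragments

        seq = "AAGAATTCTTGGAATTCCA"
        sites, fragments = digest(seq, enzymes["EcoRI"])
        # sites -> [2, 11]; fragments -> [2, 9, 8]
        # Sorting fragments by size mimics the bands of a predicted agarose gel.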

  19. CisSERS: Customizable in silico sequence evaluation for restriction sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, Richard M.; Koepke, Tyson; Harper, Artemus

    High-throughput sequencing continues to produce an immense volume of information that is processed and assembled into mature sequence data. Data analysis tools are urgently needed that leverage the embedded DNA sequence polymorphisms and consequent changes to restriction sites or sequence motifs in a high-throughput manner to enable biological experimentation. CisSERS was developed as a standalone open-source tool to analyze sequence datasets and provide biologists with individual or comparative genome organization information in terms of the presence and frequency of patterns or motifs such as restriction enzyme sites. Predicted agarose gel visualization of the custom analysis results was also integrated to enhance the usefulness of the software. CisSERS offers several novel functionalities, such as handling of large and multiple datasets in parallel, multiple restriction enzyme site detection and custom motif detection features, which are seamlessly integrated with real-time agarose gel visualization. Using a simple FASTA-formatted file as input, CisSERS utilizes the REBASE enzyme database. Results from CisSERS enable the user to make decisions for designing genotyping-by-sequencing experiments, reduced representation sequencing, 3'UTR sequencing, and cleaved amplified polymorphic sequence (CAPS) molecular markers for large sample sets. CisSERS is a Java-based graphical user interface built around a Perl backbone. Several of the applications of CisSERS, including CAPS molecular marker development, were successfully validated using wet-lab experimentation. Here, we present the tool CisSERS and results from in silico and corresponding wet-lab analyses, demonstrating that CisSERS is a technology platform solution that facilitates efficient data utilization in genomics and genetics studies.

  20. Oligo kernels for datamining on biological sequences: a case study on prokaryotic translation initiation sites

    PubMed Central

    Meinicke, Peter; Tech, Maike; Morgenstern, Burkhard; Merkl, Rainer

    2004-01-01

    Background: Kernel-based learning algorithms are among the most advanced machine learning methods and have been successfully applied to a variety of sequence classification tasks within the field of bioinformatics. Conventional kernels utilized so far do not provide an easy interpretation of the learnt representations in terms of positional and compositional variability of the underlying biological signals. Results: We propose a kernel-based approach to datamining on biological sequences. With our method it is possible to model and analyze positional variability of oligomers of any length in a natural way. On the one hand, this is achieved by mapping the sequences to an intuitive but high-dimensional feature space, well-suited for interpretation of the learnt models. On the other hand, by means of the kernel trick we can provide a general learning algorithm for that high-dimensional representation because all required statistics can be computed without performing an explicit feature space mapping of the sequences. By introducing a kernel parameter that controls the degree of position-dependency, our feature space representation can be tailored to the characteristics of the biological problem at hand. A regularized learning scheme enables application even to biological problems for which only small sets of example sequences are available. Our approach includes a visualization method for transparent representation of characteristic sequence features. Thereby, the importance of features can be measured in terms of discriminative strength with respect to classification of the underlying sequences. To demonstrate and validate our concept on a biochemically well-defined case, we analyze E. coli translation initiation sites in order to show that we can find biologically relevant signals. For that case, our results clearly show that the Shine-Dalgarno sequence is the most important signal upstream of a start codon. The variability in position and composition we found for that signal is in accordance with previous biological knowledge. We also find evidence for signals downstream of the start codon, previously introduced as transcriptional enhancers. These signals are mainly characterized by occurrences of adenine in a region of about 4 nucleotides next to the start codon. Conclusions: We showed that the oligo kernel can provide a valuable tool for the analysis of relevant signals in biological sequences. In the case of translation initiation sites we could clearly deduce the most discriminative motifs and their positional variation from example sequences. Attractive features of our approach are its flexibility with respect to oligomer length and position conservation. By means of these two parameters oligo kernels can easily be adapted to different biological problems. PMID:15511290
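    In code, the oligo kernel reduces to a position-weighted count of matching k-mers; the sketch below (ours, with normalization constants omitted) shows how the smoothing parameter sigma interpolates between strict positional matching and a position-independent spectrum kernel.

        import math

        def oligo_kernel(x, y, k=3, sigma=1.0):
            # Position-sensitive oligo kernel (up to normalization): matching
            # k-mers contribute by how close their positions are; sigma tunes
            # the degree of position-dependency.
            kmers = lambda s: [(s[i:i + k], i) for i in range(len(s) - k + 1)]
            total = 0.0
            for om_x, p in kmers(x):
                for om_y, q in kmers(y):
                    if om_x == om_y:
                        total += math.exp(-((p - q) ** 2) / (4 * sigma ** 2))
            return total

        # sigma -> 0 approaches strict positional matching; a large sigma
        # approaches a position-independent spectrum kernel.
        k_val = oligo_kernel("ACGTACGT", "ACGAACGT", k=3, sigma=2.0)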

  1. Genome U-Plot: a whole genome visualization.

    PubMed

    Gaitatzes, Athanasios; Johnson, Sarah H; Smadbeck, James B; Vasmatzis, George

    2018-05-15

    The ability to produce and analyze whole genome sequencing (WGS) data from samples with structural variations (SV) generated the need to visualize such abnormalities in simplified plots. Conventional two-dimensional representations of WGS data frequently use either circular or linear layouts. There are several diverse advantages regarding both these representations, but their major disadvantage is that they do not use the two-dimensional space very efficiently. We propose a layout, termed the Genome U-Plot, which spreads the chromosomes on a two-dimensional surface and essentially quadruples the spatial resolution. We present the Genome U-Plot for producing clear and intuitive graphs that allow researchers to generate novel insights and hypotheses by visualizing SVs such as deletions, amplifications, and chromoanagenesis events. The main features of the Genome U-Plot are its layered layout, its high spatial resolution and its improved aesthetic qualities. We compare conventional visualization schemas with the Genome U-Plot using visualization metrics such as the number of line crossings and crossing angle resolution measures. Based on our metrics, we improve the readability of the resulting graph by at least 2-fold, making important features apparent and making it easy to identify important genomic changes. In summary, the Genome U-Plot is a whole-genome visualization tool with high spatial resolution and improved aesthetic qualities. An implementation and documentation of the Genome U-Plot is publicly available at https://github.com/gaitat/GenomeUPlot. vasmatzis.george@mayo.edu. Supplementary data are available at Bioinformatics online.

  2. Adaptive low-rank subspace learning with online optimization for robust visual tracking.

    PubMed

    Liu, Risheng; Wang, Di; Han, Yuzhuo; Fan, Xin; Luo, Zhongxuan

    2017-04-01

    In recent years, sparse and low-rank models have been widely used to formulate appearance subspaces for visual tracking. However, most existing methods only consider the sparsity or low-rankness of the coefficients, which is not sufficient for appearance subspace learning on complex video sequences. Moreover, as both the low-rank and the column-sparse measures are tightly related to all the samples in the sequences, it is challenging to incrementally solve optimization problems with both nuclear norm and column-sparse norm on sequentially obtained video data. To address the above limitations, this paper develops a novel low-rank subspace learning with adaptive penalization (LSAP) framework for subspace-based robust visual tracking. Different from previous work, which often simply decomposes observations into low-rank features and sparse errors, LSAP simultaneously learns the subspace basis, low-rank coefficients and column-sparse errors to formulate the appearance subspace. Within the LSAP framework, we introduce a Hadamard-product-based regularization to incorporate rich generative/discriminative structure constraints to adaptively penalize the coefficients for subspace learning. It is shown that such adaptive penalization can significantly improve the robustness of LSAP on severely corrupted datasets. To utilize LSAP for online visual tracking, we also develop an efficient incremental optimization scheme for nuclear norm and column-sparse norm minimizations. Experiments on 50 challenging video sequences demonstrate that our tracker outperforms other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
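    For orientation, the baseline that LSAP builds on, decomposing observations into a low-rank part plus sparse errors, can be sketched as alternating singular-value thresholding and elementwise soft-thresholding. This is a heuristic toy version with ad hoc thresholds; the paper's adaptive penalization and incremental solver are not reproduced here.

        import numpy as np

        def soft(X, t):
            # Elementwise soft-thresholding (shrinkage) operator.
            return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

        def low_rank_sparse(D, lam=None, n_iter=50):
            # Alternating decomposition D ~ L (low-rank) + S (sparse errors).
            if lam is None:
                lam = 1.0 / np.sqrt(max(D.shape))
            S = np.zeros_like(D)
            mu = 0.25 * np.abs(D).mean()               # heuristic threshold scale
            for _ in range(n_iter):
                U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
                L = U @ np.diag(soft(s, mu)) @ Vt      # shrink singular values -> low rank
                S = soft(D - L, lam * mu)              # shrink residual -> sparse errors
            return L, S

        # D could hold vectorized target templates collected over a video
        # sequence as its columns; S then absorbs occlusions and corruptions.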

  3. Self-organizing neural integration of pose-motion features for human action recognition

    PubMed Central

    Parisi, German I.; Weber, Cornelius; Wermter, Stefan

    2015-01-01

    The visual recognition of complex, articulated human movements is fundamental for a wide range of artificial systems oriented toward human-robot communication, action classification, and action-driven perception. These challenging tasks may generally involve the processing of a huge amount of visual information and learning-based mechanisms for generalizing a set of training actions and classifying new samples. To operate in natural environments, a crucial property is the efficient and robust recognition of actions, also under noisy conditions caused by, for instance, systematic sensor errors and temporarily occluded persons. Studies of the mammalian visual system and its outperforming ability to process biological motion information suggest separate neural pathways for the distinct processing of pose and motion features at multiple levels and the subsequent integration of these visual cues for action perception. We present a neurobiologically-motivated approach to achieve noise-tolerant action recognition in real time. Our model consists of self-organizing Growing When Required (GWR) networks that obtain progressively generalized representations of sensory inputs and learn inherent spatio-temporal dependencies. During the training, the GWR networks dynamically change their topological structure to better match the input space. We first extract pose and motion features from video sequences and then cluster actions in terms of prototypical pose-motion trajectories. Multi-cue trajectories from matching action frames are subsequently combined to provide action dynamics in the joint feature space. Reported experiments show that our approach outperforms previous results on a dataset of full-body actions captured with a depth sensor, and ranks among the best results for a public benchmark of domestic daily actions. PMID:26106323

  4. A survey of tools for variant analysis of next-generation genome sequencing data

    PubMed Central

    Pabinger, Stephan; Dander, Andreas; Fischer, Maria; Snajder, Rene; Sperk, Michael; Efremova, Mirjana; Krabichler, Birgit; Speicher, Michael R.; Zschocke, Johannes

    2014-01-01

    Recent advances in genome sequencing technologies provide unprecedented opportunities to characterize individual genomic landscapes and identify mutations relevant for diagnosis and therapy. Specifically, whole-exome sequencing using next-generation sequencing (NGS) technologies is gaining popularity in the human genetics community due to the moderate costs, manageable data amounts and straightforward interpretation of analysis results. While whole-exome and, in the near future, whole-genome sequencing are becoming commodities, data analysis still poses significant challenges and led to the development of a plethora of tools supporting specific parts of the analysis workflow or providing a complete solution. Here, we surveyed 205 tools for whole-genome/whole-exome sequencing data analysis supporting five distinct analytical steps: quality assessment, alignment, variant identification, variant annotation and visualization. We report an overview of the functionality, features and specific requirements of the individual tools. We then selected 32 programs for variant identification, variant annotation and visualization, which were subjected to hands-on evaluation using four data sets: one set of exome data from two patients with a rare disease for testing identification of germline mutations, two cancer data sets for testing variant callers for somatic mutations, copy number variations and structural variations, and one semi-synthetic data set for testing identification of copy number variations. Our comprehensive survey and evaluation of NGS tools provides a valuable guideline for human geneticists working on Mendelian disorders, complex diseases and cancers. PMID:23341494

  5. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    PubMed

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step to understand and interpret their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features usually have limited prediction performance; integrating RNA structure features is therefore critical for RNA analysis. Existing algorithms mainly fall into two categories: alignment-based and alignment-free, with alignment-free RNA comparison algorithms usually having lower time complexity. We propose an alignment-free RNA comparison algorithm built on RNA-TVcurve (triple vector curve representation), a novel numerical representation of an RNA sequence and its corresponding secondary structure features, together with a multi-scale similarity score of two given RNAs based on wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was designed based on this alignment-free RNA comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The web server requires RNA primary sequences as input; corresponding secondary structures are optional. Given primary sequences alone, the web server can compute the secondary structures using the free-energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. Comparison results with two popular RNA comparison tools, RNApdist and RNAdistance, showed that RNA-TVcurve can efficiently capture subtle relationships among RNAs for mutation detection and non-coding RNA classification. All results are shown in an intuitive graphical manner and can be freely downloaded from the server. RNA-TVcurve, along with test examples and detailed documents, is available at: http://ml.jlu.edu.cn/tvcurve/ .
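    The multi-scale comparison step can be sketched with PyWavelets (our stand-in; the actual triple-vector-curve encoding of sequence plus structure is not reproduced here): decompose each numeric curve with a discrete wavelet transform and correlate the coefficient bands scale by scale.

        import numpy as np
        import pywt

        def multiscale_similarity(curve_a, curve_b, wavelet="db4", level=3):
            # Wavelet-decompose each curve and correlate matching bands.
            dec_a = pywt.wavedec(curve_a, wavelet, level=level)
            dec_b = pywt.wavedec(curve_b, wavelet, level=level)
            scores = []
            for ca, cb in zip(dec_a, dec_b):
                n = min(len(ca), len(cb))
                scores.append(np.corrcoef(ca[:n], cb[:n])[0, 1])
            return np.mean(scores)      # average similarity across scales

        # curve_a / curve_b would be the numeric curve encodings of two RNAs;
        # random stand-ins here:
        s = multiscale_similarity(np.random.randn(256), np.random.randn(256))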

  6. The Effect of the Number and Nature of Features and of General Ability on the Simultaneous and Successive Processing of Maps.

    ERIC Educational Resources Information Center

    Sutherland, Sandra; Winn, William

    The interactions of three factors that may be involved with the memory for pattern or sequence in visual materials were investigated in this study: (1) arbitrariness of representation; (2) task; and (3) ability of students. The subjects, who were 29 graduate students in education, were pretested for general ability and randomly assigned to four…

  7. Using Highlighting to Train Attentional Expertise

    PubMed Central

    Roads, Brett; Mozer, Michael C.; Busey, Thomas A.

    2016-01-01

    Acquiring expertise in complex visual tasks is time consuming. To facilitate the efficient training of novices on where to look in these tasks, we propose an attentional highlighting paradigm. Highlighting involves dynamically modulating the saliency of a visual image to guide attention along the fixation path of a domain expert who had previously viewed the same image. In Experiment 1, we trained naive subjects via attentional highlighting on a fingerprint-matching task. Before and after training, we asked subjects to freely inspect images containing pairs of prints and determine whether the prints matched. Fixation sequences were automatically scored for the degree of expertise exhibited using a Bayesian discriminative model of novice and expert gaze behavior. Highlighted training causes gaze behavior to become more expert-like not only on the trained images but also on transfer images, indicating generalization of learning. In Experiment 2, to control for the possibility that the increase in expertise is due to mere exposure, we trained subjects via highlighting of fixation sequences from novices, not experts, and observed no transition toward expertise. In Experiment 3, to determine the specificity of the training effect, we trained subjects with expert fixation sequences from images other than the one being viewed, which preserves coarse-scale statistics of expert gaze but provides no information about fine-grain features. Observing at least a partial transition toward expertise, we obtain only weak evidence that the highlighting procedure facilitates the learning of critical local features. We discuss possible improvements to the highlighting procedure. PMID:26744839

  8. Using Highlighting to Train Attentional Expertise.

    PubMed

    Roads, Brett; Mozer, Michael C; Busey, Thomas A

    2016-01-01

    Acquiring expertise in complex visual tasks is time consuming. To facilitate the efficient training of novices on where to look in these tasks, we propose an attentional highlighting paradigm. Highlighting involves dynamically modulating the saliency of a visual image to guide attention along the fixation path of a domain expert who had previously viewed the same image. In Experiment 1, we trained naive subjects via attentional highlighting on a fingerprint-matching task. Before and after training, we asked subjects to freely inspect images containing pairs of prints and determine whether the prints matched. Fixation sequences were automatically scored for the degree of expertise exhibited using a Bayesian discriminative model of novice and expert gaze behavior. Highlighted training causes gaze behavior to become more expert-like not only on the trained images but also on transfer images, indicating generalization of learning. In Experiment 2, to control for the possibility that the increase in expertise is due to mere exposure, we trained subjects via highlighting of fixation sequences from novices, not experts, and observed no transition toward expertise. In Experiment 3, to determine the specificity of the training effect, we trained subjects with expert fixation sequences from images other than the one being viewed, which preserves coarse-scale statistics of expert gaze but provides no information about fine-grain features. Observing at least a partial transition toward expertise, we obtain only weak evidence that the highlighting procedure facilitates the learning of critical local features. We discuss possible improvements to the highlighting procedure.

  9. iCopyDAV: Integrated platform for copy number variations—Detection, annotation and visualization

    PubMed Central

    Vogeti, Sriharsha

    2018-01-01

    Discovery of copy number variations (CNVs), a major category of structural variations, has dramatically changed our understanding of differences between individuals and provides an alternate paradigm for the genetic basis of human diseases. CNVs include both copy gain and copy loss events, and their detection genome-wide is now possible using high-throughput, low-cost next generation sequencing (NGS) methods. However, accurate detection of CNVs from NGS data is not straightforward due to non-uniform coverage of reads resulting from various systemic biases. We have developed an integrated platform, iCopyDAV, to handle some of these issues in CNV detection in whole genome NGS data. It has a modular framework comprising five major modules: data pre-treatment, segmentation, variant calling, annotation and visualization. An important feature of iCopyDAV is the functional annotation module that enables the user to identify and prioritize CNVs encompassing various functional elements, genomic features and disease associations. Parallelization of the segmentation algorithms makes the iCopyDAV platform accessible even on a desktop. Here we show the effect of sequencing coverage, read length, bin size, data pre-treatment and segmentation approaches on accurate detection of the complete spectrum of CNVs. Performance of iCopyDAV is evaluated on both simulated data and real data for different sequencing depths. It is an open-source integrated pipeline available at https://github.com/vogetihrsh/icopydav and as a Docker image at http://bioinf.iiit.ac.in/icopydav/. PMID:29621297

  10. FMRI investigation of cross-modal interactions in beat perception: Audition primes vision, but not vice versa

    PubMed Central

    Grahn, Jessica A.; Henry, Molly J.; McAuley, J. Devin

    2011-01-01

    How we measure time and integrate temporal cues from different sensory modalities are fundamental questions in neuroscience. Sensitivity to a “beat” (such as that routinely perceived in music) differs substantially between auditory and visual modalities. Here we examined beat sensitivity in each modality, and examined cross-modal influences, using functional magnetic resonance imaging (fMRI) to characterize brain activity during perception of auditory and visual rhythms. In separate fMRI sessions, participants listened to auditory sequences or watched visual sequences. The order of auditory and visual sequence presentation was counterbalanced so that cross-modal order effects could be investigated. Participants judged whether sequences were speeding up or slowing down, and the pattern of tempo judgments was used to derive a measure of sensitivity to an implied beat. As expected, participants were less sensitive to an implied beat in visual sequences than in auditory sequences. However, visual sequences produced a stronger sense of beat when preceded by auditory sequences with identical temporal structure. Moreover, increases in brain activity were observed in the bilateral putamen for visual sequences preceded by auditory sequences when compared to visual sequences without prior auditory exposure. No such order-dependent differences (behavioral or neural) were found for the auditory sequences. The results provide further evidence for the role of the basal ganglia in internal generation of the beat and suggest that an internal auditory rhythm representation may be activated during visual rhythm perception. PMID:20858544

  11. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species, from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature from multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies trained previously with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective mushroom bodies (MBs), one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  12. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    PubMed

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there is a difference or relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Visual display aid for orbital maneuvering - Design considerations

    NASA Technical Reports Server (NTRS)

    Grunwald, Arthur J.; Ellis, Stephen R.

    1993-01-01

    This paper describes the development of an interactive proximity operations planning system that allows on-site planning of fuel-efficient multiburn maneuvers in a potential multispacecraft environment. Although this display system most directly assists planning by providing visual feedback to aid visualization of the trajectories and constraints, its most significant features include: (1) the use of an 'inverse dynamics' algorithm that removes control nonlinearities facing the operator, and (2) a trajectory planning technique that separates, through a 'geometric spreadsheet', the normally coupled complex problems of planning orbital maneuvers and allows solution by an iterative sequence of simple independent actions. The visual feedback of trajectory shapes and operational constraints, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool provides an example of operator-assisted optimization of nonlinear cost functions.

  14. Robust feature tracking for endoscopic pose estimation and structure recovery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.

    2013-03-01

    Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.
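    A generic version of such feature tracking, not the paper's stereo SLAM framework, is a few lines of OpenCV: detect corners, track them with pyramidal Lucas-Kanade optical flow, and reject false correspondences with a forward-backward check. The video filename is hypothetical, and feature re-detection is omitted.

        import cv2

        cap = cv2.VideoCapture("endoscopy.avi")             # hypothetical input video
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                     qualityLevel=0.01, minDistance=7)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Pyramidal Lucas-Kanade tracking, then track back to the previous
            # frame; points that do not return to their origin are rejected.
            p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
            p0r, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, p1, None)
            fb_err = abs(p0 - p0r).reshape(-1, 2).max(axis=1)
            good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < 1.0)
            p0, prev_gray = p1[good].reshape(-1, 1, 2), gray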

  15. R-chie: a web server and R package for visualizing RNA secondary structures

    PubMed Central

    Lai, Daniel; Proctor, Jeff R.; Zhu, Jing Yun A.; Meyer, Irmtraud M.

    2012-01-01

    Visually examining RNA structures can greatly aid in understanding their potential functional roles and in evaluating the performance of structure prediction algorithms. As many functional roles of RNA structures can already be studied given the secondary structure of the RNA, various methods have been devised for visualizing RNA secondary structures. Most of these methods depict a given RNA secondary structure as a planar graph consisting of base-paired stems interconnected by roundish loops. In this article, we present an alternative method of depicting RNA secondary structure as arc diagrams. This is well suited for structures that are difficult or impossible to represent as planar stem-loop diagrams. Arc diagrams can intuitively display pseudo-knotted structures, as well as transient and alternative structural features. In addition, they facilitate the comparison of known and predicted RNA secondary structures. An added benefit is that structure information can be displayed in conjunction with a corresponding multiple sequence alignment, thereby highlighting structure and primary sequence conservation and variation. We have implemented the visualization algorithm as a web server, R-chie, as well as a corresponding R package called R4RNA, which allows users to run the software locally and across a range of common operating systems. PMID:22434875
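    Drawing an arc diagram requires little more than one semicircle per base pair, which is why crossing (pseudo-knotted) pairs pose no layout problem; here is a minimal matplotlib sketch of the idea (ours, not the R4RNA implementation).

        import matplotlib.pyplot as plt
        from matplotlib.patches import Arc

        def arc_diagram(pairs, length):
            # Backbone as a horizontal line; each base pair (i, j) becomes a
            # semicircular arc spanning positions i..j.
            fig, ax = plt.subplots(figsize=(8, 3))
            ax.hlines(0, 1, length, color="black", lw=1)
            for i, j in pairs:
                center, width = (i + j) / 2.0, abs(j - i)
                ax.add_patch(Arc((center, 0), width, width, theta1=0, theta2=180))
            ax.set_xlim(0, length + 1)
            ax.set_ylim(0, length / 2.0)
            ax.set_yticks([])
            plt.show()

        # A pseudoknot is just two crossing arcs, hard to draw as a planar
        # stem-loop diagram but trivial here:
        arc_diagram([(1, 20), (2, 19), (8, 30), (9, 29)], length=35)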

  16. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and with it their intelligibility, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but that the periodic sampling happens only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  17. pplacer: linear time maximum-likelihood and Bayesian phylogenetic placement of sequences onto a fixed reference tree

    PubMed Central

    2010-01-01

    Background: Likelihood-based phylogenetic inference is generally considered to be the most reliable classification method for unknown sequences. However, traditional likelihood-based phylogenetic methods cannot be applied to large volumes of short reads from next-generation sequencing due to computational complexity issues and lack of phylogenetic signal. "Phylogenetic placement," where a reference tree is fixed and the unknown query sequences are placed onto the tree via a reference alignment, is a way to bring the inferential power offered by likelihood-based approaches to large data sets. Results: This paper introduces pplacer, a software package for phylogenetic placement and subsequent visualization. The algorithm can place twenty thousand short reads on a reference tree of one thousand taxa per hour per processor, has essentially linear time and memory complexity in the number of reference taxa, and is easy to run in parallel. Pplacer features calculation of the posterior probability of a placement on an edge, which is a statistically rigorous way of quantifying uncertainty on an edge-by-edge basis. It can also inform the user of the positional uncertainty for query sequences by calculating the expected distance between placement locations, which is crucial in the estimation of uncertainty with a well-sampled reference tree. The software provides visualizations using branch thickness and color to represent the number of placements and their uncertainty. A simulation study using reads generated from 631 COG alignments shows a high level of accuracy for phylogenetic placement over a wide range of alignment diversity, and the power of edge uncertainty estimates to measure placement confidence. Conclusions: Pplacer enables efficient phylogenetic placement and subsequent visualization, making likelihood-based phylogenetics methodology practical for large collections of reads; it is freely available as source code, binaries, and a web service. PMID:21034504
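
    The edge-level posterior reported per placement follows from Bayes' rule once per-edge attachment likelihoods are in hand. The toy sketch below illustrates only that normalization step, with made-up log-likelihood values and a uniform prior; it is not pplacer's implementation.

```python
# Toy sketch of the edge-posterior idea (values are made up).
import numpy as np

log_likelihoods = np.array([-1204.1, -1197.6, -1199.3, -1210.8])  # one per edge
prior = np.full(4, 0.25)  # uniform prior over candidate edges

# Subtract the max log-likelihood before exponentiating, for stability.
weights = np.exp(log_likelihoods - log_likelihoods.max()) * prior
posterior = weights / weights.sum()
for edge, p in enumerate(posterior):
    print(f"edge {edge}: posterior placement probability {p:.3f}")
```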

  18. Learning Spatio-Temporal Representations for Action Recognition: A Genetic Programming Approach.

    PubMed

    Liu, Li; Shao, Ling; Li, Xuelong; Lu, Ke

    2016-01-01

    Extracting discriminative and robust features from video sequences is the first and most critical step in human action recognition. In this paper, instead of using handcrafted features, we automatically learn spatio-temporal motion features for action recognition. This is achieved via an evolutionary method, i.e., genetic programming (GP), which evolves the motion feature descriptor on a population of primitive 3D operators (e.g., 3D-Gabor and wavelet). In this way, scale- and shift-invariant features can be effectively extracted from both color and optical flow sequences. We learn data-adaptive descriptors for different datasets with multiple layers, which makes full use of this knowledge to mimic the physical structure of the human visual cortex for action recognition while also reducing the GP search space, thereby accelerating the convergence to optimal solutions. In our evolutionary architecture, the average cross-validation classification error, which is calculated by a support-vector-machine classifier on the training set, is adopted as the evaluation criterion for the GP fitness function. After the entire evolution procedure finishes, the best-so-far solution selected by GP is regarded as the (near-)optimal action descriptor. The GP-evolved feature extraction method is evaluated on four popular action datasets, namely KTH, HMDB51, UCF YouTube, and Hollywood2. Experimental results show that our method significantly outperforms other types of features, whether hand-designed or machine-learned.
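
    The fitness computation is straightforward to sketch in isolation. The snippet below (synthetic data, scikit-learn; the GP loop itself is omitted) scores a candidate feature set by its average cross-validation error under an SVM, the quantity a GP search would minimize.

```python
# Sketch of the fitness-evaluation step only; data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # stand-in for GP-extracted motion features
y = rng.integers(0, 4, size=200)  # four action classes

def fitness(features, labels):
    # Mean cross-validation accuracy -> error; GP would minimize this value.
    acc = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5)
    return 1.0 - acc.mean()

print(f"cross-validation error: {fitness(X, y):.3f}")
```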

  19. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  20. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development, inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis, we integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction, and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of its class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. These resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
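
    For the static case that this work extends, the sparse-coding constraint can be illustrated with ordinary dictionary learning. The sketch below (scikit-learn on synthetic patches, not the authors' temporal RBM model) learns a dictionary whose sparsity-penalized codes are mostly zero.

```python
# Sketch: sparse dictionary learning on stand-in "image patches".
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))           # 1000 flattened 8x8 patches
patches -= patches.mean(axis=1, keepdims=True)  # remove the DC component

learner = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                      random_state=0)
codes = learner.fit_transform(patches)

# The sparsity penalty drives most coefficients to zero: a sparse code.
print(f"fraction of zero coefficients: {(codes == 0).mean():.2f}")
```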

  2. The Papillomavirus Episteme: a major update to the papillomavirus sequence database.

    PubMed

    Van Doorslaer, Koenraad; Li, Zhiwen; Xirasagar, Sandhya; Maes, Piet; Kaminsky, David; Liou, David; Sun, Qiang; Kaur, Ramandeep; Huyen, Yentram; McBride, Alison A

    2017-01-04

    The Papillomavirus Episteme (PaVE) is a database of curated papillomavirus genomic sequences, accompanied by web-based sequence analysis tools. This update describes the addition of major new features. The papillomavirus genomes within PaVE have been further annotated and now include the major spliced mRNA transcripts. Viral genes and transcripts can be visualized on both linear and circular genome browsers. Evolutionary relationships among PaVE reference protein sequences can be analysed using multiple sequence alignments and phylogenetic trees. To assist in viral discovery, PaVE offers a typing tool: a simplified algorithm to determine whether a newly sequenced virus is novel. PaVE also now contains an image library of gross clinical and histopathological images of papillomavirus-infected lesions. Database URL: https://pave.niaid.nih.gov/. Published by Oxford University Press on behalf of Nucleic Acids Research 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  3. DNATagger, colors for codons.

    PubMed

    Scherer, N M; Basso, D M

    2008-09-16

    DNATagger is a web-based tool for coloring and editing DNA, RNA and protein sequences and alignments. It is dedicated to the visualization of protein-coding sequences and protein sequence alignments, to facilitate the comprehension of evolutionary processes in sequence analysis. The distinctive feature of DNATagger is the use of codons as informative units for coloring DNA and RNA sequences. The codons are colored according to their corresponding amino acids. It is the first program that colors codons in DNA sequences without being affected by "out-of-frame" gaps in alignments: it can handle single gaps and gaps inside the triplets. The program also provides the possibility to edit the alignments and to change color patterns and translation tables. DNATagger is a JavaScript application, following the W3C guidelines, designed to work on standards-compliant web browsers. It therefore requires no installation and is platform independent. DNATagger is available as free and open source software at http://www.inf.ufrgs.br/~dmbasso/dnatagger/.
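
    The codon-grouping rule is the interesting part: gaps must not advance the reading frame. A minimal Python sketch of that idea follows, with an arbitrary color assignment and only a handful of codons from the standard table included for brevity; it is not DNATagger's own code.

```python
# Sketch: group an aligned CDS into codons while skipping gap characters.
CODON_TABLE = {"ATG": "M", "TGG": "W", "GAA": "E", "GAG": "E",
               "TTT": "F", "TTC": "F"}          # tiny excerpt, for brevity
AA_COLOR = {"M": "green", "W": "orange", "E": "red", "F": "blue"}

def colored_codons(aligned_seq):
    """Yield (codon_with_gaps, amino_acid, color) for an aligned CDS."""
    bases, raw = [], []
    for ch in aligned_seq.upper():
        raw.append(ch)
        if ch != "-":            # gaps do not advance the reading frame
            bases.append(ch)
        if len(bases) == 3:
            aa = CODON_TABLE.get("".join(bases), "?")
            yield "".join(raw), aa, AA_COLOR.get(aa, "gray")
            bases, raw = [], []

for codon, aa, color in colored_codons("ATG--GA-ATGG"):
    print(f"{codon:>6} -> {aa} ({color})")
```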

  4. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that the visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  5. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.

    PubMed

    Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A

    2012-08-01

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

  6. Determinants of Global Color-Based Selection in Human Visual Cortex.

    PubMed

    Bartsch, Mandy V; Boehler, Carsten N; Stoppel, Christian M; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max

    2015-09-01

    Feature attention operates in a spatially global way, with attended feature values being prioritized for selection outside the focus of attention. Accounts of global feature attention have emphasized feature competition as a determining factor. Here, we use magnetoencephalographic recordings in humans to test whether competition is critical for global feature selection to arise. Subjects performed a color/shape discrimination task in one visual field (VF), while irrelevant color probes were presented in the other unattended VF. Global effects of color attention were assessed by analyzing the response to the probe as a function of whether or not the probe's color was a target-defining color. We find that global color selection involves a sequence of modulations in extrastriate cortex, with an initial phase in higher tier areas (lateral occipital complex) followed by a later phase in lower tier retinotopic areas (V3/V4). Importantly, these modulations appeared with and without color competition in the focus of attention. Moreover, early parts of the modulation emerged for a task-relevant color not even present in the focus of attention. All modulations, however, were eliminated during simple onset-detection of the colored target. These results indicate that global color-based attention depends on target discrimination independent of feature competition in the focus of attention. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. GBParsy: a GenBank flatfile parser library with high speed.

    PubMed

    Lee, Tae-Ho; Kim, Yeon-Ki; Nahm, Baek Hie

    2008-07-25

    GenBank flatfile (GBF) format is one of the most popular sequence file formats because of its detailed sequence features and ease of readability. For a computer to use the data in such a file, a parsing process is required, performed according to a given grammar for the sequence and the description in a GBF. Several parser libraries for the GBF have been developed. However, with the accumulation of DNA sequence information from eukaryotic chromosomes, parsing a eukaryotic genome sequence with these libraries inevitably takes a long time, due to the large GBF file and its correspondingly large genomic nucleotide sequence and related feature information. Thus, there is a significant need for a parsing program with high speed and efficient use of system memory. We developed GBParsy, a C-based library that parses GBF files. Parsing speed was maximized by using content-specific functions in place of regular expressions, which are flexible but slow. In addition, we optimized an algorithm related to memory usage, which also increased parsing performance and the efficiency of memory usage. GBParsy is 5-100x faster than current parsers in benchmark tests and is estimated to extract annotated information from almost 100 Mb of a GenBank flatfile of chromosomal sequence information within a second. It is thus suited to applications such as on-the-fly visualization of a genome on a web site.
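
    For readers who need the same information without the C library, the conventional (much slower) Python route is Biopython's SeqIO parser; the sketch below walks the features of a GenBank flatfile, with "example.gb" as a placeholder path.

```python
# Sketch: iterate over the records and CDS features of a GenBank flatfile.
from Bio import SeqIO

for record in SeqIO.parse("example.gb", "genbank"):
    print(record.id, len(record.seq), "bp")
    for feature in record.features:
        if feature.type == "CDS":
            # Qualifiers hold the annotation text, e.g. gene names.
            gene = feature.qualifiers.get("gene", ["?"])[0]
            print(f"  CDS {gene}: {feature.location}")
```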

  8. Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.

    PubMed

    Sakai, Ryo; Aerts, Jan

    2014-01-01

    The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the task is to compare two or more sets of aligned sequences. We present a new visual representation called the Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed in the open-source programming environment Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves the visual comparison of two or more sets and additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher-level data exploration tasks.
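
    One simple diversity measure underlying such comparisons is per-column Shannon entropy, where low entropy marks a conserved position. The sketch below computes it for a toy alignment standing in for a parsed FASTA file; it illustrates the concept only and is not the authors' Processing code.

```python
# Sketch: per-column Shannon entropy as a diversity measure.
import math
from collections import Counter

alignment = ["MKVLA", "MKILA", "MRVLG", "MKVLA"]  # toy aligned sequences

def column_entropy(column):
    counts = Counter(column)
    total = len(column)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

for pos, column in enumerate(zip(*alignment), start=1):
    print(f"position {pos}: entropy {column_entropy(column):.2f} bits")
```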

  9. A novel visual saliency detection method for infrared video sequences

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Zhang, Yuzhen; Ning, Chen

    2017-12-01

    Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit greatly from visual saliency detection, which is essentially a method for automatically localizing the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. The spatial and temporal saliency are then combined into a spatiotemporal saliency map using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
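
    The multi-frame symmetric difference can be sketched compactly: a pixel counts as salient motion only if it differs from frames both before and after, which suppresses one-sided background changes. The snippet below shows that single ingredient on synthetic frames, not the full saliency model.

```python
# Sketch: symmetric difference over three frames (synthetic data).
import numpy as np

def symmetric_difference(prev, curr, nxt):
    d1 = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    d2 = np.abs(curr.astype(np.float32) - nxt.astype(np.float32))
    return np.minimum(d1, d2)  # high only when BOTH differences are high

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(3, 120, 160)).astype(np.uint8)
saliency = symmetric_difference(*frames)
print("temporal saliency range:", saliency.min(), "-", saliency.max())
```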

  10. Computer aided detection of tumor and edema in brain FLAIR magnetic resonance image using ANN

    NASA Astrophysics Data System (ADS)

    Pradhan, Nandita; Sinha, A. K.

    2008-03-01

    This paper presents an efficient region-based segmentation technique for detecting pathological tissues (tumor and edema) of the brain using fluid-attenuated inversion recovery (FLAIR) magnetic resonance (MR) images. This work segments FLAIR brain images into normal and pathological tissues based on statistical features and wavelet transform coefficients using the k-means algorithm. The image is divided into small blocks of 4×4 pixels, and the k-means algorithm clusters the image based on the feature vectors of these blocks, forming classes that represent different regions in the whole image. With the knowledge of the feature vectors of the different segmented regions, a supervised technique is used to train an artificial neural network using the fuzzy backpropagation algorithm (FBPA). Segmentation for detecting healthy tissues and tumors using conventional MRI sequences, such as T1-, T2- and PD-weighted sequences, has been reported by several researchers. This work successfully presents segmentation of healthy and pathological tissues (both tumors and edema) using FLAIR images. Finally, the segmented and classified regions are pseudo-colored for better human visualization.
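
    The block-wise clustering stage can be sketched as follows (synthetic data; the wavelet features and the fuzzy backpropagation classifier of the full pipeline are omitted): each 4×4 block is summarized by simple statistics and the blocks are grouped with k-means.

```python
# Sketch: 4x4 block statistics clustered with k-means (synthetic "slice").
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))  # stand-in for a FLAIR slice

# Cut the slice into 4x4 blocks and describe each by mean and variance.
blocks = image.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)
features = np.column_stack([blocks.mean(axis=1), blocks.var(axis=1)])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
label_map = labels.reshape(16, 16)  # one class label per 4x4 block
print("blocks per cluster:", np.bincount(labels))
```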

  11. Change Detection in UAV Video Mosaics Combining a Feature-Based Approach and Extended Image Differencing

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2016-06-01

    Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes on short time scales, using observations separated by a few hours. Each observation (previous and current) is a short video sequence acquired by a UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles; examples of non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing, where only one "undirected" change mask is extracted, combining both label types into the single label "changed object". The two algorithms are combined by merging their change masks, and a color mask showing the different contributions is used for visual inspection by a human image interpreter.

  12. TEA: the epigenome platform for Arabidopsis methylome study.

    PubMed

    Su, Sheng-Yao; Chen, Shu-Hwa; Lu, I-Hsuan; Chiang, Yih-Shien; Wang, Yu-Bin; Chen, Pao-Yang; Lin, Chung-Yen

    2016-12-22

    Bisulfite sequencing (BS-seq) has become a standard technology for profiling genome-wide DNA methylation at single-base resolution. It allows researchers to conduct genome-wide cytosine methylation analyses of genomic imprinting, transcriptional regulation, and cellular development and differentiation. The data from a single BS-seq experiment are resolved into many features according to sequence context, making methylome data analysis and visualization a complex task. We developed a streamlined platform, TEA, for analyzing and visualizing data from whole-genome BS-seq (WGBS) experiments conducted in the model plant Arabidopsis thaliana. To capture the essence of genome-wide methylation levels and to run efficiently online, we introduce a straightforward method for measuring genome methylation in each sequence context, gene by gene. The method is scripted in Java to process BS-seq mapping results. Through a simple data-uploading process, the TEA server deploys a web-based platform for deep analysis by linking data to an updated Arabidopsis annotation database and toolkits. TEA is an intuitive and efficient online platform for analyzing the Arabidopsis genomic DNA methylation landscape, and it provides several ways to help users exploit WGBS data. TEA is freely accessible to academic users at http://tea.iis.sinica.edu.tw.
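
    The per-context summary statistic is easy to illustrate: a weighted methylation level is the ratio of methylated reads to total reads, tallied separately for the CpG, CHG and CHH contexts. The sketch below uses made-up call tuples standing in for BS-seq mapping results; it is not TEA's Java implementation.

```python
# Sketch: weighted methylation level per sequence context (made-up calls).
from collections import defaultdict

calls = [("CpG", 8, 10), ("CpG", 3, 12), ("CHG", 1, 9),
         ("CHH", 0, 14), ("CHH", 2, 11)]  # (context, methylated, coverage)

meth, cov = defaultdict(int), defaultdict(int)
for context, methylated, coverage in calls:
    meth[context] += methylated
    cov[context] += coverage

for context in ("CpG", "CHG", "CHH"):
    level = meth[context] / cov[context] if cov[context] else float("nan")
    print(f"{context}: {level:.2%} methylation")
```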

  13. Noise Gating Solar Images

    NASA Astrophysics Data System (ADS)

    DeForest, Craig; Seaton, Daniel B.; Darnell, John A.

    2017-08-01

    I present and demonstrate a new, general-purpose post-processing technique, "3D noise gating", that can reduce image noise by an order of magnitude or more without effective loss of spatial or temporal resolution in typical solar applications. Nearly all scientific images are, ultimately, limited by noise. Noise can be direct Poisson "shot noise" from photon counting effects, or introduced by other means such as detector read noise. Noise is typically represented as a random variable (perhaps with location- or image-dependent characteristics) that is sampled once per pixel or once per resolution element of an image sequence. Noise limits many aspects of image analysis, including photometry, spatiotemporal resolution, feature identification, morphology extraction, and background modeling and separation. Identifying and separating noise from image signal is difficult. The common practice of blurring in space and/or time works because most image "signal" is concentrated in the low Fourier components of an image, while noise is evenly distributed. Blurring in space and/or time attenuates the high spatial and temporal frequencies, reducing noise at the expense of also attenuating image detail. Noise gating exploits the same property, "coherence", that we use to identify features in images, to separate image features from noise. Processing image sequences through 3D noise gating results in spectacular (more than 10x) improvements in signal-to-noise ratio, while not blurring bright, resolved features in either space or time. This improves most types of image analysis, including feature identification, time sequence extraction, absolute and relative photometry (including differential emission measure analysis), feature tracking, computer vision, correlation tracking, background modeling, cross-scale analysis, visual display/presentation, and image compression. I will introduce noise gating, describe the method, and show examples from several instruments (including SDO/AIA, SDO/HMI, STEREO/SECCHI, and GOES-R/SUVI) that explore the benefits and limits of the technique.
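
    A heavily simplified sketch of the gating idea is shown below: transform an image cube to the 3D Fourier domain, zero the small-amplitude components that look like a flat noise floor, and transform back. The published method is more careful (locally estimated noise spectra, soft apodized gates), so treat this only as a conceptual illustration.

```python
# Conceptual sketch: hard amplitude gating in the 3D Fourier domain.
import numpy as np

rng = np.random.default_rng(0)
t, y, x = np.meshgrid(np.arange(32), np.arange(64), np.arange(64),
                      indexing="ij")
signal = np.sin(x / 4.0 + y / 5.0 + t / 6.0)   # a coherent moving pattern
cube = signal + rng.normal(scale=0.8, size=signal.shape)

spectrum = np.fft.fftn(cube)
amplitude = np.abs(spectrum)
threshold = 3.0 * np.median(amplitude)          # crude noise-floor estimate
gated = np.where(amplitude > threshold, spectrum, 0.0)
cleaned = np.fft.ifftn(gated).real

err_before = np.mean((cube - signal) ** 2)
err_after = np.mean((cleaned - signal) ** 2)
print(f"mean squared error: {err_before:.3f} -> {err_after:.3f}")
```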

  14. Neurons in the monkey amygdala detect eye-contact during naturalistic social interactions

    PubMed Central

    Mosher, Clayton P.; Zimmerman, Prisca E.; Gothard, Katalin M.

    2014-01-01

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea while fixations stabilize the image [1]. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others [2]. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status [3-6]. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations at the eyes of others and to eye contact. These “eye cells” share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade, or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye-movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. PMID:25283782

  15. Neurons in the monkey amygdala detect eye contact during naturalistic social interactions.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2014-10-20

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea, while fixations stabilize the image. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations on the eyes of others and to eye contact. These "eye cells" share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Architecture and emplacement of flood basalt flow fields: case studies from the Columbia River Basalt Group, NW USA

    NASA Astrophysics Data System (ADS)

    Vye-Brown, C.; Self, S.; Barry, T. L.

    2013-03-01

    The physical features and morphologies of collections of lava bodies emplaced during single eruptions (known as flow fields) can be used to understand flood basalt emplacement mechanisms. Characteristics and internal features of lava lobes and whole flow field morphologies result from the forward propagation, radial spread, and cooling of individual lobes and are used as a tool to understand the architecture of extensive flood basalt lavas. The features of three flood basalt flow fields from the Columbia River Basalt Group are presented, including the Palouse Falls flow field, a unit that is small by common flood basalt proportions (8,890 km2, ~190 km3), and visualized in three dimensions. The architecture of the Palouse Falls flow field is compared to the complex Ginkgo and more extensive Sand Hollow flow fields to investigate the degree to which simple emplacement models represent the style, as well as the spatial and temporal development, of flow fields. Evidence from each flow field supports emplacement by inflation as the predominant mechanism producing thick lobes. Inflation enables existing lobes to transmit lava to form new lobes, thus extending the advance and spread of lava flow fields. Minimum emplacement timescales calculated for each flow field are 19.3 years for Palouse Falls, 8.3 years for Ginkgo, and 16.9 years for Sand Hollow. Simple flow fields can be traced from vent to distal areas and an emplacement sequence visualized, but those with multiple-layered lobes present a degree of complexity that makes lava pathways and emplacement sequences more difficult to identify.

  17. Robotic Vision-Based Localization in an Urban Environment

    NASA Technical Reports Server (NTRS)

    Mchenry, Michael; Cheng, Yang; Matthies

    2007-01-01

    A system of electronic hardware and software, now undergoing development, automatically estimates the location of a robotic land vehicle in an urban environment using a somewhat imprecise map, which has been generated in advance from aerial imagery. This system does not utilize the Global Positioning System and does not include any odometry, inertial measurement units, or any other sensors except a stereoscopic pair of black-and-white digital video cameras mounted on the vehicle. Of course, the system also includes a computer running software that processes the video image data. The software consists mostly of three components corresponding to the three major image-data-processing functions. Visual Odometry: this component automatically tracks point features in the imagery and computes the relative motion of the cameras between sequential image frames. It incorporates a modified version of a visual-odometry algorithm originally published in 1989. The algorithm selects point features, performs multiresolution area-correlation computations to match the features in stereoscopic images, tracks the features through the sequence of images, and uses the tracking results to estimate the six-degree-of-freedom motion of the camera between consecutive stereoscopic pairs of images. Urban Feature Detection and Ranging: using the same data as those processed by the visual-odometry component, this component strives to determine the three-dimensional (3D) coordinates of vertical and horizontal lines that are likely to be parts of, or close to, the exterior surfaces of buildings. It performs the following basic sequence of processes: (1) an edge-detection algorithm is applied, yielding a set of linked lists of edge pixels, a horizontal-gradient image, and a vertical-gradient image; (2) straight-line segments of edges are extracted from the linked lists generated in step 1, and any straight-line segments longer than an arbitrary threshold (e.g., 30 pixels) are assumed to belong to buildings or other artificial objects; (3) a gradient-filter algorithm is used to test straight-line segments longer than the threshold to determine whether they represent edges of natural or artificial objects. In somewhat oversimplified terms, the test is based on the assumption that the gradient of image intensity varies little along a segment that represents the edge of an artificial object.
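
    The tracking step of the visual-odometry component can be sketched with standard tools: corner selection followed by pyramidal Lucas-Kanade tracking in OpenCV. The snippet below shows only that step, on hypothetical frame files; the area-correlation stereo matching and six-degree-of-freedom motion estimation of the full system are not shown.

```python
# Sketch: point-feature tracking between consecutive frames (hypothetical files).
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Select trackable point features in the previous frame.
points = cv2.goodFeaturesToTrack(prev, maxCorners=300,
                                 qualityLevel=0.01, minDistance=7)

# Track them into the current frame with pyramidal Lucas-Kanade.
new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, points, None)
tracked = status.ravel() == 1
flow = (new_points - points)[tracked]
print(f"{tracked.sum()} features tracked, "
      f"median displacement {np.median(np.linalg.norm(flow, axis=-1)):.2f} px")
```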

  18. Learning of goal-relevant and -irrelevant complex visual sequences in human V1.

    PubMed

    Rosenthal, Clive R; Mallik, Indira; Caballero-Gaudes, Cesar; Sereno, Martin I; Soto, David

    2018-06-12

    Learning and memory are supported by a network involving the medial temporal lobe and linked neocortical regions. Emerging evidence indicates that primary visual cortex (i.e., V1) may contribute to recognition memory, but this has been tested only with a single visuospatial sequence as the target memorandum. The present study used functional magnetic resonance imaging to investigate whether human V1 can support the learning of multiple, concurrent complex visual sequences involving discontinuous (second-order) associations. Two peripheral, goal-irrelevant but structured sequences of orientated gratings appeared simultaneously in fixed locations of the right and left visual fields alongside a central, goal-relevant sequence that was in the focus of spatial attention. Pseudorandom sequences were introduced at multiple intervals during the presentation of the three structured visual sequences to provide an online measure of sequence-specific knowledge at each retinotopic location. We found that a network involving the precuneus and V1 was involved in learning the structured sequence presented at central fixation, whereas right V1 was modulated by repeated exposure to the concurrent structured sequence presented in the left visual field. The same result was not found in left V1. These results indicate for the first time that human V1 can support the learning of multiple concurrent sequences involving complex discontinuous inter-item associations, even peripheral sequences that are goal-irrelevant. Copyright © 2018. Published by Elsevier Inc.

  19. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
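
    Two central steps of this pipeline, blockwise DCT and conversion of coefficient differences to threshold units, can be sketched under simplifying assumptions. The snippet below substitutes a single constant visual threshold for DVQ's frequency-dependent thresholds and pools the error with a high-exponent Minkowski norm; it is an illustration, not the patented apparatus.

```python
# Sketch: blockwise DCT error in "threshold units", then pooled.
import numpy as np
from scipy.fft import dctn

def block_dct(frame, size=8):
    h, w = frame.shape
    blocks = frame.reshape(h // size, size, w // size, size).swapaxes(1, 2)
    return dctn(blocks, axes=(-2, -1), norm="ortho")

rng = np.random.default_rng(0)
reference = rng.normal(size=(64, 64))
test = reference + rng.normal(scale=0.1, size=(64, 64))  # "processed" frame

visual_threshold = 0.05  # placeholder; DVQ uses per-frequency thresholds
error = (block_dct(test) - block_dct(reference)) / visual_threshold

# Pool with a high-exponent Minkowski norm, emphasizing the worst errors.
pooled = np.power(np.mean(np.abs(error) ** 4), 1 / 4)
print(f"pooled perceptual error: {pooled:.2f} threshold units")
```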

  20. Selective enhancement of orientation tuning before saccades.

    PubMed

    Ohl, Sven; Kuper, Clara; Rolfs, Martin

    2017-11-01

    Saccadic eye movements cause a rapid sweep of the visual image across the retina and bring the saccade's target into high-acuity foveal vision. Even before saccade onset, visual processing is selectively prioritized at the saccade target. To determine how this presaccadic attention shift exerts its influence on visual selection, we compared the dynamics of perceptual tuning curves before movement onset at the saccade target and in the opposite hemifield. Participants monitored a 30-Hz sequence of randomly oriented gratings for a target orientation. Combining a reverse correlation technique previously used to study orientation tuning in neurons with generalized additive mixed modeling, we found that perceptual reports were tuned to the target orientation. The gain of orientation tuning increased markedly within the last 100 ms before saccade onset. In addition, we observed finer orientation tuning right before saccade onset. This increase in gain and tuning occurred at the saccade target location and was not observed at the incongruent location in the opposite hemifield. The present findings suggest, therefore, that presaccadic attention exerts its influence on vision in a spatially and feature-selective manner, enhancing performance and sharpening feature tuning at the future gaze location before the eyes start moving.

  1. Learning Human Actions by Combining Global Dynamics and Local Appearance.

    PubMed

    Luo, Guan; Yang, Shuang; Tian, Guodong; Yuan, Chunfeng; Hu, Weiming; Maybank, Stephen J

    2014-12-01

    In this paper, we address the problem of human action recognition through combining global temporal dynamics and local visual spatio-temporal appearance features. For this purpose, in the global temporal dimension, we propose to model the motion dynamics with robust linear dynamical systems (LDSs) and use the model parameters as motion descriptors. Since LDSs live in a non-Euclidean space and the descriptors are in non-vector form, we propose a shift-invariant distance based on subspace angles to measure the similarity between LDSs. In the local visual dimension, we construct curved spatio-temporal cuboids along the trajectories of densely sampled feature points and describe them using histograms of oriented gradients (HOG). The distance between motion sequences is computed with the chi-squared histogram distance in the bag-of-words framework. Finally, we perform classification using the maximum margin distance learning method by combining the global dynamic distances and the local visual distances. We evaluate our approach for action recognition on five short-clip data sets, namely Weizmann, KTH, UCF Sports, Hollywood2 and UCF50, as well as three long continuous data sets, namely VIRAT, ADL and CRIM13. We show competitive results compared with current state-of-the-art methods.
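
    The chi-squared histogram distance used for the bag-of-words comparison is compact enough to state directly; the sketch below applies it to two made-up codeword histograms (the LDS subspace-angle distance on the global dynamics is not shown).

```python
# Sketch: chi-squared distance between two normalized histograms.
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # 0.5 * sum (h1 - h2)^2 / (h1 + h2), with eps guarding empty bins.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

a = np.array([0.2, 0.1, 0.4, 0.3])    # codeword histogram of sequence A
b = np.array([0.25, 0.05, 0.5, 0.2])  # codeword histogram of sequence B
print(f"chi-squared distance: {chi2_distance(a, b):.4f}")
```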

  2. bpRNA: large-scale automated annotation and analysis of RNA secondary structure.

    PubMed

    Danaee, Padideh; Rouches, Mason; Wiley, Michelle; Deng, Dezhong; Huang, Liang; Hendrix, David

    2018-05-09

    While RNA secondary structure prediction from sequence data has made remarkable progress, there is a need for improved strategies for annotating the features of RNA secondary structures. Here, we present bpRNA, a novel annotation tool capable of parsing RNA structures, including complex pseudoknot-containing RNAs, to yield an objective, precise, compact, unambiguous, easily interpretable description of all loops, stems, and pseudoknots, along with the positions, sequence, and flanking base pairs of each such structural feature. We also introduce several new informative representations of RNA structure types to improve structure visualization and interpretation. We have further used bpRNA to generate a web-accessible meta-database, 'bpRNA-1m', of over 100,000 known single-molecule secondary structures; this is both more fully and accurately annotated and over 20 times larger than existing databases. We use a subset of the database, with highly similar (≥90% identical) sequences filtered out, to report on statistical trends in sequence, flanking base pairs, and length. Both the bpRNA method and the bpRNA-1m database will be valuable resources, both for the analysis of individual RNA molecules and for large-scale analyses useful for updating RNA energy parameters for computational thermodynamic predictions, improving machine learning models for structure prediction, and benchmarking structure-prediction algorithms.

  3. SolEST database: a "one-stop shop" approach to the study of Solanaceae transcriptomes.

    PubMed

    D'Agostino, Nunzio; Traini, Alessandra; Frusciante, Luigi; Chiusano, Maria Luisa

    2009-11-30

    Since no genome sequences of solanaceous plants have yet been completed, expressed sequence tag (EST) collections represent a reliable tool for broad sampling of Solanaceae transcriptomes, an attractive route for understanding Solanaceae genome functionality and a powerful reference for the structural annotation of emerging Solanaceae genome sequences. We describe the SolEST database http://biosrv.cab.unina.it/solestdb which integrates different EST datasets from both cultivated and wild Solanaceae species and from two species of the genus Coffea. Background as well as processed data contained in the database, extensively linked to external related resources, represent an invaluable source of information for these plant families. Two novel features differentiate SolEST from other resources: i) the option of accessing and then visualizing Solanaceae EST/TC alignments along the emerging tomato and potato genome sequences; ii) the opportunity to compare different Solanaceae assemblies generated by diverse research groups in the attempt to address a common complaint in the SOL community. Different databases have been established worldwide for collecting Solanaceae ESTs and are related in concept, content and utility to the one presented herein. However, the SolEST database has several distinguishing features that make it appealing for the research community and facilitates a "one-stop shop" for the study of Solanaceae transcriptomes.

  4. MMDB: Entrez’s 3D-structure database

    PubMed Central

    Wang, Yanli; Anderson, John B.; Chen, Jie; Geer, Lewis Y.; He, Siqian; Hurwitz, David I.; Liebert, Cynthia A.; Madej, Thomas; Marchler, Gabriele H.; Marchler-Bauer, Aron; Panchenko, Anna R.; Shoemaker, Benjamin A.; Song, James S.; Thiessen, Paul A.; Yamashita, Roxanne A.; Bryant, Stephen H.

    2002-01-01

    Three-dimensional structures are now known within many protein families and it is quite likely, in searching a sequence database, that one will encounter a homolog with known structure. The goal of Entrez’s 3D-structure database is to make this information, and the functional annotation it can provide, easily accessible to molecular biologists. To this end Entrez’s search engine provides three powerful features. (i) Sequence and structure neighbors; one may select all sequences similar to one of interest, for example, and link to any known 3D structures. (ii) Links between databases; one may search by term matching in MEDLINE, for example, and link to 3D structures reported in these articles. (iii) Sequence and structure visualization; identifying a homolog with known structure, one may view molecular-graphic and alignment displays, to infer approximate 3D structure. In this article we focus on two features of Entrez’s Molecular Modeling Database (MMDB) not described previously: links from individual biopolymer chains within 3D structures to a systematic taxonomy of organisms represented in molecular databases, and links from individual chains (and compact 3D domains within them) to structure neighbors, other chains (and 3D domains) with similar 3D structure. MMDB may be accessed at http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=Structure. PMID:11752307

  5. Rtools: a web server for various secondary structural analyses on single RNA sequences.

    PubMed

    Hamada, Michiaki; Ono, Yukiteru; Kiryu, Hisanori; Sato, Kengo; Kato, Yuki; Fukunaga, Tsukasa; Mori, Ryota; Asai, Kiyoshi

    2016-07-08

    The secondary structures, as well as the nucleotide sequences, are important features of RNA molecules that characterize their functions. According to the thermodynamic model, however, the probability of any particular secondary structure is very small. As a consequence, any tool that predicts the secondary structures of RNAs has limited accuracy. On the other hand, there are a few tools that compensate for imperfect predictions by calculating and visualizing secondary structural information from RNA sequences, and it is desirable to obtain the rich information from those tools through a friendly interface. We implemented a web server of tools that predict secondary structures and calculate various structural features based on energy models of secondary structures. Simply by submitting an RNA sequence to the web server, the user can obtain different types of secondary-structure solutions, marginal probabilities such as base-pairing probabilities, loop probabilities and accessibilities of local bases, the energy changes caused by arbitrary base mutations, and measures for validating the predicted secondary structures. The web server is available at http://rtools.cbrc.jp, which integrates the software tools CentroidFold, CentroidHomfold, IPKnot, CapR, Raccess, Rchange and RintD. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. Modeling Geometric-Temporal Context With Directional Pyramid Co-Occurrence for Action Recognition.

    PubMed

    Yuan, Chunfeng; Li, Xi; Hu, Weiming; Ling, Haibin; Maybank, Stephen J

    2014-02-01

    In this paper, we present a new geometric-temporal representation for visual action recognition based on local spatio-temporal features. First, we propose a modified covariance descriptor under the log-Euclidean Riemannian metric to represent the spatio-temporal cuboids detected in the video sequences. Compared with previously proposed covariance descriptors, our descriptor can be measured and clustered in Euclidean space. Second, to capture the geometric-temporal contextual information, we construct a directional pyramid co-occurrence matrix (DPCM) to describe the spatio-temporal distribution of the vector-quantized local feature descriptors extracted from a video. DPCM characterizes the co-occurrence statistics of local features as well as the spatio-temporal positional relationships among the concurrent features. These statistics provide strong descriptive power for action recognition. To use DPCM for action recognition, we propose a directional pyramid co-occurrence matching kernel to measure the similarity of videos. The proposed method achieves state-of-the-art performance and improves on the recognition performance of bag-of-visual-words (BOVW) models by a large margin on six public data sets. For example, on the KTH data set, it achieves 98.78% accuracy while the BOVW approach only achieves 88.06%. On both the Weizmann and UCF CIL data sets, the highest possible accuracy of 100% is achieved.

  7. Reefgenomics.Org - a repository for marine genomics data.

    PubMed

    Liew, Yi Jin; Aranda, Manuel; Voolstra, Christian R

    2016-01-01

    Over the last decade, technological advancements have substantially decreased the cost and time of obtaining large amounts of sequencing data. Paired with the exponentially increased computing power, individual labs are now able to sequence genomes or transcriptomes to investigate biological questions of interest. This has led to a significant increase in available sequence data. Although the bulk of data published in articles are stored in public sequence databases, very often, only raw sequencing data are available; miscellaneous data such as assembled transcriptomes, genome annotations etc. are not easily obtainable through the same means. Here, we introduce our website (http://reefgenomics.org) that aims to centralize genomic and transcriptomic data from marine organisms. Besides providing convenient means to download sequences, we provide (where applicable) a genome browser to explore available genomic features, and a BLAST interface to search through the hosted sequences. Through the interface, multiple datasets can be queried simultaneously, allowing for the retrieval of matching sequences from organisms of interest. The minimalistic, no-frills interface reduces visual clutter, making it convenient for end-users to search and explore processed sequence data. DATABASE URL: http://reefgenomics.org. © The Author(s) 2016. Published by Oxford University Press.

  8. Parallel perceptual enhancement and hierarchic relevance evaluation in an audio-visual conjunction task.

    PubMed

    Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E

    2008-10-21

    Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's influential attention model [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136] posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e., conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual feature alone or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality: separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than evaluation of a fused multimodal percept.

  9. SraTailor: graphical user interface software for processing and visualizing ChIP-seq data.

    PubMed

    Oki, Shinya; Maehara, Kazumitsu; Ohkawa, Yasuyuki; Meno, Chikara

    2014-12-01

    Raw data from ChIP-seq (chromatin immunoprecipitation combined with massively parallel DNA sequencing) experiments are deposited in public databases as SRAs (Sequence Read Archives) that are publicly available to all researchers. However, to graphically visualize ChIP-seq data of interest, the corresponding SRAs must be downloaded and converted into BigWig format, a process that involves complicated command-line processing. This task requires users to possess skill with script languages and sequence data processing, a requirement that prevents a wide range of biologists from exploiting SRAs. To address these challenges, we developed SraTailor, a GUI (Graphical User Interface) software package that automatically converts an SRA into a BigWig-formatted file. Simplicity of use is one of the most notable features of SraTailor: entering an accession number of an SRA and clicking the mouse are the only steps required to obtain BigWig-formatted files and to graphically visualize the extents of reads at given loci. SraTailor is also able to make peak calls, generate files of other formats, process users' own data, and accept various command-line-like options. Therefore, this software makes ChIP-seq data fully exploitable by a wide range of biologists. SraTailor is freely available at http://www.devbio.med.kyushu-u.ac.jp/sra_tailor/, and runs on both Mac and Windows machines. © 2014 The Authors Genes to Cells © 2014 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
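
    The command-line steps that SraTailor automates look roughly like the sketch below. The tool chain is an assumption (sra-tools, bowtie2, samtools, bedtools, and the UCSC bedGraphToBigWig utility), not a description of SraTailor's internals, and the accession and index names are placeholders.

        import subprocess

        def sra_to_bigwig(acc, bowtie2_index, chrom_sizes):
            """Sketch of the SRA -> BigWig conversion a GUI like SraTailor wraps."""
            run = lambda cmd: subprocess.run(cmd, shell=True, check=True)
            run(f"prefetch {acc}")                                  # fetch .sra archive
            run(f"fasterq-dump {acc} -o {acc}.fastq")               # extract reads
            run(f"bowtie2 -x {bowtie2_index} -U {acc}.fastq"
                f" | samtools sort -o {acc}.bam -")                 # align and sort
            run(f"bedtools genomecov -ibam {acc}.bam -bg"
                f" | sort -k1,1 -k2,2n > {acc}.bedGraph")           # per-base coverage
            run(f"bedGraphToBigWig {acc}.bedGraph {chrom_sizes} {acc}.bw")

        sra_to_bigwig("SRR000001", "mm10", "mm10.chrom.sizes")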

  10. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image content used in mobile phone services, digital libraries, and catalog services is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose a feature-based collaborative filtering (FBCF) method that reflects the user's most recent preferences by representing their purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as feature clusters in a multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides higher-quality recommendations and better performance than typical collaborative filtering and content-based filtering techniques.
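
    The record does not give the exact inter-cluster distance function, so the sketch below uses one plausible choice, the minimum distance between cluster centroids, to illustrate how two users' purchase histories in a visual feature space might be compared when selecting neighbors.

        import numpy as np

        def intercluster_distance(clusters_a, clusters_b):
            """Minimum pairwise distance between two users' cluster centroids
            (one plausible choice; the paper's exact function may differ)."""
            ca = np.array([c.mean(axis=0) for c in clusters_a])
            cb = np.array([c.mean(axis=0) for c in clusters_b])
            d = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=2)
            return d.min()

        # Toy purchase histories as clusters of points in a 3-D feature space.
        rng = np.random.default_rng(0)
        user1 = [rng.normal(0, 1, (5, 3)), rng.normal(3, 1, (4, 3))]
        user2 = [rng.normal(0.5, 1, (6, 3))]
        print(intercluster_distance(user1, user2))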

  11. Hexicon 2: Automated Processing of Hydrogen-Deuterium Exchange Mass Spectrometry Data with Improved Deuteration Distribution Estimation

    NASA Astrophysics Data System (ADS)

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L.; Hamprecht, Fred A.; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, while the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge at http://hx2.mpimf-heidelberg.mpg.de.
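
    As a toy illustration of the quantity being recovered, the sketch below computes an intensity-weighted deuteration centroid from a peptide isotope envelope. The m/z grid and intensities are invented, and Hexicon 2's actual estimation is far more elaborate, recovering full deuteration distributions by iterative deconvolution.

        import numpy as np

        def deuteration_centroid_shift(mz, intensity, undeuterated_centroid):
            """Intensity-weighted centroid of an isotope envelope, minus the
            undeuterated reference centroid, in m/z units."""
            return np.average(mz, weights=intensity) - undeuterated_centroid

        mz = np.array([800.0, 800.5, 801.0, 801.5, 802.0])   # toy m/z grid
        inten = np.array([0.10, 0.30, 0.35, 0.20, 0.05])     # toy intensities
        print(deuteration_centroid_shift(mz, inten, 800.4))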

  12. Hexicon 2: automated processing of hydrogen-deuterium exchange mass spectrometry data with improved deuteration distribution estimation.

    PubMed

    Lindner, Robert; Lou, Xinghua; Reinstein, Jochen; Shoeman, Robert L; Hamprecht, Fred A; Winkler, Andreas

    2014-06-01

    Hydrogen-deuterium exchange (HDX) experiments analyzed by mass spectrometry (MS) provide information about the dynamics and the solvent accessibility of protein backbone amide hydrogen atoms. Continuous improvement of MS instrumentation has contributed to the increasing popularity of this method; however, comprehensive automated data analysis is only beginning to mature. We present Hexicon 2, an automated pipeline for data analysis and visualization based on the previously published program Hexicon (Lou et al. 2010). Hexicon 2 employs the sensitive NITPICK peak detection algorithm of its predecessor in a divide-and-conquer strategy and adds new features, such as chromatogram alignment and improved peptide sequence assignment. The unique feature of deuteration distribution estimation was retained in Hexicon 2 and improved using an iterative deconvolution algorithm that is robust even to noisy data. In addition, Hexicon 2 provides a data browser that facilitates quality control and provides convenient access to common data visualization tasks. Analysis of a benchmark dataset demonstrates superior performance of Hexicon 2 compared with its predecessor in terms of deuteration centroid recovery and deuteration distribution estimation. Hexicon 2 greatly reduces data analysis time compared with manual analysis, while the increased number of peptides provides redundant coverage of the entire protein sequence. Hexicon 2 is a standalone application available free of charge at http://hx2.mpimf-heidelberg.mpg.de.

  13. Holding a manual response sequence in memory can disrupt vocal responses that share semantic features with the manual response.

    PubMed

    Fournier, Lisa Renee; Wiediger, Matthew D; McMeans, Ryan; Mattson, Paul S; Kirkwood, Joy; Herzog, Theibot

    2010-07-01

    Holding an action plan in memory for later execution can delay execution of another action if the actions share a similar (compatible) feature. This compatibility interference (CI) occurs for actions that share the same response modality (e.g., manual responses). We investigated whether CI generalizes to actions that utilize different response modalities (manual and vocal). In three experiments, participants planned and withheld a sequence of key-presses with the left or right hand based on the visual identity of the first stimulus, and then immediately executed a speeded vocal response ('left' or 'right') to a second visual stimulus. The vocal response was based on discriminating stimulus color (Experiment 1), reading a written word (Experiment 2), or reporting the antonym of a written word (Experiment 3). Results showed that CI occurred when the manual response hand (e.g., left) was compatible with the identity of the vocal response (e.g., 'left') in Experiments 1 and 3, but not in Experiment 2. This suggests that partial overlap of semantic codes is sufficient to obtain CI unless the intervening action can be accessed automatically (Experiment 2). These findings are consistent with the code occupation hypothesis and the general framework of the theory of event coding (Behav Brain Sci 24:849-878, 2001a; Behav Brain Sci 24:910-937, 2001b).

  14. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  15. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  16. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    PubMed

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  17. BlockLogo: visualization of peptide and sequence motif conservation

    PubMed Central

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian; Sun, Jing; Schönbach, Christian; Reinherz, Ellis L.; Zhang, Guang Lan; Brusic, Vladimir

    2013-01-01

    BlockLogo is a web-server application for visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs using calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, a selection of motif positions, the type of sequence, and an output format definition. The output includes the BlockLogo along with a sequence logo and a table of motif frequencies. We deployed BlockLogo as an online application and have demonstrated its utility through examples that show visualization of T-cell epitopes and B-cell epitopes (both continuous and discontinuous). Our additional example shows a visualization and analysis of structural motifs that determine the specificity of peptide binding to HLA-DR molecules. The BlockLogo server also employs selected experimentally validated prediction algorithms to enable on-the-fly prediction of MHC binding affinity to 15 common HLA class I and class II alleles, as well as visual analysis of discontinuous epitopes from multiple sequence alignments. It enables the visualization and analysis of structural and functional motifs that are usually described as regular expressions. It provides a compact view of discontinuous motifs composed of distant positions within biological sequences. BlockLogo is available at: http://research4.dfci.harvard.edu/cvc/blocklogo/ and http://methilab.bu.edu/blocklogo/ PMID:24001880
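
    A minimal sketch of the core computation, block entropy over user-selected alignment columns, is given below. The toy alignment is invented, and the server's own implementation may differ in details such as gap and ambiguity handling.

        import math
        from collections import Counter

        def block_entropy(alignment, positions):
            """Shannon entropy (bits) of the residue blocks formed by
            concatenating the selected columns of each aligned sequence."""
            blocks = ["".join(seq[i] for i in positions) for seq in alignment]
            n = len(blocks)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(blocks).values())

        msa = ["MKVL", "MKIL", "MRVL", "MKVL"]
        print(block_entropy(msa, [1, 2]))   # discontinuous motif at columns 2-3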

  18. Mutations in C8ORF37 cause Bardet Biedl syndrome (BBS21)

    PubMed Central

    Heon, Elise; Kim, Gunhee; Qin, Sophie; Garrison, Janelle E.; Tavares, Erika; Vincent, Ajoy; Nuangchamnong, Nina; Scott, C. Anthony; Slusarski, Diane C.; Sheffield, Val C.

    2016-01-01

    Bardet Biedl syndrome (BBS) is a multisystem genetically heterogeneous ciliopathy that most commonly leads to obesity, photoreceptor degeneration, digit anomalies, genito-urinary abnormalities, as well as cognitive impairment with autism, among other features. Sequencing of a DNA sample from a 17-year-old female affected with BBS did not identify any mutation in the known BBS genes. Whole-genome sequencing identified a novel loss-of-function disease-causing homozygous mutation (K102*) in C8ORF37, a gene coding for a cilia protein. The proband was overweight (body mass index 29.1) with a slowly progressive rod-cone dystrophy, a mild learning difficulty, high myopia, three limb post-axial polydactyly, horseshoe kidney, abnormally positioned uterus and elevated liver enzymes. Mutations in C8ORF37 were previously associated with severe autosomal recessive retinal dystrophies (retinitis pigmentosa RP64 and cone-rod dystrophy CORD16) but not BBS. To elucidate the functional role of C8ORF37 in a vertebrate system, we performed gene knockdown in Danio rerio and assessed the cardinal features of BBS and visual function. Knockdown of c8orf37 resulted in impaired visual behavior and BBS-related phenotypes, specifically, defects in the formation of Kupffer’s vesicle and delays in retrograde transport. Specificity of these phenotypes to BBS knockdown was shown with rescue experiments. Over-expression of human missense mutations in zebrafish also resulted in impaired visual behavior and BBS-related phenotypes. This is the first functional validation and association of C8ORF37 mutations with the BBS phenotype, which identifies BBS21. The zebrafish studies hereby show that C8ORF37 variants underlie clinically diagnosed BBS-related phenotypes as well as isolated retinal degeneration. PMID:27008867

  19. Colorimetric Sensor for Label Free Detection of Porcine PCR Product (ID: 18)

    NASA Astrophysics Data System (ADS)

    Ali, M. E.; Hashim, U.; Bari, M. F.; Dhahi, Th. S.

    2011-05-01

    This report describes the use of citrate-coated gold nanoparticles (GNPs), 40±5 nm in diameter, as a colorimetric sensor to visually detect the presence of a 17-base swine-specific conserved sequence and nucleotide mismatches in the mixed PCR products of pig, deer, and shad cytochrome b genes. These PCR amplicons were 109 base pairs in size and were amplified with a pair of common primers. Colloidal GNPs changed color from pinkish-red to purple-gray in 2 mM PBS buffer, losing their characteristic surface plasmon resonance peak at 530 nm and gaining new features between 620 and 800 nm in the absorption spectrum, indicating strong aggregation. The particles were stabilized against salt-induced aggregation, retaining their spectral features and characteristic color upon adsorption of single-stranded DNA. The PCR products, without any additional processing, were hybridized with a 17-nucleotide swine probe prior to exposure to GNPs. At a critical annealing temperature (55 °C) that differentiated between match and mismatch pairing, the probe hybridized with the pig PCR product and dehybridized from the deer's and shad's. The binding of the dehybridized probe to GNPs prevented them from salt-induced aggregation, so they retained their characteristic red color. The assay did not need any surface-modification chemistry or labeling steps. The results were determined visually and validated by absorption spectroscopy. The entire assay (hybridization plus visual detection) was performed in less than 10 min. The assay obviated the need for complex RFLP, sequencing, or blotting to differentiate same-size PCR products. We foresee applications of the assay in species assignment for food analysis, mismatch detection in genetic screening, and homology studies among closely related species.

  20. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic content are very important for content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy, and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theories. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result, a 97.8% correct classification rate measured against collected human judgments, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval and indexing.

  1. A method for automatically abstracting visual documents

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.

    1994-01-01

    Visual documents--motion sequences on film, videotape, and digital recording--constitute a major source of information for the Space Agency, as well as all other government and private sector entities. This article describes a method for automatically selecting key frames from visual documents. These frames may in turn be used to represent the total image sequence of visual documents in visual libraries, hypermedia systems, and training systems. In one test, the algorithm reduced 51 minutes of video sequences to 134 frames, a reduction of information in the range of 700:1.
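
    One simple key-frame criterion consistent with this idea, though not necessarily the paper's exact feature set, is to keep a frame whenever its gray-level histogram drifts far enough from the last selected key frame:

        import numpy as np

        def key_frames(frames, threshold=0.25, bins=32):
            """Select a frame when the total-variation distance between its
            histogram and the last key frame's histogram exceeds a threshold."""
            keys = [0]
            ref, _ = np.histogram(frames[0], bins=bins, range=(0, 256))
            ref = ref / ref.sum()
            for i in range(1, len(frames)):
                h, _ = np.histogram(frames[i], bins=bins, range=(0, 256))
                h = h / h.sum()
                if 0.5 * np.abs(h - ref).sum() > threshold:
                    keys.append(i)
                    ref = h
            return keys

        # Stand-in "video": a dark scene followed by a bright one.
        video = np.concatenate([np.random.randint(0, 128, (50, 48, 64)),
                                np.random.randint(128, 256, (50, 48, 64))])
        print(key_frames(video))   # -> [0, 50]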

  2. Integration and visualization of systems biology data in context of the genome

    PubMed Central

    2010-01-01

    Background High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment. PMID:20642854
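
    The "common key" idea reduces to an interval-overlap join. A minimal sketch with invented records:

        # Toy records: (chrom, start, end, payload) in half-open coordinates.
        transcripts = [("chr1", 100, 500, "geneA"), ("chr1", 900, 1400, "geneB")]
        chip_peaks = [("chr1", 450, 520, "peak1"), ("chr1", 1300, 1350, "peak2"),
                      ("chr2", 10, 60, "peak3")]

        def overlap(a, b):
            """Same chromosome and overlapping half-open intervals."""
            return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

        # Join two heterogeneous data types on their genomic coordinates.
        joined = [(t[3], p[3]) for t in transcripts for p in chip_peaks
                  if overlap(t, p)]
        print(joined)   # [('geneA', 'peak1'), ('geneB', 'peak2')]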

  3. LIFEdb: a database for functional genomics experiments integrating information from external sources, and serving as a sample tracking system

    PubMed Central

    Bannasch, Detlev; Mehrle, Alexander; Glatting, Karl-Heinz; Pepperkok, Rainer; Poustka, Annemarie; Wiemann, Stefan

    2004-01-01

    We have implemented LIFEdb (http://www.dkfz.de/LIFEdb) to link information regarding novel human full-length cDNAs generated and sequenced by the German cDNA Consortium with functional information on the encoded proteins produced in functional genomics and proteomics approaches. The database also serves as a sample-tracking system to manage the process from cDNA to experimental read-out and data interpretation. A web interface enables the scientific community to explore and visualize features of the annotated cDNAs and ORFs combined with experimental results, and thus helps to unravel new features of proteins with as yet unknown functions. PMID:14681468

  4. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering of images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated with image sequences from solar observations.

  5. SpliceRover: Interpretable Convolutional Neural Networks for Improved Splice Site Prediction.

    PubMed

    Zuallaert, Jasper; Godin, Fréderic; Kim, Mijung; Soete, Arne; Saeys, Yvan; De Neve, Wesley

    2018-06-21

    During the last decade, improvements in high-throughput sequencing have generated a wealth of genomic data. Functionally interpreting these sequences and finding the biological signals that are hallmarks of gene function and regulation is currently mostly done using automated genome annotation platforms, which mainly rely on integrated machine learning frameworks to identify different functional sites of interest, including splice sites. Splicing is an essential step in the gene regulation process, and the correct identification of splice sites is a major cornerstone in a genome annotation system. In this paper, we present SpliceRover, a predictive deep learning approach that outperforms the state-of-the-art in splice site prediction. SpliceRover uses convolutional neural networks (CNNs), which have been shown to obtain cutting edge performance on a wide variety of prediction tasks. We adapted this approach to deal with genomic sequence inputs, and show it consistently outperforms already existing approaches, with relative improvements in prediction effectiveness of up to 80.9% when measured in terms of false discovery rate. However, a major criticism of CNNs concerns their "black box" nature, as mechanisms to obtain insight into their reasoning processes are limited. To facilitate interpretability of the SpliceRover models, we introduce an approach to visualize the biologically relevant information learnt. We show that our visualization approach is able to recover features known to be important for splice site prediction (binding motifs around the splice site, presence of polypyrimidine tracts and branch points), as well as reveal new features (e.g., several types of exclusion patterns near splice sites). SpliceRover is available as a web service. The prediction tool and instructions can be found at http://bioit2.irc.ugent.be/splicerover/. Supplementary materials are available at Bioinformatics online.
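
    The basic building block of such models is a convolutional filter scanned over a one-hot-encoded sequence. The sketch below shows that primitive with a hand-made "GT" dinucleotide filter; it is not the SpliceRover architecture itself, which stacks many learned filters and nonlinearities.

        import numpy as np

        BASES = "ACGT"

        def one_hot(seq):
            """4 x L one-hot encoding of a DNA string."""
            x = np.zeros((4, len(seq)))
            for j, b in enumerate(seq):
                x[BASES.index(b), j] = 1.0
            return x

        def conv_scan(x, motif_filter):
            """Valid 1-D convolution of a 4 x k filter along the sequence."""
            k = motif_filter.shape[1]
            return np.array([(x[:, i:i + k] * motif_filter).sum()
                             for i in range(x.shape[1] - k + 1)])

        x = one_hot("AACGTAAGTACGT")   # toy window around candidate donor sites
        gt = np.zeros((4, 2))
        gt[BASES.index("G"), 0] = gt[BASES.index("T"), 1] = 1.0
        print(conv_scan(x, gt))        # peaks of 2.0 where "GT" occurs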

  6. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  7. GFFview: A Web Server for Parsing and Visualizing Annotation Information of Eukaryotic Genome.

    PubMed

    Deng, Feilong; Chen, Shi-Yi; Wu, Zhou-Lin; Hu, Yongsong; Jia, Xianbo; Lai, Song-Jia

    2017-10-01

    Owing to the wide application of RNA sequencing (RNA-seq) technology, more and more eukaryotic genomes have been extensively annotated, including gene structure, alternative splicing, and noncoding loci. Genome annotation information is prevalently stored as plain text in General Feature Format (GFF), which can be hundreds or thousands of megabytes in size. Manipulating GFF files is therefore a challenge for biologists with no bioinformatics skills. In this study, we provide a web server (GFFview) for parsing the annotation information of eukaryotic genomes and generating statistical descriptions of six indices for visualization. GFFview is very useful for investigating the quality of, and differences between, de novo assembled transcriptomes in RNA-seq studies.
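
    The kind of parsing and tallying such a server performs is easy to sketch. The snippet below counts column-3 feature types in a GFF3 file; the file name is a placeholder, and GFFview's six indices are richer than this single tally.

        from collections import Counter

        def feature_type_counts(gff_path):
            """Count column-3 feature types in a GFF3 file, skipping comments."""
            counts = Counter()
            with open(gff_path) as fh:
                for line in fh:
                    if line.startswith("#") or not line.strip():
                        continue
                    fields = line.rstrip("\n").split("\t")
                    if len(fields) >= 8:
                        counts[fields[2]] += 1
            return counts

        # print(feature_type_counts("annotation.gff3"))
        # e.g. Counter({'exon': 120000, 'CDS': 98000, 'mRNA': 15000, ...})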

  8. CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.

    PubMed

    Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj

    2018-01-01

    Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.

  9. 'You see?' Teaching and learning how to interpret visual cues during surgery.

    PubMed

    Cope, Alexandra C; Bezemer, Jeff; Kneebone, Roger; Lingard, Lorelei

    2015-11-01

    The ability to interpret visual cues is important in many medical specialties, including surgery, in which poor outcomes are largely attributable to errors of perception rather than poor motor skills. However, we know little about how trainee surgeons learn to make judgements in the visual domain. We explored how trainees learn visual cue interpretation in the operating room. A multiple case study design was used. Participants were postgraduate surgical trainees and their trainers. Data included observer field notes, and integrated video- and audio-recordings from 12 cases representing more than 11 hours of observation. A constant comparative methodology was used to identify dominant themes. Visual cue interpretation was a recurrent feature of trainer-trainee interactions and was achieved largely through the pedagogic mechanism of co-construction. Co-construction was a dialogic sequence between trainer and trainee in which they explored what they were looking at together to identify and name structures or pathology. Co-construction took two forms: 'guided co-construction', in which the trainer steered the trainee to see what the trainer was seeing, and 'authentic co-construction', in which neither trainer nor trainee appeared certain of what they were seeing and pieced together the information collaboratively. Whether the co-construction activity was guided or authentic appeared to be influenced by case difficulty and trainee seniority. Co-construction was shown to occur verbally, through discussion, and also through non-verbal exchanges in which gestures made with laparoscopic instruments contributed to the co-construction discourse. In the training setting, learning visual cue interpretation occurs in part through co-construction. Co-construction is a pedagogic phenomenon that is well recognised in the context of learning to interpret verbal information. In articulating the features of co-construction in the visual domain, this work enables the development of explicit pedagogic strategies for maximising trainees' learning of visual cue interpretation. This is relevant to multiple medical specialties in which judgements must be based on visual information. © 2015 John Wiley & Sons Ltd.

  10. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization.

    PubMed

    Jung, Sang-Kyu; McDonald, Karen

    2011-08-16

    Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software package called Visual Gene Developer. The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net.
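
    As a toy example of the simplest strategy such software can chain into a larger optimization plan, the sketch below back-translates a protein using one preferred codon per residue. The codon table fragment is illustrative only and is not taken from Visual Gene Developer.

        # Hypothetical preferred-codon fragment (E. coli-like preferences).
        PREFERRED = {"M": "ATG", "K": "AAA", "V": "GTG", "L": "CTG", "*": "TAA"}

        def naive_codon_optimize(protein):
            """Back-translate a protein with each residue's preferred codon."""
            return "".join(PREFERRED[aa] for aa in protein)

        print(naive_codon_optimize("MKVL*"))   # -> ATGAAAGTGCTGTAA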

  11. Visual gene developer: a fully programmable bioinformatics software for synthetic gene optimization

    PubMed Central

    2011-01-01

    Background Direct gene synthesis is becoming more popular owing to decreases in gene synthesis pricing. Compared with using natural genes, gene synthesis provides a good opportunity to optimize gene sequence for specific applications. In order to facilitate gene optimization, we have developed a stand-alone software package called Visual Gene Developer. Results The software not only provides general functions for gene analysis and optimization along with an interactive user-friendly interface, but also includes unique features such as programming capability, dedicated mRNA secondary structure prediction, artificial neural network modeling, network & multi-threaded computing, and user-accessible programming modules. The software allows a user to analyze and optimize a sequence using main menu functions or specialized module windows. Alternatively, gene optimization can be initiated by designing a gene construct and configuring an optimization strategy. A user can choose several predefined or user-defined algorithms to design a complicated strategy. The software provides expandable functionality as platform software supporting module development using popular script languages such as VBScript and JScript in the software programming environment. Conclusion Visual Gene Developer is useful for both researchers who want to quickly analyze and optimize genes, and those who are interested in developing and testing new algorithms in bioinformatics. The software is available for free download at http://www.visualgenedeveloper.net. PMID:21846353

  12. GBshape: a genome browser database for DNA shape annotations

    PubMed Central

    Chiu, Tsu-Pei; Yang, Lin; Zhou, Tianyin; Main, Bradley J.; Parker, Stephen C.J.; Nuzhdin, Sergey V.; Tullius, Thomas D.; Rohs, Remo

    2015-01-01

    Many regulatory mechanisms require a high degree of specificity in protein-DNA binding. Nucleotide sequence does not provide an answer to the question of why a protein binds only to a small subset of the many putative binding sites in the genome that share the same core motif. Whereas higher-order effects, such as chromatin accessibility, cooperativity and cofactors, have been described, DNA shape recently gained attention as another feature that fine-tunes the DNA binding specificities of some transcription factor families. Our Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms. Additional genomes can easily be added using the GBshape framework. GBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution. As biological applications, we illustrate the periodicity of DNA shape features that are present in nucleosome-occupied sequences from human, fly and worm, and we demonstrate structural similarities between transcription start sites in the genomes of four Drosophila species. PMID:25326329

  13. Phylo-mLogo: an interactive and hierarchical multiple-logo visualization tool for alignment of many sequences

    PubMed Central

    Shih, Arthur Chun-Chieh; Lee, DT; Peng, Chin-Lin; Wu, Yu-Wei

    2007-01-01

    Background When aligning several hundred or thousand sequences, such as epidemic virus sequences or homologous/orthologous sequences of some big gene families, to reconstruct the epidemiological history or their phylogenies, how to analyze and visualize the alignment results of many sequences has become a new challenge for computational biologists. Although there are several tools available for visualization of very long sequence alignments, few of them are applicable to the alignments of many sequences. Results A multiple-logo alignment visualization tool, called Phylo-mLogo, is presented in this paper. Phylo-mLogo calculates the variabilities and homogeneities of alignment sequences by base frequencies or entropies. Different from the traditional representations of sequence logos, Phylo-mLogo not only displays the global logo patterns of the whole alignment of multiple sequences, but also demonstrates their local homologous logos for each clade hierarchically. In addition, Phylo-mLogo also allows the user to focus only on the analysis of some important, structurally or functionally constrained sites in the alignment selected by the user or by built-in automatic calculation. Conclusion With Phylo-mLogo, the user can symbolically and hierarchically visualize hundreds of aligned sequences simultaneously and easily check the changes of their amino acid sites when analyzing many homologous/orthologous or influenza virus sequences. More information on Phylo-mLogo can be found at the project website. PMID:17319966

  14. ProfileGrids: a sequence alignment visualization paradigm that avoids the limitations of Sequence Logos.

    PubMed

    Roca, Alberto I

    2014-01-01

    The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm that represents an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org.
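
    The underlying data structure is straightforward to sketch: a residue-by-position matrix of relative frequencies. The toy alignment below is invented, and JProfileGrid itself adds color coding and interactive queries on top of such a matrix.

        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY-"

        def profile_grid(alignment):
            """Rows are residue types, columns are alignment positions,
            entries are relative frequencies."""
            grid = np.zeros((len(AA), len(alignment[0])))
            for seq in alignment:
                for j, r in enumerate(seq):
                    grid[AA.index(r), j] += 1
            return grid / len(alignment)

        msa = ["MKVL-", "MKIL-", "MRVLD", "MKVLD"]
        grid = profile_grid(msa)
        print(grid[AA.index("K"), 1])   # frequency of K at column 2 -> 0.75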

  15. Visual object tracking by correlation filters and online learning

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Xia, Gui-Song; Lu, Qikai; Shen, Weiming; Zhang, Liangpei

    2018-06-01

    Due to the complexity of background scenarios and the variation of target appearance, it is difficult to achieve high accuracy and fast speed in object tracking. Currently, correlation filter based trackers (CFTs) show promising performance in object tracking. CFTs estimate the target's position by correlation filters with different kinds of features. However, most CFTs can hardly re-detect the target in the case of long-term tracking drift. In this paper, a feature-integration object tracker named correlation filters and online learning (CFOL) is proposed. CFOL estimates the target's position and its corresponding correlation score using the same discriminative correlation filter with multiple features. To reduce tracking drift, a new sampling and updating strategy for online learning is proposed. Experiments conducted on 51 image sequences demonstrate that the proposed algorithm is superior to state-of-the-art approaches.
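
    The correlation-filter principle such trackers build on can be sketched in a few lines: learn a filter in the Fourier domain so that correlation with the target patch yields a sharp peak, then locate that peak in new frames. This is a minimal single-channel version (in the spirit of MOSSE), not CFOL's multi-feature filter or its online update rule.

        import numpy as np

        def train_filter(patch, response, lam=1e-3):
            """Closed-form filter: H = (G . conj(F)) / (F . conj(F) + lam)."""
            F, G = np.fft.fft2(patch), np.fft.fft2(response)
            return (G * np.conj(F)) / (F * np.conj(F) + lam)

        def respond(H, patch):
            """Correlation response map; its peak estimates the target position."""
            return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

        # Desired response: a sharp Gaussian centred on the target at (32, 32).
        y, x = np.mgrid[0:64, 0:64]
        g = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 2.0 ** 2))
        patch = np.random.rand(64, 64)
        H = train_filter(patch, g)
        resp = respond(H, patch)
        print(np.unravel_index(resp.argmax(), resp.shape))   # ~ (32, 32)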

  16. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually induced near-infrared spectroscopy (NIRS) responses were utilized to design a brain-computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on an LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenation of alternating flickering segments and resting segments. The flickering segments had a fixed duration of 3 s, whereas the resting segments were chosen randomly within 15-20 s to create mutual independence among the different flickering sequences. Six subjects were recruited in this study and were requested to gaze at the four visual stimuli one after another in a random order. Since visual responses in the human brain are time-locked to the onsets of visual stimuli, and the flickering sequences of distinct visual stimuli were designed to be mutually independent, the NIRS responses induced by the user's gazed target can be discerned from those of non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs were averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
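
    The averaging step is simple to sketch: only epochs time-locked to the gazed stimulus share a consistent response, so averaging over that stimulus's onsets reinforces its evoked waveform while unrelated activity cancels. The sampling rate, onsets, and injected response below are all invented.

        import numpy as np

        def epoch_average(signal, onsets, n_samples):
            """Average fixed-length epochs time-locked to stimulus onsets."""
            epochs = [signal[o:o + n_samples] for o in onsets
                      if o + n_samples <= len(signal)]
            return np.mean(epochs, axis=0)

        fs = 10                                  # toy 10 Hz NIRS sampling rate
        signal = np.random.randn(600) * 0.5      # 60 s of baseline noise
        onsets = [50, 180, 320, 470]             # one stimulus's flicker onsets
        for o in onsets:                         # inject a small evoked response
            signal[o:o + 30] += np.hanning(30)
        print(epoch_average(signal, onsets, 3 * fs).max())   # clear evoked peak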

  17. SVGenes: a library for rendering genomic features in scalable vector graphic format.

    PubMed

    Etherington, Graham J; MacLean, Daniel

    2013-08-01

    Drawing genomic features in attractive and informative ways is a key task in the visualization of genomics data. Scalable Vector Graphics (SVG) is a modern and flexible open standard that provides advanced features including modular graphic design, advanced web interactivity, and animation within a suitable client. SVGs do not suffer from loss of image quality on re-scaling and allow individual elements of a graphic to be edited independently of the whole image. These features make SVG a potentially useful format for the preparation of publication-quality figures including genomic objects such as genes or sequencing coverage, and for web applications that require rich user interaction with the graphical elements. SVGenes is a Ruby-language library that uses SVG primitives to render typical genomic glyphs through a simple and flexible Ruby interface. The library implements a simple Page object that spaces and contains horizontal Track objects, which in turn style, colour, and position the features within them. Tracks are the level at which visual information is supplied, providing the full styling capability of the SVG standard. Genomic entities like genes, transcripts, and histograms are modelled in Glyph objects that are attached to a track and take advantage of SVG primitives to render the genomic features in a track as any of a selection of defined glyphs. The feature model within SVGenes is simple but flexible and not dependent on particular existing gene feature formats, meaning graphics for existing datasets can easily be created without the need for conversion. The library is provided as a Ruby Gem from https://rubygems.org/gems/bio-svgenes under the MIT license, and open source code is available at https://github.com/danmaclean/bioruby-svgenes, also under the MIT License. dan.maclean@tsl.ac.uk.
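
    SVGenes itself is Ruby, but the primitives it emits are plain SVG. The Python sketch below writes the kind of directed gene glyph (body rectangle plus arrowhead) that a Track would position for each feature; the coordinates and styling are arbitrary.

        def gene_glyph_svg(start, end, strand, y=40, scale=0.1):
            """Emit a directed gene glyph as raw SVG elements."""
            x1, x2 = start * scale, end * scale
            if strand == "+":
                bx1, bx2, tip = x1, x2 - 8, x2
                pts = f"{tip - 8},{y - 10} {tip},{y} {tip - 8},{y + 10}"
            else:
                bx1, bx2, tip = x1 + 8, x2, x1
                pts = f"{tip + 8},{y - 10} {tip},{y} {tip + 8},{y + 10}"
            return (f'<rect x="{bx1}" y="{y - 5}" width="{bx2 - bx1}" '
                    f'height="10" fill="steelblue"/>'
                    f'<polygon points="{pts}" fill="steelblue"/>')

        svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="300" height="80">'
               + gene_glyph_svg(200, 1800, "+") + '</svg>')
        with open("gene.svg", "w") as fh:
            fh.write(svg)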

  18. Infrared maritime target detection using a probabilistic single Gaussian model of sea clutter in Fourier domain

    NASA Astrophysics Data System (ADS)

    Zhou, Anran; Xie, Weixin; Pei, Jihong; Chen, Yapei

    2018-02-01

    For ship target detection in cluttered infrared image sequences, a robust detection method based on a probabilistic single Gaussian model of the sea background in the Fourier domain is put forward. The amplitude spectrum sequences at each frequency point of pure seawater images in the Fourier domain, which are more stable than the gray-value sequences of individual background pixels in the spatial domain, are modeled as Gaussian. Next, a probability-weighted matrix is built based on the stability of the pure seawater's total energy spectrum in the row direction, to make the Gaussian model more accurate. Then, the foreground frequency points are separated from the background frequency points by the model. Finally, false-alarm points are removed utilizing ships' shape features. The performance of the proposed method is tested by visual and quantitative comparisons with others.
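
    A minimal version of the idea, with invented data: model each frequency bin's amplitude across background frames as a Gaussian, then flag bins in the test frame that deviate strongly. The full method's probability weighting and shape-based false-alarm removal are omitted.

        import numpy as np

        def anomalous_frequency_bins(frames, k=3.5):
            """Per-bin Gaussian background model of amplitude spectra; return
            a mask of bins in the last frame exceeding k sigma."""
            amp = np.abs(np.fft.fft2(frames, axes=(1, 2)))
            mu = amp[:-1].mean(axis=0)
            sigma = amp[:-1].std(axis=0) + 1e-9
            return np.abs(amp[-1] - mu) > k * sigma

        sea = np.random.rand(30, 64, 64)      # stand-in sea-clutter sequence
        sea[-1, 30:36, 30:36] += 5.0          # inject a small bright "ship"
        print(anomalous_frequency_bins(sea).sum(), "anomalous frequency bins")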

  19. Temporal properties of material categorization and material rating: visual vs non-visual material features.

    PubMed

    Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki

    2015-10-01

    Humans can easily recognize material categories of objects, such as glass, stone, and plastic, by sight. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures belonged to the same or different material categories. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Storage of features, conjunctions and objects in visual working memory.

    PubMed

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.

  1. Visual attention to features by associative learning.

    PubMed

    Gozli, Davood G; Moskowitz, Joshua B; Pratt, Jay

    2014-11-01

    Expecting a particular stimulus can facilitate processing of that stimulus over others, but what is the fate of other stimuli that are known to co-occur with the expected stimulus? This study examined the impact of learned association on feature-based attention. The findings show that the effectiveness of an uninformative color transient in orienting attention can change by learned associations between colors and the expected target shape. In an initial acquisition phase, participants learned two distinct sequences of stimulus-response-outcome, where stimuli were defined by shape ('S' vs. 'H'), responses were localized key-presses (left vs. right), and outcomes were colors (red vs. green). Next, in a test phase, while expecting a target shape (80% probable), participants showed reliable attentional orienting to the color transient associated with the target shape, and showed no attentional orienting with the color associated with the alternative target shape. This bias seemed to be driven by learned association between shapes and colors, and not modulated by the response. In addition, the bias seemed to depend on observing target-color conjunctions, since encountering the two features disjunctively (without spatiotemporal overlap) did not replicate the findings. We conclude that associative learning - likely mediated by mechanisms underlying visual object representation - can extend the impact of goal-driven attention to features associated with a target stimulus. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Phylo-VISTA: Interactive visualization of multiple DNA sequence alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Nameeta; Couronne, Olivier; Pennacchio, Len A.

    The power of multi-sequence comparison for biological discovery is well established. The need for new capabilities to visualize and compare cross-species alignment data is intensified by the growing number of genomic sequence datasets being generated for an ever-increasing number of organisms. To be efficient, these visualization algorithms must support the ability to accommodate consistently a wide range of evolutionary distances in a comparison framework based upon phylogenetic relationships. Results: We have developed Phylo-VISTA, an interactive tool for analyzing multiple alignments by visualizing a similarity measure for multiple DNA sequences. The complexity of visual presentation is effectively organized using a framework based upon interspecies phylogenetic relationships. The phylogenetic organization supports rapid, user-guided interspecies comparison. To aid in navigation through large sequence datasets, Phylo-VISTA leverages concepts from VISTA that provide a user with the ability to select and view data at varying resolutions. The combination of multiresolution data visualization and analysis, combined with the phylogenetic framework for interspecies comparison, produces a highly flexible and powerful tool for visual data analysis of multiple sequence alignments. Availability: Phylo-VISTA is available at http://www-gsd.lbl.gov/phylovista. It requires an Internet browser with Java Plugin 1.4.2 and it is integrated into the global alignment program LAGAN at http://lagan.stanford.edu

  3. Retrospective comparison of three-dimensional imaging sequences in the visualization of posterior fossa cranial nerves.

    PubMed

    Ors, Suna; Inci, Ercan; Turkay, Rustu; Kokurcan, Atilla; Hocaoglu, Elif

    2017-12-01

    To compare the efficacy of three-dimensional SPACE (sampling perfection with application-optimized contrasts using different flip-angle evolutions) and CISS (constructive interference in steady state) sequences in the imaging of the cisternal segments of cranial nerves V-XII. Temporal MRI scans from 50 patients (F:M ratio, 27:23; mean age, 44.5±15.9 years) admitted to our hospital with vertigo, tinnitus, and hearing loss were retrospectively analyzed. All patients had both CISS and SPACE sequences. Quantitative analysis of SPACE and CISS sequences was performed by measuring the ventricle-to-parenchyma contrast-to-noise ratio (CNR). Qualitative analysis of differences in visualization capability, image quality, and severity of artifacts was also conducted. A score ranging from 'no artefact' to 'severe artefacts and unreadable' was used for the assessment of artifacts, and a score from 'not visualized' to 'completely visualized' for the assessment of image quality. The distribution of variables was checked by the Kolmogorov-Smirnov test. t-tests and McNemar's test were used to determine statistical significance. Rates of complete visualization of posterior fossa cranial nerves were as follows: nerve V (100% for both sequences), nerve VI (94% in SPACE, 86% in CISS), nerves VII-VIII (100% for both), the IX-XI nerve complex (96%, 88%), and nerve XII (58%, 46%) (p<0.05). SPACE sequences showed fewer artifacts than CISS sequences (p<0.002). Copyright © 2017 Elsevier B.V. All rights reserved.
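
    The record does not spell out the CNR formula, so a standard definition is assumed here, with S_v and S_p the mean signal intensities of ventricle and parenchyma regions of interest and sigma_noise the standard deviation of background noise:

        % Assumed contrast-to-noise ratio between ventricle (v) and parenchyma (p)
        \mathrm{CNR}_{v,p} = \frac{\lvert S_v - S_p \rvert}{\sigma_{\mathrm{noise}}}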

  4. Bioreplicated visual features of nanofabricated buprestid beetle decoys evoke stereotypical male mating flights

    PubMed Central

    Domingue, Michael J.; Lakhtakia, Akhlesh; Pulsifer, Drew P.; Hall, Loyal P.; Badding, John V.; Bischof, Jesse L.; Martín-Palma, Raúl J.; Imrei, Zoltán; Janik, Gergely; Mastro, Victor C.; Hazen, Missy; Baker, Thomas C.

    2014-01-01

    Recent advances in nanoscale bioreplication processes present the potential for novel basic and applied research into organismal behavioral processes. Insect behavior potentially could be affected by physical features existing at the nanoscale level. We used nano-bioreplicated visual decoys of female emerald ash borer beetles (Agrilus planipennis) to evoke stereotypical mate-finding behavior, whereby males fly to and alight on the decoys as they would on real females. Using an industrially scalable nanomolding process, we replicated and evaluated the importance of two features of the outer cuticular surface of the beetle’s wings: structural interference coloration of the elytra by multilayering of the epicuticle and fine-scale surface features consisting of spicules and spines that scatter light into intense strands. Two types of decoys that lacked one or both of these elements were fabricated, one type nano-bioreplicated and the other 3D-printed with no bioreplicated surface nanostructural elements. Both types were colored with green paint. The light-scattering properties of the nano-bioreplicated surfaces were verified by shining a white laser on the decoys in a dark room and projecting the scattering pattern onto a white surface. Regardless of the coloration mechanism, the nano-bioreplicated decoys evoked the complete attraction and landing sequence of Agrilus males. In contrast, males made brief flying approaches toward the decoys without nanostructured features, but diverted away before alighting on them. The nano-bioreplicated decoys were also electroconductive, a feature used on traps such that beetles alighting onto them were stunned, killed, and collected. PMID:25225359

  5. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolic sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can analyze raw sequence data automatically and directly, with little or even no prior information or domain knowledge.

  6. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    PubMed Central

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.

    2018-01-01

    Background: Recent EEG-SSVEP signal based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED screen displays high-frequency waveforms with a better refresh rate. In this study, we present high-frequency sine-wave simple and rhythmic patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns, 3 simple (repetition of each of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35), were chosen. A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23–30 (25 ± 2.1) yrs) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data, including SSVEP features and fatigue rate for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and > 90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had high and similar accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli warrant further research for human SSVEP-BCI studies. PMID:29892219
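
    The CCA step mentioned above is a standard SSVEP detection technique: the multichannel EEG epoch is correlated against sine/cosine reference signals at each candidate stimulation frequency (plus harmonics), and the frequency with the highest canonical correlation is taken as the response. A minimal sketch under assumed names (the EEG array is a random placeholder; the frequency set follows the paper, the sampling rate does not):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_score(eeg, freq, fs, n_harmonics=2):
            """Canonical correlation between an EEG epoch (samples x
            channels) and sin/cos references at freq and its harmonics."""
            t = np.arange(eeg.shape[0]) / fs
            refs = np.column_stack(
                [f(2 * np.pi * (h + 1) * freq * t)
                 for h in range(n_harmonics) for f in (np.sin, np.cos)])
            u, v = CCA(n_components=1).fit_transform(eeg, refs)
            return np.corrcoef(u.ravel(), v.ravel())[0, 1]

        fs = 250                          # assumed sampling rate (Hz)
        eeg = np.random.randn(2 * fs, 8)  # placeholder 2-s, 8-channel epoch
        scores = {f: cca_score(eeg, f, fs) for f in (25, 30, 35)}
        print(max(scores, key=scores.get))  # frequency with highest correlation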

  7. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    PubMed

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP signal based BCI studies have used high-frequency square-pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, an LED screen displays high-frequency waveforms with a better refresh rate. In this study, we present high-frequency sine-wave simple and rhythmic patterns with a low THD rate on an LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimulus patterns, 3 simple (repetition of each of the above 3 frequencies, e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences, e.g., P25-30-35), were chosen. A hardware setup with a low THD rate (<0.1%) was designed to present these patterns on the LED. Twenty-two normal subjects (aged 23-30 (25 ± 2.1) yrs) were enrolled. A visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data, including SSVEP features and fatigue rate for the different visual stimulus patterns, were statistically evaluated. Results: All 9 visual stimulus patterns elicited SSVEP responses. Overall, the obtained accuracy rates were 88.35% for PSD and > 90% for CCA and LASSO (for TWs > 1 s). The high-frequency rhythmic pattern group with a low THD rate showed a higher accuracy rate (99.24%) than the simple pattern group (98.48%). Repeated-measures ANOVA showed a significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of the rhythmic [3.85 ± 2.13] and the simple pattern group [3.96 ± 2.21] (P = 0.63). The rhythmic group had lower within-group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as the lowest individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, the rhythmic and simple pattern groups had high and similar accuracy rates. Rhythmic stimulus patterns showed a nonsignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple high-frequency sine-wave visual stimuli warrant further research for human SSVEP-BCI studies.

  8. Visual management of large scale data mining projects.

    PubMed

    Shah, I; Hunter, L

    2000-01-01

    This paper describes a unified framework for visualizing the preparations for, and results of, hundreds of machine learning experiments. These experiments were designed to improve the accuracy of enzyme functional predictions from sequence, and in many cases were successful. Our system provides graphical user interfaces for defining and exploring training datasets and various representational alternatives, for inspecting the hypotheses induced by various types of learning algorithms, for visualizing the global results, and for inspecting in detail results for specific training sets (functions) and examples (proteins). The visualization tools serve as a navigational aid through a large amount of sequence data and induced knowledge. They provided significant help in understanding both the significance and the underlying biological explanations of our successes and failures. Using these visualizations it was possible to efficiently identify weaknesses of the modular sequence representations and induction algorithms which suggest better learning strategies. The context in which our data mining visualization toolkit was developed was the problem of accurately predicting enzyme function from protein sequence data. Previous work demonstrated that approximately 6% of enzyme protein sequences are likely to be assigned incorrect functions on the basis of sequence similarity alone. In order to test the hypothesis that more detailed sequence analysis using machine learning techniques and modular domain representations could address many of these failures, we designed a series of more than 250 experiments using information-theoretic decision tree induction and naive Bayesian learning on local sequence domain representations of problematic enzyme function classes. In more than half of these cases, our methods were able to perfectly discriminate among various possible functions of similar sequences. We developed and tested our visualization techniques on this application.
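
    As a rough illustration of the induction step described above, one can train an information-theoretic (entropy-criterion) decision tree and a naive Bayes classifier on k-mer count representations of protein sequences. The toy sequences, labels and featurization below are hypothetical stand-ins for the paper's modular domain representations:

        from itertools import product
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.tree import DecisionTreeClassifier

        def kmer_counts(seq, k=2, alphabet="ACDEFGHIKLMNPQRSTVWY"):
            """Count every length-k substring over the amino-acid alphabet."""
            index = {"".join(p): i
                     for i, p in enumerate(product(alphabet, repeat=k))}
            counts = [0] * len(index)
            for i in range(len(seq) - k + 1):
                counts[index[seq[i:i + k]]] += 1
            return counts

        # Toy protein fragments with hypothetical function labels (0/1).
        seqs = ["MKTAYIAKQR", "MKTAHIAKQR", "GAVLIPFWMG", "GAVLIPFYMG"]
        labels = [0, 0, 1, 1]
        X = [kmer_counts(s) for s in seqs]

        tree = DecisionTreeClassifier(criterion="entropy").fit(X, labels)
        bayes = MultinomialNB().fit(X, labels)
        print(tree.predict([kmer_counts("MKTAYIAKQR")]),
              bayes.predict([kmer_counts("GAVLIPFWMG")]))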

  9. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

    Previous research has investigated ways to quantify the visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors) and high-level visual features (scene-level entities that convey semantic information, such as objects), and have examined how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158

  10. Zipper plot: visualizing transcriptional activity of genomic regions.

    PubMed

    Avila Cobos, Francisco; Anckaert, Jasper; Volders, Pieter-Jan; Everaert, Celine; Rombaut, Dries; Vandesompele, Jo; De Preter, Katleen; Mestdagh, Pieter

    2017-05-02

    Reconstructing transcript models from RNA-sequencing (RNA-seq) data and establishing these as independent transcriptional units can be a challenging task. Current state-of-the-art tools for long non-coding RNA (lncRNA) annotation are mainly based on evolutionary constraints, which may result in false negatives due to the overall limited conservation of lncRNAs. To tackle this problem we have developed the Zipper plot, a novel visualization and analysis method that enables users to simultaneously interrogate thousands of human putative transcription start sites (TSSs) in relation to various features that are indicative for transcriptional activity. These include publicly available CAGE-sequencing, ChIP-sequencing and DNase-sequencing datasets. Our method only requires three tab-separated fields (chromosome, genomic coordinate of the TSS and strand) as input and generates a report that includes a detailed summary table, a Zipper plot and several statistics derived from this plot. Using the Zipper plot, we found evidence of transcription for a set of well-characterized lncRNAs and observed that fewer mono-exonic lncRNAs have CAGE peaks overlapping with their TSSs compared to multi-exonic lncRNAs. Using publicly available RNA-seq data, we found more than one hundred cases where junction reads connected protein-coding gene exons with a downstream mono-exonic lncRNA, revealing the need for a careful evaluation of lncRNA 5'-boundaries. Our method is implemented using the statistical programming language R and is freely available as a webtool.
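
    The three-field input format is simple enough to generate from any annotation source. A minimal sketch that writes a hypothetical tab-separated file of putative TSSs in the form the abstract describes (the coordinates are made up):

        # Hypothetical putative TSSs: (chromosome, genomic coordinate, strand).
        tss_records = [
            ("chr1", 1167629, "+"),
            ("chr7", 55019017, "-"),
            ("chrX", 73820651, "+"),
        ]

        with open("putative_tss.tsv", "w") as handle:
            for chrom, pos, strand in tss_records:
                handle.write(f"{chrom}\t{pos}\t{strand}\n")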

  11. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    Because the human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them, this paper proposes a bionic-sensing visual inspection model that simulates human visual imaging features, based on the human visual attention mechanism, to detect defects of metallic materials in the mechanical field. First, biologically salient low-level features are extracted, and experience-based defect annotations are used as the intermediate features of simulated visual perception. An SVM is then trained on the high-level features of the visual defects of the metal material. Finally, by weighting each of these feature levels, a biomimetic detection model for metal-material defects that simulates human visual characteristics is obtained.
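
    The final training step is conventional supervised SVM classification. A minimal sketch with hypothetical per-region feature vectors standing in for the low-, intermediate- and high-level visual features the abstract describes:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical feature vectors (e.g., saliency, edge density,
        # texture statistics) with defect / no-defect labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        model.fit(X, y)
        print("training accuracy:", model.score(X, y))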

  12. Getting a cue before getting a clue: Event-related potentials to inference in visual narrative comprehension

    PubMed Central

    Cohn, Neil; Kutas, Marta

    2015-01-01

    Inference has long been emphasized in the comprehension of verbal and visual narratives. Here, we measured event-related brain potentials to visual sequences designed to elicit inferential processing. In Impoverished sequences, an expressionless “onlooker” watches an undepicted event (e.g., person throws a ball for a dog, then watches the dog chase it) just prior to a surprising finale (e.g., someone else returns the ball), which should lead to an inference (i.e., the different person retrieved the ball). Implied sequences alter this narrative structure by adding visual cues to the critical panel, such as a surprised facial expression on the onlooker, implying that they saw an unexpected, albeit undepicted, event. In contrast, Expected sequences show a predictable, but then confounded, event (i.e., dog retrieves ball, then different person returns it), and Explicit sequences depict the unexpected event (i.e., different person retrieves then returns ball). At the critical penultimate panel, sequences representing depicted events (Explicit, Expected) elicited a larger posterior positivity (P600) than the relatively passive events of an onlooker (Impoverished, Implied), though Implied sequences were slightly more positive than Impoverished sequences. At the subsequent and final panel, a posterior positivity (P600) was greater to images in Impoverished sequences than those in Explicit and Implied sequences, which did not differ. In addition, both sequence types requiring inference (Implied, Impoverished) elicited a larger frontal negativity than those explicitly depicting events (Expected, Explicit). These results show that neural processing differs for visual narratives omitting events versus those depicting events, and that the presence of subtle visual cues can modulate such effects, presumably by altering narrative structure. PMID:26320706

  13. Observers' cognitive states modulate how visual inputs relate to gaze control.

    PubMed

    Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G

    2016-09-01

    Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  14. ProfileGrids: a sequence alignment visualization paradigm that avoids the limitations of Sequence Logos

    PubMed Central

    2014-01-01

    Background The 2013 BioVis Contest provided an opportunity to evaluate different paradigms for visualizing protein multiple sequence alignments. Such data sets are becoming extremely large and thus taxing current visualization paradigms. Sequence Logos represent consensus sequences but have limitations for protein alignments. As an alternative, ProfileGrids are a new protein sequence alignment visualization paradigm that represents an alignment as a color-coded matrix of the residue frequency occurring at every homologous position in the aligned protein family. Results The JProfileGrid software program was used to analyze the BioVis contest data sets to generate figures for comparison with the Sequence Logo reference images. Conclusions The ProfileGrid representation allows for the clear and effective analysis of protein multiple sequence alignments. This includes both a general overview of the conservation and diversity sequence patterns as well as the interactive ability to query the details of the protein residue distributions in the alignment. The JProfileGrid software is free and available from http://www.ProfileGrid.org. PMID:25237393
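
    The core of a ProfileGrid is simply a column-wise residue frequency matrix over the alignment. JProfileGrid itself is a Java program; the following is only an illustration of the underlying computation on a toy alignment:

        from collections import Counter

        def profile_grid(alignment, alphabet="ACDEFGHIKLMNPQRSTVWY-"):
            """Frequency of each residue (and gap) at every column of a
            multiple sequence alignment."""
            n = len(alignment)
            grid = []
            for column in zip(*alignment):
                counts = Counter(column)
                grid.append({res: counts.get(res, 0) / n for res in alphabet})
            return grid

        toy_alignment = ["MKT-AY", "MRT-AY", "MKTSAY"]
        grid = profile_grid(toy_alignment)
        print(grid[1]["K"], grid[1]["R"])  # ~0.67 K and ~0.33 R at column 2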

  15. Visual affective classification by combining visual and text features.

    PubMed

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short texts associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task.
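
    Dempster's rule of combination, on which the fusion scheme above relies, combines two basic mass assignments over subsets of the frame of discernment and renormalizes away the conflicting mass. A minimal sketch for a two-class affective frame (the mass values are invented):

        def combine(m1, m2):
            """Dempster's rule for mass functions keyed by frozensets."""
            combined, conflict = {}, 0.0
            for a, wa in m1.items():
                for b, wb in m2.items():
                    inter = a & b
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + wa * wb
                    else:
                        conflict += wa * wb
            return {s: w / (1.0 - conflict) for s, w in combined.items()}

        P, N = frozenset({"positive"}), frozenset({"negative"})
        theta = P | N                            # total ignorance
        visual = {P: 0.6, N: 0.1, theta: 0.3}    # hypothetical visual evidence
        textual = {P: 0.5, N: 0.2, theta: 0.3}   # hypothetical textual evidence
        print(combine(visual, textual))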

  16. Visual affective classification by combining visual and text features

    PubMed Central

    Liu, Ningning; Wang, Kai; Jin, Xin; Gao, Boyang; Dellandréa, Emmanuel; Chen, Liming

    2017-01-01

    Affective analysis of images in social networks has drawn much attention, and the texts surrounding images have proven to provide valuable semantic meaning about image content, which can hardly be represented by low-level visual features. In this paper, we propose a novel approach for the visual affective classification (VAC) task. This approach combines visual representations with novel text features through a fusion scheme based on Dempster-Shafer (D-S) Evidence Theory. Specifically, we not only investigate different types of visual features and fusion methods for VAC, but also propose textual features to effectively capture emotional semantics from the short texts associated with images based on word similarity. Experiments are conducted on three publicly available databases: the International Affective Picture System (IAPS), the Artistic Photos and the MirFlickr Affect set. The results demonstrate that the proposed approach combining visual and textual features provides promising results for the VAC task. PMID:28850566

  17. News video story segmentation method using fusion of audio-visual features

    NASA Astrophysics Data System (ADS)

    Wen, Jun; Wu, Ling-da; Zeng, Pu; Luan, Xi-dao; Xie, Yu-xiang

    2007-11-01

    News story segmentation is an important aspect of news video analysis. This paper presents a method for news video story segmentation. Unlike prior works, which are based on visual feature transforms, the proposed technique uses audio features as a baseline and fuses visual features with them to refine the results. First, it selects silence clips as audio feature candidate points, and selects shot boundaries and anchor shots as two kinds of visual feature candidate points. The audio feature candidates are then used as cues, and fusion methods that exploit the diverse visual candidates to refine the audio candidates are developed to obtain story boundaries. Experimental results show that this method has high efficiency and adaptability to different kinds of news video.

  18. Feature precedence in processing multifeature visual information in the human brain: an event-related potential study.

    PubMed

    Liu, B; Meng, X; Wu, G; Huang, Y

    2012-05-17

    In this article, we aimed to study whether feature precedence exists in the cognitive processing of multifeature visual information in the human brain. In our experiment, we focused on two important visual features: color and shape. In order to avoid the presence of semantic constraints between them and the resulting impact, a pure color and a simple geometric shape were chosen as the color feature and shape feature of the visual stimulus, respectively. We adopted an "old/new" paradigm to study the cognitive processing of the color feature, the shape feature and the combination of the two. The experiment consisted of three tasks: a Color task, a Shape task and a Color-Shape task. The results showed that a feature-based pattern is activated in the human brain when processing multifeature visual information without semantic association between features. Furthermore, the shape feature was processed earlier than the color feature, and the cognitive processing of the color feature was more difficult than that of the shape feature. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  19. Foldit Standalone: a video game-derived protein structure manipulation interface using Rosetta

    PubMed Central

    Kleffner, Robert; Flatten, Jeff; Leaver-Fay, Andrew; Baker, David; Siegel, Justin B.; Khatib, Firas; Cooper, Seth

    2017-01-01

    Abstract Summary: Foldit Standalone is an interactive graphical interface to the Rosetta molecular modeling package. In contrast to most command-line or batch interactions with Rosetta, Foldit Standalone is designed to allow easy, real-time, direct manipulation of protein structures, while also giving access to the extensive power of Rosetta computations. Derived from the user interface of the scientific discovery game Foldit (itself based on Rosetta), Foldit Standalone has added more advanced features and removed the competitive game elements. Foldit Standalone was built from the ground up with a custom rendering and event engine, configurable visualizations and interactions driven by Rosetta. Foldit Standalone contains, among other features: electron density and contact map visualizations, multiple sequence alignment tools for template-based modeling, rigid body transformation controls, RosettaScripts support and an embedded Lua interpreter. Availability and Implementation: Foldit Standalone is available for download at https://fold.it/standalone, under the Rosetta license, which is free for academic and non-profit users. It is implemented in cross-platform C++ and binary executables are available for Windows, macOS and Linux. Contact: scooper@ccs.neu.edu PMID:28481970

  20. Bodily-visual practices and turn continuation

    PubMed Central

    Ford, Cecilia E.; Thompson, Sandra A.; Drake, Veronika

    2012-01-01

    This paper considers points in turn construction where conversation researchers have shown that talk routinely continues beyond possible turn completion, but where we find bodily-visual behavior doing such turn extension work. The bodily-visual behaviors we examine share many features with verbal turn extensions, but we argue that embodied movements have distinct properties that make them well-suited for specific kinds of social action, including stance display and by-play in sequences framed as subsidiary to a simultaneous and related verbal exchange. Our study is in line with a research agenda taking seriously the point made by Goodwin (2000a, b, 2003), Hayashi (2003, 2005), Iwasaki (2009), and others that scholars seeking to account for practices in language and social interaction do themselves a disservice if they privilege the verbal dimension; rather, as suggested in Stivers/Sidnell (2005), each semiotic system/modality, while coordinated with others, has its own organization. With the current exploration of bodily-visual turn extensions, we hope to contribute to a growing understanding of how these different modes of organization are managed concurrently and in concert by interactants in carrying out their everyday social actions. PMID:23526861

  1. Feature-based attentional modulations in the absence of direct visual stimulation.

    PubMed

    Serences, John T; Boynton, Geoffrey M

    2007-07-19

    When faced with a crowded visual scene, observers must selectively attend to behaviorally relevant objects to avoid sensory overload. Often this selection process is guided by prior knowledge of a target-defining feature (e.g., the color red when looking for an apple), which enhances the firing rate of visual neurons that are selective for the attended feature. Here, we used functional magnetic resonance imaging and a pattern classification algorithm to predict the attentional state of human observers as they monitored a visual feature (one of two directions of motion). We find that feature-specific attention effects spread across the visual field, even to regions of the scene that do not contain a stimulus. This spread of feature-based attention to empty regions of space may facilitate the perception of behaviorally relevant stimuli by increasing sensitivity to attended features at all locations in the visual field.

  2. Why UV vision and red vision are important for damselfish (Pomacentridae): structural and expression variation in opsin genes.

    PubMed

    Stieb, Sara M; Cortesi, Fabio; Sueess, Lorenz; Carleton, Karen L; Salzburger, Walter; Marshall, N J

    2017-03-01

    Coral reefs belong to the most diverse ecosystems on our planet. The diversity in coloration and lifestyles of coral reef fishes makes them a particularly promising system to study the role of visual communication and adaptation. Here, we investigated the evolution of visual pigment genes (opsins) in damselfish (Pomacentridae) and examined whether structural and expression variation of opsins can be linked to ecology. Using DNA sequence data of a phylogenetically representative set of 31 damselfish species, we show that all but one visual opsin are evolving under positive selection. In addition, selection on opsin tuning sites, including cases of divergent, parallel, convergent and reversed evolution, has been strong throughout the radiation of damselfish, emphasizing the importance of visual tuning for this group. The highest functional variation in opsin protein sequences was observed in the short- followed by the long-wavelength end of the visual spectrum. Comparative gene expression analyses of a subset of the same species revealed that with SWS1, RH2B and RH2A always being expressed, damselfish use an overall short-wavelength shifted expression profile. Interestingly, not only did all species express SWS1 - a UV-sensitive opsin - and possess UV-transmitting lenses, most species also feature UV-reflective body parts. This suggests that damsels might benefit from a close-range UV-based 'private' communication channel, which is likely to be hidden from 'UV-blind' predators. Finally, we found that LWS expression is highly correlated to feeding strategy in damsels with herbivorous feeders having an increased LWS expression, possibly enhancing the detection of benthic algae. © 2016 John Wiley & Sons Ltd.

  3. Relating interesting quantitative time series patterns with text events and text features

    NASA Astrophysics Data System (ADS)

    Wanner, Franz; Schreck, Tobias; Jentner, Wolfgang; Sharalieva, Lyubka; Keim, Daniel A.

    2013-12-01

    In many application areas, the key to successful data analysis is the integrated analysis of heterogeneous data. One example is the financial domain, where time-dependent and highly frequent quantitative data (e.g., trading volume and price information) and textual data (e.g., economic and political news reports) need to be considered jointly. Data analysis tools need to support an integrated analysis, which allows studying the relationships between textual news documents and quantitative properties of the stock market price series. In this paper, we describe a workflow and tool that allows a flexible formation of hypotheses about text features and their combinations, which reflect quantitative phenomena observed in stock data. To support such an analysis, we combine the analysis steps of frequent quantitative and text-oriented data using an existing a-priori method. First, based on heuristics we extract interesting intervals and patterns in large time series data. The visual analysis supports the analyst in exploring parameter combinations and their results. The identified time series patterns are then input for the second analysis step, in which all identified intervals of interest are analyzed for frequent patterns co-occurring with financial news. An a-priori method supports the discovery of such sequential temporal patterns. Then, various text features like the degree of sentence nesting, noun phrase complexity, the vocabulary richness, etc. are extracted from the news to obtain meta patterns. Meta patterns are defined by a specific combination of text features which significantly differ from the text features of the remaining news data. Our approach combines a portfolio of visualization and analysis techniques, including time-, cluster- and sequence visualization and analysis functionality. We provide two case studies, showing the effectiveness of our combined quantitative and textual analysis work flow. The workflow can also be generalized to other application domains such as data analysis of smart grids, cyber physical systems or the security of critical infrastructure, where the data consists of a combination of quantitative and textual time series data.

  4. Considerations in video playback design: using optic flow analysis to examine motion characteristics of live and computer-generated animation sequences.

    PubMed

    Woo, Kevin L; Rieucau, Guillaume

    2008-07-01

    The increasing use of the video playback technique in behavioural ecology reveals a growing need to ensure better control of the visual stimuli that focal animals experience. Technological advances now allow researchers to develop computer-generated animations instead of using video sequences of live-acting demonstrators. However, care must be taken to match the motion characteristics (speed and velocity) of the animation to the original video source. Here, we present a tool based on optic flow analysis for measuring how closely the motion characteristics of computer-generated animations resemble those of videos of live-acting animals. We examined three distinct displays (tail-flick (TF), push-up body rock (PUBR), and slow arm wave (SAW)) exhibited by animations of Jacky dragons (Amphibolurus muricatus) that were compared to the original video sequences of live lizards. We found no significant differences between the motion characteristics of the videos and animations across all three displays. Our results showed that the animations matched the speed and velocity features of each display. Researchers need to ensure that similar motion characteristics are represented in animation and video stimuli, and this feature is a critical component in the future success of the video playback technique.
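
    Dense optic flow of the kind used for such comparisons can be computed with OpenCV's Farnebäck algorithm. A minimal sketch that compares mean flow magnitude between two clips (the file names are hypothetical placeholders):

        import cv2
        import numpy as np

        def load_gray(path, n_frames=100):
            """Read up to n_frames from a video as grayscale arrays."""
            cap, frames = cv2.VideoCapture(path), []
            for _ in range(n_frames):
                ok, frame = cap.read()
                if not ok:
                    break
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
            cap.release()
            return frames

        def mean_speed(frames):
            """Average optic-flow magnitude (pixels/frame) over a clip."""
            speeds = []
            for prev, nxt in zip(frames, frames[1:]):
                flow = cv2.calcOpticalFlowFarneback(
                    prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
                speeds.append(np.linalg.norm(flow, axis=2).mean())
            return float(np.mean(speeds))

        print("video:", mean_speed(load_gray("live_lizard.avi")))
        print("animation:", mean_speed(load_gray("animation.avi")))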

  5. Reduction of Left Visual Field Lexical Decision Accuracy as a Result of Concurrent Nonverbal Auditory Stimulation

    ERIC Educational Resources Information Center

    Van Strien, Jan W.

    2004-01-01

    To investigate whether concurrent nonverbal sound sequences would affect visual-hemifield lexical processing, lexical-decision performance of 24 strongly right-handed students (12 men, 12 women) was measured in three conditions: baseline, concurrent neutral sound sequence, and concurrent emotional sound sequence. With the neutral sequence,…

  6. Caudate nucleus reactivity predicts perceptual learning rate for visual feature conjunctions.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2015-04-15

    Useful information in the visual environment is often contained in specific conjunctions of visual features (e.g., color and shape). The ability to quickly and accurately process such conjunctions can be learned. However, the neural mechanisms responsible for such learning remain largely unknown. It has been suggested that some forms of visual learning might involve the dopaminergic neuromodulatory system (Roelfsema et al., 2010; Seitz and Watanabe, 2005), but this hypothesis has not yet been directly tested. Here we test the hypothesis that learning visual feature conjunctions involves the dopaminergic system, using functional neuroimaging, genetic assays, and behavioral testing techniques. We use a correlative approach to evaluate potential associations between individual differences in visual feature conjunction learning rate and individual differences in dopaminergic function as indexed by neuroimaging and genetic markers. We find a significant correlation between activity in the caudate nucleus (a component of the dopaminergic system connected to visual areas of the brain) and visual feature conjunction learning rate. Specifically, individuals who showed a larger difference in activity between positive and negative feedback on an unrelated cognitive task, indicative of a more reactive dopaminergic system, learned visual feature conjunctions more quickly than those who showed a smaller activity difference. This finding supports the hypothesis that the dopaminergic system is involved in visual learning, and suggests that visual feature conjunction learning could be closely related to associative learning. However, no significant, reliable correlations were found between feature conjunction learning and genotype or dopaminergic activity in any other regions of interest. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Short-term retention of visual information: Evidence in support of feature-based attention as an underlying mechanism.

    PubMed

    Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein

    2015-01-01

    Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936

  9. Object-based attention underlies the rehearsal of feature binding in visual working memory.

    PubMed

    Shen, Mowei; Huang, Xiang; Gao, Zaifeng

    2015-04-01

    Feature binding is a core concept in many research fields, including the study of working memory (WM). Over the past decade, it has been debated whether keeping the feature binding in visual WM consumes more visual attention than the constituent single features. Previous studies have only explored the contribution of domain-general attention or space-based attention in the binding process; no study so far has explored the role of object-based attention in retaining binding in visual WM. We hypothesized that object-based attention underlay the mechanism of rehearsing feature binding in visual WM. Therefore, during the maintenance phase of a visual WM task, we inserted a secondary mental rotation (Experiments 1-3), transparent motion (Experiment 4), or an object-based feature report task (Experiment 5) to consume the object-based attention available for binding. In line with the prediction of the object-based attention hypothesis, Experiments 1-5 revealed a more significant impairment for binding than for constituent single features. However, this selective binding impairment was not observed when inserting a space-based visual search task (Experiment 6). We conclude that object-based attention underlies the rehearsal of binding representation in visual WM. (c) 2015 APA, all rights reserved.

  10. Considering the Lives of Microbes in Microbial Communities.

    PubMed

    Shank, Elizabeth A

    2018-01-01

    Over the last decades, sequencing technologies have transformed our ability to investigate the composition and functional capacity of microbial communities. Even so, critical questions remain about these complex systems that cannot be addressed by the bulk, community-averaged data typically provided by sequencing methods. In this Perspective, I propose that future advances in microbiome research will emerge from considering "the lives of microbes": we need to create methods to explicitly interrogate how microbes exist and interact in native-setting-like microenvironments. This approach includes developing approaches that expose the phenotypic heterogeneity of microbes; exploring the effects of coculture cues on cellular differentiation and metabolite production; and designing visualization systems that capture features of native microbial environments while permitting the nondestructive observation of microbial interactions over space and time with single-cell resolution.

  11. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed. There is now a widespread need for visualization tools to present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce our 3D modeling program, Web3DMol, a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol will display and allow interactive manipulation of the 3D structure. Featured functions, such as sequence plot, fragment segmentation, measure tool and meta-information display, are offered for users to gain a better understanding of protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. The UCSC Genome Browser database: extensions and updates 2013.

    PubMed

    Meyer, Laurence R; Zweig, Ann S; Hinrichs, Angie S; Karolchik, Donna; Kuhn, Robert M; Wong, Matthew; Sloan, Cricket A; Rosenbloom, Kate R; Roe, Greg; Rhead, Brooke; Raney, Brian J; Pohl, Andy; Malladi, Venkat S; Li, Chin H; Lee, Brian T; Learned, Katrina; Kirkup, Vanessa; Hsu, Fan; Heitner, Steve; Harte, Rachel A; Haeussler, Maximilian; Guruvadoo, Luvina; Goldman, Mary; Giardine, Belinda M; Fujita, Pauline A; Dreszer, Timothy R; Diekhans, Mark; Cline, Melissa S; Clawson, Hiram; Barber, Galt P; Haussler, David; Kent, W James

    2013-01-01

    The University of California Santa Cruz (UCSC) Genome Browser (http://genome.ucsc.edu) offers online public access to a growing database of genomic sequence and annotations for a wide variety of organisms. The Browser is an integrated tool set for visualizing, comparing, analysing and sharing both publicly available and user-generated genomic datasets. As of September 2012, genomic sequence and a basic set of annotation 'tracks' are provided for 63 organisms, including 26 mammals, 13 non-mammal vertebrates, 3 invertebrate deuterostomes, 13 insects, 6 worms, yeast and sea hare. In the past year 19 new genome assemblies have been added, and we anticipate releasing another 28 in early 2013. Further, a large number of annotation tracks have been either added, updated by contributors or remapped to the latest human reference genome. Among these are an updated UCSC Genes track for human and mouse assemblies. We have also introduced several features to improve usability, including new navigation menus. This article provides an update to the UCSC Genome Browser database, which has been previously featured in the Database issue of this journal.

  13. FMAP: Functional Mapping and Analysis Pipeline for metagenomics and metatranscriptomics studies.

    PubMed

    Kim, Jiwoong; Kim, Min Soo; Koh, Andrew Y; Xie, Yang; Zhan, Xiaowei

    2016-10-10

    Given the lack of a complete and comprehensive library of microbial reference genomes, determining the functional profile of diverse microbial communities is challenging. The available functional analysis pipelines lack several key features: (i) an integrated alignment tool, (ii) operon-level analysis, and (iii) the ability to process large datasets. Here we introduce our open-sourced, stand-alone functional analysis pipeline for analyzing whole metagenomic and metatranscriptomic sequencing data, FMAP (Functional Mapping and Analysis Pipeline). FMAP performs alignment, gene family abundance calculations, and statistical analysis (three levels of analyses are provided: differentially-abundant genes, operons and pathways). The resulting output can be easily visualized with heatmaps and functional pathway diagrams. FMAP functional predictions are consistent with currently available functional analysis pipelines. FMAP is a comprehensive tool for providing functional analysis of metagenomic/metatranscriptomic sequencing data. With the added features of integrated alignment, operon-level analysis, and the ability to process large datasets, FMAP will be a valuable addition to the currently available functional analysis toolbox. We believe that this software will be of great value to the wider biology and bioinformatics communities.

  14. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    PubMed Central

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853
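
    The comparison described above is representational similarity analysis: the object-by-object dissimilarity structure of the fMRI response patterns is correlated with behavior-based visual and conceptual dissimilarity models. A minimal sketch with random placeholder data in place of the real patterns and models:

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_objects, n_voxels = 20, 100

        # Placeholder object-evoked activity patterns and behavioral models.
        patterns = rng.normal(size=(n_objects, n_voxels))
        visual_model = pdist(rng.normal(size=(n_objects, 5)))
        conceptual_model = pdist(rng.normal(size=(n_objects, 5)))

        neural_rdm = pdist(patterns, metric="correlation")
        for name, model in (("visual", visual_model),
                            ("conceptual", conceptual_model)):
            rho, p = spearmanr(neural_rdm, model)
            print(f"{name}: rho = {rho:.2f}, p = {p:.3f}")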

  15. GBshape: a genome browser database for DNA shape annotations.

    PubMed

    Chiu, Tsu-Pei; Yang, Lin; Zhou, Tianyin; Main, Bradley J; Parker, Stephen C J; Nuzhdin, Sergey V; Tullius, Thomas D; Rohs, Remo

    2015-01-01

    Many regulatory mechanisms require a high degree of specificity in protein-DNA binding. Nucleotide sequence does not provide an answer to the question of why a protein binds only to a small subset of the many putative binding sites in the genome that share the same core motif. Whereas higher-order effects, such as chromatin accessibility, cooperativity and cofactors, have been described, DNA shape recently gained attention as another feature that fine-tunes the DNA binding specificities of some transcription factor families. Our Genome Browser for DNA shape annotations (GBshape; freely available at http://rohslab.cmb.usc.edu/GBshape/) provides minor groove width, propeller twist, roll, helix twist and hydroxyl radical cleavage predictions for the entire genomes of 94 organisms. Additional genomes can easily be added using the GBshape framework. GBshape can be used to visualize DNA shape annotations qualitatively in a genome browser track format, and to download quantitative values of DNA shape features as a function of genomic position at nucleotide resolution. As biological applications, we illustrate the periodicity of DNA shape features that are present in nucleosome-occupied sequences from human, fly and worm, and we demonstrate structural similarities between transcription start sites in the genomes of four Drosophila species. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Action starring narratives and events: Structure and inference in visual narrative comprehension

    PubMed Central

    Cohn, Neil; Wittenberg, Eva

    2015-01-01

    Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence. PMID:26709362

  17. Action starring narratives and events: Structure and inference in visual narrative comprehension.

    PubMed

    Cohn, Neil; Wittenberg, Eva

    Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped "flashes" commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These "action star" panels depict a narrative culmination (a "Peak"), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence.

  18. Visualization and Enumeration of Bacteria Carrying a Specific Gene Sequence by In Situ Rolling Circle Amplification

    PubMed Central

    Maruyama, Fumito; Kenzaka, Takehiko; Yamaguchi, Nobuyasu; Tani, Katsuji; Nasu, Masao

    2005-01-01

    Rolling circle amplification (RCA) generates large single-stranded and tandem repeats of target DNA as amplicons. This technique was applied to in situ nucleic acid amplification (in situ RCA) to visualize and count single Escherichia coli cells carrying a specific gene sequence. The method features (i) one short target sequence (35 to 39 bp) that allows specific detection; (ii) maintaining constant fluorescent intensity of positive cells permeabilized extensively after amplicon detection by fluorescence in situ hybridization, which facilitates the detection of target bacteria in various physiological states; and (iii) reliable enumeration of target bacteria by concentration on a gelatin-coated membrane filter. To test our approach, the presence of the following genes was visualized by in situ RCA: the green fluorescent protein gene, the ampicillin resistance gene and the replication origin region on the multicopy pUC19 plasmid, as well as the single-copy Shiga-like toxin gene on chromosomes inside E. coli cells. Fluorescent antibody staining after in situ RCA also simultaneously identified cells harboring target genes and determined the specificity of in situ RCA. E. coli cells in a nonculturable state from a prolonged incubation were periodically sampled and used for a plasmid uptake study. The number of cells taking up plasmids determined by in situ RCA was up to 10⁶-fold higher than that measured by selective plating. In addition, in situ RCA allowed the detection of cells taking up plasmids even when colony-forming cells were not detected during the incubation period. By optimizing the cell permeabilization conditions for in situ RCA, this method can become a valuable tool for studying free DNA uptake, especially in nonculturable bacteria. PMID:16332770

  19. Preliminary results from an investigation of AIS-1 data over an area of epithermal alteration: Plateau, Northern Queensland, Australia

    NASA Technical Reports Server (NTRS)

    Mackin, Steve; Munday, Tim; Hook, Simon

    1987-01-01

    Airborne Imaging Spectrometer-1 (AIS-1) data were flown over undifferentiated sequences of acid to intermediate volcanics and intrusives; meta-sediments; and a series of partially lateritized sedimentary rocks. The area exhibits considerable spectral variability after the suppression of striping effects. Log residual and Internal Average Relative Reflectance (IARR) analytical techniques were used to enhance mineralogically related spectral features. Both methods produce similar results, but did not visually highlight mineral absorption features, owing to processing artifacts in areas of significant vegetation cover. The enhancement of mineral-related absorption features was achieved using a hybrid processing approach based on the relative reflectance differences between vegetated and non-vegetated surfaces at 1.2 and 2.1 microns. The result is an image with little overall contrast, but which enhances the more subtle spectral features believed to be associated with clays and epidote. The AIS data were subjected to interactive analysis using SPAM. Clear separation of clay- and epidote-related absorption features was apparent, and the identification of kaolinite was possible despite detrimental spectral effects.
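
    Both normalizations are easy to state: IARR divides each pixel spectrum by the scene-average spectrum, while the log residual removes per-pixel brightness and per-band scene-average terms in log space. A minimal sketch over a hypothetical radiance cube (bands on the last axis):

        import numpy as np

        def iarr(cube):
            """Internal Average Relative Reflectance: each pixel spectrum
            divided by the scene-average spectrum."""
            mean_spectrum = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
            return cube / mean_spectrum

        def log_residual(cube):
            """Log residual: subtract per-pixel and per-band means of the
            log radiance, adding back the overall mean."""
            log_cube = np.log(cube)
            pixel_mean = log_cube.mean(axis=-1, keepdims=True)
            band_mean = log_cube.reshape(-1, cube.shape[-1]).mean(axis=0)
            return log_cube - pixel_mean - band_mean + log_cube.mean()

        cube = np.random.rand(64, 64, 128) + 0.1  # placeholder radiance cube
        relative, residual = iarr(cube), log_residual(cube)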

  20. The Advanced Glaucoma Intervention Study (AGIS): 12. Baseline risk factors for sustained loss of visual field and visual acuity in patients with advanced glaucoma.

    PubMed

    2002-10-01

    To examine the relationships between baseline risk factors and sustained decrease of visual field (SDVF) and sustained decrease of visual acuity (SDVA). Cohort study of participants in the Advanced Glaucoma Intervention Study (AGIS). This multicenter study enrolled patients between 1988 and 1992 and followed them until 2001; 789 eyes of 591 patients with advanced glaucoma were randomly assigned to one of two surgical sequences, argon laser trabeculoplasty (ALT)-trabeculectomy-trabeculectomy (ATT) or trabeculectomy-ALT-trabeculectomy (TAT). This report is based on data from 747 eyes. Eyes were offered the next intervention in the sequence upon failure of the previous intervention. Failure was based on recurrent intraocular pressure elevation, visual field defect, and disk rim criteria. Study visits occurred every 6 months; potential follow-up ranged from 8 to 13 years. For each intervention sequence, Cox multiple regression analyses were used to examine the baseline characteristics for association with two vision outcomes: SDVF and SDVA. The magnitude of the association is measured by the hazard ratio (HR): for binary variables, HR is the hazard (or risk) of the outcome in eyes with the factor divided by the hazard in eyes without the factor; for continuous variables, HR is the multiplicative change in the hazard per unit increase in the factor. Characteristics associated with increased SDVF risk in the ATT sequence are: less baseline visual field defect (HR = 0.86, P <.001, 95% CI = 0.82-0.90), male gender (HR = 2.23, P <.001, 1.54-3.23), and worse baseline visual acuity (HR = 0.96, P =.001, 0.94-0.98); in the TAT sequence: less baseline visual field defect (HR = 0.93, P =.001, 0.89-0.97) and diabetes (HR = 1.87, P =.007, 1.18-2.97). Characteristics associated with increased SDVA risk in both treatment sequences are better baseline acuity (ATT: HR = 1.05, P <.001, 1.02-1.09; TAT: HR = 1.06, P <.001, 1.03-1.08), older age (ATT: HR = 1.05, P =.001, 1.02-1.08; TAT: HR = 1.04, P =.002, 1.01-1.06), and less formal education (ATT: HR = 1.92, P =.001, 1.29-2.88; TAT: HR = 1.77, P =.002, 1.22-2.54). For SDVF, risk factors were better baseline visual field in both treatment sequences, male gender and worse baseline visual acuity in the ATT sequence, and diabetes in the TAT sequence. For SDVA, risk factors in both treatment sequences were better baseline visual acuity, older age, and less formal education.
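
    Because the abstract leans heavily on hazard-ratio interpretation, a small worked example may help. The HR values are quoted from the abstract; the arithmetic is generic Cox-model bookkeeping, not part of the study.

    ```python
    import numpy as np

    # Binary factor (ATT sequence): male gender, HR = 2.23. The hazard of
    # sustained visual-field loss for men is 2.23 times that for women.
    hr_male = 2.23
    print(f"male vs female hazard: x {hr_male}")

    # Continuous factor: baseline visual-field defect, HR = 0.86 per unit.
    # Each extra unit of defect multiplies the hazard by 0.86, so *less*
    # defect corresponds to higher risk, as the abstract states.
    hr_per_unit = 0.86
    for k in (1, 5, 10):
        print(f"{k:2d}-unit increase in defect -> hazard x {hr_per_unit**k:.3f}")

    # In a Cox model the hazard is h(t|x) = h0(t) * exp(beta . x),
    # so each coefficient beta maps to HR = exp(beta) per unit of x.
    beta = np.log(hr_per_unit)
    print(f"implied log-hazard coefficient: beta = {beta:.3f}")
    ```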

  1. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

    Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  2. Manchester visual query language

    NASA Astrophysics Data System (ADS)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

    We report a database language for visual retrieval which allows queries on image feature information that has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator, which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high-scoring bins. The query could be directed towards one particular image or an entire image database; in the latter case, the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
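
    The abstract does not reproduce MVQL syntax, so the sketch below only mimics the described dataflow: a hypothetical Hough-group-like operator that maps a set of objects to a smaller, ranked set of bins. All names and signatures are invented for illustration.

    ```python
    from collections import defaultdict

    def hough_group(objects, to_bin, vote, top_k=5):
        """Hypothetical analogue of the Hough-group operator: map each
        input object to a bin in a parameter (Hough) space, accumulate
        votes under a caller-supplied rule, and return the highest-scoring
        bins -- a smaller set, as each MVQL line is described to produce."""
        accumulator = defaultdict(float)
        members = defaultdict(list)
        for obj in objects:
            b = to_bin(obj)
            accumulator[b] += vote(obj)
            members[b].append(obj)
        ranked = sorted(accumulator, key=accumulator.get, reverse=True)
        return [(b, accumulator[b], members[b]) for b in ranked[:top_k]]

    # Usage: group edge segments by quantised angle to find dominant lines.
    edges = [{"angle": a, "len": l} for a, l in [(0.1, 5), (0.12, 7), (1.5, 2)]]
    bins = hough_group(edges,
                       to_bin=lambda e: round(e["angle"], 1),
                       vote=lambda e: e["len"])
    print(bins[0])  # highest-scoring bin: angle ~0.1, vote weight 12
    ```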

  3. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    PubMed

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real-world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
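
    Step #1 as described is independent per sample and thus trivially parallel. A hedged sketch of such a driver follows; the command names and flags are placeholders, not QuickRNASeq's actual interface, and commands are printed as a dry run rather than executed.

    ```python
    from concurrent.futures import ProcessPoolExecutor

    # Hypothetical per-sample Step #1 pipeline: align, count, QC, call SNPs.
    # Swap print for subprocess.run(cmd.split(), check=True) to execute.
    STEPS = [
        "aligner --ref {ref} --reads {fq} --out {s}.bam",
        "counter --bam {s}.bam --gtf {gtf} --out {s}.counts",
        "qc-tool --bam {s}.bam --out {s}.qc",
        "snp-caller --bam {s}.bam --ref {ref} --out {s}.vcf",
    ]

    def process_sample(args):
        sample, fq = args
        for tmpl in STEPS:
            cmd = tmpl.format(s=sample, fq=fq, ref="genome.fa", gtf="genes.gtf")
            print(f"[{sample}] {cmd}")   # dry run; one command per step
        return sample

    if __name__ == "__main__":
        samples = [("s1", "s1.fastq.gz"), ("s2", "s2.fastq.gz")]
        # Samples are independent, so Step #1 parallelises trivially;
        # Step #2 (merging) would consume the per-sample outputs.
        with ProcessPoolExecutor() as pool:
            done = list(pool.map(process_sample, samples))
        print("finished:", done)
    ```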

  4. Variant Review with the Integrative Genomics Viewer.

    PubMed

    Robinson, James T; Thorvaldsdóttir, Helga; Wenger, Aaron M; Zehir, Ahmet; Mesirov, Jill P

    2017-11-01

    Manual review of aligned reads for confirmation and interpretation of variant calls is an important step in many variant calling pipelines for next-generation sequencing (NGS) data. Visual inspection can greatly increase the confidence in calls, reduce the risk of false positives, and help characterize complex events. The Integrative Genomics Viewer (IGV) was one of the first tools to provide NGS data visualization, and it currently provides a rich set of tools for inspection, validation, and interpretation of NGS datasets, as well as other types of genomic data. Here, we present a short overview of IGV's variant review features for both single-nucleotide variants and structural variants, with examples from both cancer and germline datasets. IGV is freely available at https://www.igv.org. Cancer Res; 77(21); e31-e34. ©2017 American Association for Cancer Research.

  5. Visual search, visual streams, and visual architectures.

    PubMed

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  6. Application of a brain-computer interface for person authentication using EEG responses to photo stimuli.

    PubMed

    Mu, Zhendong; Yin, Jinhai; Hu, Jianfeng

    2018-01-01

    In this paper, a person authentication system that can effectively identify individuals by generating unique electroencephalogram (EEG) signal features in response to self-face and non-self-face photos is presented. To achieve stable performance, the position of the self-face photo within the serial sequence of visual stimuli (first occurrence versus later occurrences) is taken into account. In addition, a Fisher linear classification method applied to event-related potential (ERP) features is adopted, yielding markedly better outcomes than most existing methods in the field. The results show that EEG-based person authentication via a brain-computer interface can be considered a suitable approach for biometric authentication systems.
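
    A minimal sketch of the named classifier, Fisher linear discriminant analysis over ERP-style feature vectors, is shown below using scikit-learn. The shapes and random data are stand-ins, not the study's recordings.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical shapes: 200 trials x (32 channels x 50 time samples) of
    # post-stimulus EEG, flattened into ERP feature vectors. Labels mark
    # self-face (1) vs non-self-face (0) trials.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32 * 50))
    y = rng.integers(0, 2, size=200)

    # Fisher/linear discriminant analysis, as named in the abstract;
    # shrinkage helps when features outnumber trials, as here.
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"CV accuracy: {scores.mean():.2f} (chance ~0.50 on random data)")
    ```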

  7. A brief introduction to web-based genome browsers.

    PubMed

    Wang, Jun; Kong, Lei; Gao, Ge; Luo, Jingchu

    2013-03-01

    A genome browser provides a graphical interface for users to browse, search, retrieve and analyze genomic sequence and annotation data. Web-based genome browsers can be classified into general genome browsers supporting multiple species and species-specific genome browsers. In this review, we attempt to give an overview of the main functions and features of web-based genome browsers, covering data visualization, retrieval, analysis and customization. To introduce multiple-species genome browsers, we describe the user interface and main functions of the Ensembl and UCSC genome browsers, using the human alpha-globin gene cluster as an example. We further use the MSU and the Rice-Map genome browsers to show some special features of species-specific genome browsers, taking the rice transcription factor gene OsSPL14 as an example.

  8. EventThread: Visual Summarization and Stage Analysis of Event Sequence Data.

    PubMed

    Guo, Shunan; Xu, Ke; Zhao, Rongwen; Gotz, David; Zha, Hongyuan; Cao, Nan

    2018-01-01

    Event sequence data, such as electronic health records, a person's academic records, or car service records, are ordered series of events that have occurred over a period of time. Analyzing collections of event sequences can reveal common or semantically important sequential patterns. For example, event sequence analysis might reveal frequently used care plans for treating a disease, typical publishing patterns of professors, and the patterns of service that result in a well-maintained car. It is challenging, however, to visually explore large numbers of event sequences, or sequences with large numbers of event types. Existing methods focus on extracting explicitly matching patterns of events using statistical analysis to create stages of event progression over time. However, these methods fail to capture latent clusters of similar but not identical evolutions of event sequences. In this paper, we introduce a novel visualization system named EventThread which clusters event sequences into threads based on tensor analysis and visualizes the latent stage categories and evolution patterns by interactively grouping the threads by similarity into time-specific clusters. We demonstrate the effectiveness of EventThread through usage scenarios in three different application domains and via interviews with an expert user.
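
    A loose analogue of the tensor step can be sketched as follows, assuming event sequences are binned into a (sequence x event type x time window) count tensor; it uses the tensorly and scikit-learn libraries and is not the authors' implementation.

    ```python
    import numpy as np
    from tensorly.decomposition import parafac
    from sklearn.cluster import KMeans

    # Toy data: 60 sequences, 8 event types, 12 time windows of counts.
    rng = np.random.default_rng(1)
    tensor = rng.poisson(1.0, size=(60, 8, 12)).astype(float)

    # CP decomposition gives each sequence a low-dimensional loading.
    weights, factors = parafac(tensor, rank=3)
    seq_loadings = factors[0]            # (60 sequences x 3 latent factors)

    # Cluster sequences into "threads" by their latent loadings.
    threads = KMeans(n_clusters=4, n_init=10).fit_predict(seq_loadings)
    print(np.bincount(threads))          # sizes of the four thread clusters
    ```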

  9. Update on Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) has been updated. RSVP was reported in "Rover Sequencing and Visualization Program" (NPO-30845), NASA Tech Briefs, Vol. 29, No. 4 (April 2005), page 38. To recapitulate: RSVP is the software tool to be used in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (robotic arm) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink, which are stored in the Distributed Object Manager (DOM), and predicted rover state histories, stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities.

  10. Model-Based Analysis of Flow-Mediated Dilation and Intima-Media Thickness

    PubMed Central

    Bartoli, G.; Menegaz, G.; Lisi, M.; Di Stolfo, G.; Dragoni, S.; Gori, T.

    2008-01-01

    We present an end-to-end system for the automatic measurement of flow-mediated dilation (FMD) and intima-media thickness (IMT) for the assessment of arterial function. The video sequences are acquired from a B-mode echographic scanner. A spline model (deformable template) is fitted to the data to detect the artery boundaries and track them along the video sequence. A priori knowledge about the image features and content is exploited. Preprocessing is performed to improve both the visual quality of video frames for visual inspection and the performance of the segmentation algorithm, without affecting the accuracy of the measurements. The system allows real-time processing as well as a high level of interactivity with the user. This is obtained by a graphical user interface (GUI) enabling the cardiologist to supervise the whole process and, if necessary, to reset the contour extraction at any point in time. The system was validated, and the accuracy, reproducibility, and repeatability of the measurements were assessed with extensive in vivo experiments. Together with its user friendliness, low cost, and robustness, this makes the system suitable for both research and daily clinical use. PMID:19360110
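
    The deformable-template idea can be suggested with an ordinary smoothing spline: fit a spline to noisy boundary samples and evaluate it densely to obtain a smooth, trackable contour. The sketch below uses SciPy on synthetic points and is not the authors' model.

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Synthetic "artery wall" boundary samples with measurement noise.
    t = np.linspace(0, 2 * np.pi, 80)
    x = 100 + 40 * np.cos(t)
    y = 50 + 5 * np.sin(t) + np.random.default_rng(2).normal(0, 0.5, t.size)

    # Periodic smoothing spline through the noisy samples...
    tck, u = splprep([x, y], s=len(x) * 0.4, per=1)
    # ...evaluated densely to give a smooth closed contour.
    xs, ys = splev(np.linspace(0, 1, 400), tck)

    # IMT/FMD-style measurements would then be distances between two such
    # contours (e.g. lumen-intima and media-adventitia) in each frame.
    print(f"contour points: {len(xs)}")
    ```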

  11. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications for which automatic data processing can exceed human visual cognition capabilities, and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at long range, multi-object tracking, and object re-identification in range image sequences. Processing times and recognition performance are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model-based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  12. Temporal Statistics of Natural Image Sequences Generated by Movements with Insect Flight Characteristics

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens Peter; Egelhaaf, Martin

    2014-01-01

    Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational effort required to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study systematically analyzes the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency up to two orders of magnitude larger than translational movements at velocities that are characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments. PMID:25340761
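
    Per-frame image parameters of the kind compared here (brightness, contrast, spatial-frequency content) reduce to a few lines of NumPy. The sketch below is a generic illustration on synthetic frames, not the study's analysis code.

    ```python
    import numpy as np

    def frame_statistics(frame):
        """Mean brightness, RMS contrast, and a crude spectral-centroid
        summary of spatial-frequency content for one grayscale frame."""
        brightness = frame.mean()
        contrast = frame.std() / (frame.mean() + 1e-12)   # RMS contrast
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
        fy, fx = np.indices(spectrum.shape)
        cy, cx = (s // 2 for s in spectrum.shape)
        radius = np.hypot(fy - cy, fx - cx)
        sf_centroid = (radius * spectrum).sum() / spectrum.sum()
        return brightness, contrast, sf_centroid

    # Usage on a synthetic "sequence": statistics fluctuate frame to frame.
    rng = np.random.default_rng(3)
    sequence = rng.random((5, 64, 64))
    for i, f in enumerate(sequence):
        b, c, s = frame_statistics(f)
        print(f"frame {i}: brightness={b:.3f} contrast={c:.3f} sf={s:.1f}")
    ```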

  13. Perceptual learning in visual search: fast, enduring, but non-specific.

    PubMed

    Sireteanu, R; Rettenbach, R

    1995-07-01

    Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.

  14. JVM: Java Visual Mapping tool for next generation sequencing read.

    PubMed

    Yang, Ye; Liu, Juan

    2015-01-01

    We developed a program, JVM (Java Visual Mapping), for mapping next-generation sequencing reads to a reference sequence. The program is implemented in Java and is designed to deal with the millions of short reads generated by Illumina sequencing technology. It employs a seed index strategy and octal encoding operations for sequence alignment. JVM is useful for DNA-Seq and RNA-Seq when dealing with single-end resequencing. JVM is a desktop application which supports read volumes from 1 MB to 10 GB.
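
    A generic seed-and-extend sketch of the kind of seed index strategy named here (hypothetical, not JVM's code): index every k-mer of the reference, then place a read at the offset that most of its seeds vote for.

    ```python
    import random
    from collections import defaultdict

    K = 11

    def build_seed_index(reference, k=K):
        """Index every k-mer of the reference (the 'seed index')."""
        index = defaultdict(list)
        for i in range(len(reference) - k + 1):
            index[reference[i:i + k]].append(i)
        return index

    def map_read(read, index, k=K):
        """Vote for the alignment offset implied by each exact seed hit."""
        votes = defaultdict(int)
        for j in range(len(read) - k + 1):
            for pos in index.get(read[j:j + k], ()):
                votes[pos - j] += 1
        return max(votes, key=votes.get) if votes else None

    random.seed(0)
    ref = "".join(random.choice("ACGT") for _ in range(500))
    idx = build_seed_index(ref)
    print(map_read(ref[123:173], idx))   # a 50 bp read from offset 123 -> 123
    ```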

  15. CFGP: a web-based, comparative fungal genomics platform.

    PubMed

    Park, Jongsun; Park, Bongsoo; Jung, Kyongyong; Jang, Suwang; Yu, Kwangyul; Choi, Jaeyoung; Kong, Sunghyung; Park, Jaejin; Kim, Seryun; Kim, Hyojeong; Kim, Soonok; Kim, Jihyun F; Blair, Jaime E; Lee, Kwangwon; Kang, Seogchan; Lee, Yong-Hwan

    2008-01-01

    Since the completion of the Saccharomyces cerevisiae genome sequencing project in 1996, the genomes of over 80 fungal species have been sequenced or are currently being sequenced. The resulting data provide opportunities for studying and comparing fungal biology and evolution at the genome level. To support such studies, the Comparative Fungal Genomics Platform (CFGP; http://cfgp.snu.ac.kr), a web-based multifunctional informatics workbench, was developed. The CFGP comprises three layers: the basal layer, the middleware and the user interface. The data warehouse in the basal layer contains standardized genome sequences of 65 fungal species. The middleware processes queries via six analysis tools, including BLAST, ClustalW, InterProScan, SignalP 3.0, PSORT II and a newly developed tool named BLASTMatrix. BLASTMatrix permits the identification and visualization of genes homologous to a query across multiple species. The Data-driven User Interface (DUI) of the CFGP was built on a new concept of pre-collecting data and post-executing analysis, instead of the 'fill-in-the-form-and-press-SUBMIT' user interfaces utilized by most bioinformatics sites. A tool termed Favorite, which supports the management of encapsulated sequence data and provides a personalized data repository to users, is another novel feature of the DUI.

  16. Impaired visual search in rats reveals cholinergic contributions to feature binding in visuospatial attention.

    PubMed

    Botly, Leigh C P; De Rosa, Eve

    2012-10-01

    The visual search task established the feature integration theory of attention in humans and measures visuospatial attentional contributions to feature binding. We recently demonstrated that the neuromodulator acetylcholine (ACh), from the nucleus basalis magnocellularis (NBM), supports the attentional processes required for feature binding using a rat digging-based task. Additional research has demonstrated cholinergic contributions from the NBM to visuospatial attention in rats. Here, we combined these lines of evidence and employed visual search in rats to examine whether cortical cholinergic input supports visuospatial attention specifically for feature binding. We trained 18 male Long-Evans rats to perform visual search using touch screen-equipped operant chambers. Sessions comprised Feature Search (no feature binding required) and Conjunctive Search (feature binding required) trials using multiple stimulus set sizes. Following acquisition of visual search, 8 rats received bilateral NBM lesions using 192 IgG-saporin to selectively reduce cholinergic afferentation of the neocortex, which we hypothesized would selectively disrupt the visuospatial attentional processes needed for efficient conjunctive visual search. As expected, relative to sham-lesioned rats, ACh-NBM-lesioned rats took significantly longer to locate the target stimulus on Conjunctive Search, but not Feature Search trials, thus demonstrating that cholinergic contributions to visuospatial attention are important for feature binding in rats.

  17. DEEP MOTIF DASHBOARD: VISUALIZING AND UNDERSTANDING GENOMIC SEQUENCES USING DEEP NEURAL NETWORKS.

    PubMed

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2017-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard), which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs as well as the dependencies among them.
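
    The first strategy, a gradient saliency map, is easy to sketch. The toy convolutional model and shapes below are stand-ins rather than the paper's architectures; only the derivative-of-output-with-respect-to-input idea is taken from the abstract.

    ```python
    import torch

    torch.manual_seed(0)
    # Stand-in TFBS scorer over one-hot DNA (4 channels x sequence length).
    model = torch.nn.Sequential(
        torch.nn.Conv1d(4, 8, kernel_size=5, padding=2),
        torch.nn.ReLU(),
        torch.nn.AdaptiveMaxPool1d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(8, 1),
    )

    seq_len = 40
    x = torch.zeros(1, 4, seq_len)
    x[0, torch.randint(0, 4, (seq_len,)), torch.arange(seq_len)] = 1.0
    x.requires_grad_(True)

    score = model(x).squeeze()   # "binding" score for the input sequence
    score.backward()             # first-order derivatives w.r.t. the input

    # Per-nucleotide importance: gradient at the active base per position.
    saliency = (x.grad * x).sum(dim=1).abs().squeeze()
    print(saliency.argmax().item(), "is the most influential position")
    ```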

  18. A frontal but not parietal neural correlate of auditory consciousness.

    PubMed

    Brancucci, Alfredo; Lugli, Victor; Perrucci, Mauro Gianni; Del Gratta, Cosimo; Tommasi, Luca

    2016-01-01

    Hemodynamic correlates of consciousness were investigated in humans during the presentation of a dichotic sequence inducing illusory auditory percepts with features analogous to visual multistability. The sequence consisted of a variation of the original stimulation eliciting Deutsch's octave illusion, created to maintain a stable illusory percept long enough to allow the detection of the underlying hemodynamic activity using functional magnetic resonance imaging (fMRI). Two specular (mirror-image) 500 ms dichotic stimuli (400 and 800 Hz) presented in alternation by means of earphones cause an illusory segregation of pitch and ear of origin, which can yield up to four different auditory percepts per dichotic stimulus. Such percepts are maintained stable when one of the two dichotic stimuli is presented repeatedly for 6 s, immediately after the alternation. We observed hemodynamic activity specifically accompanying the conscious experience of pitch in a bilateral network including the superior frontal gyrus (SFG; BA9 and BA10), medial frontal gyrus (MFG; BA6 and BA9), insula (BA13), and posterior lateral nucleus of the thalamus. The conscious experience of side (ear of origin) was instead specifically accompanied by bilateral activity in the MFG (BA6), STG (BA41), parahippocampal gyrus (BA28), and insula (BA13). These results suggest that the neural substrate of auditory consciousness, differently from that of visual consciousness, may rest upon a fronto-temporal rather than a fronto-parietal network. Moreover, they indicate that the neural correlates of consciousness depend on the specific features of the stimulus, and they suggest the SFG-MFG and the insula as important cortical nodes for auditory conscious experience.

  19. Research resource: Update and extension of a glycoprotein hormone receptors web application.

    PubMed

    Kreuchwig, Annika; Kleinau, Gunnar; Kreuchwig, Franziska; Worth, Catherine L; Krause, Gerd

    2011-04-01

    The SSFA-GPHR (Sequence-Structure-Function-Analysis of Glycoprotein Hormone Receptors) database provides a comprehensive set of mutation data for the glycoprotein hormone receptors (covering the lutropin, the FSH, and the TSH receptors). Moreover, it provides a platform for comparison and investigation of these homologous receptors and helps in understanding protein malfunctions associated with several diseases. Besides extending the data set (> 1100 mutations), the database has been completely redesigned and several novel features and analysis tools have been added to the web site. These tools allow the focused extraction of semiquantitative mutant data from the GPHR subtypes and different experimental approaches. Functional and structural data of the GPHRs are now linked interactively at the web interface, and new tools for data visualization (on three-dimensional protein structures) are provided. The interpretation of functional findings is supported by receptor morphings simulating intramolecular changes during the activation process, which thus help to trace the potential function of each amino acid and provide clues to the local structural environment, including potentially relocated spatial counterpart residues. Furthermore, double and triple mutations are newly included to allow the analysis of their functional effects related to their spatial interrelationship in structures or homology models. A new important feature is the search option and data visualization by interactive and user-defined snake-plots. These new tools allow fast and easy searches for specific functional data and thereby give deeper insights in the mechanisms of hormone binding, signal transduction, and signaling regulation. The web application "Sequence-Structure-Function-Analysis of GPHRs" is accessible on the internet at http://www.ssfa-gphr.de/.

  20. Evaluation of sequence alignments and oligonucleotide probes with respect to three-dimensional structure of ribosomal RNA using ARB software package

    PubMed Central

    Kumar, Yadhu; Westram, Ralf; Kipfer, Peter; Meier, Harald; Ludwig, Wolfgang

    2006-01-01

    Background: The availability of high-resolution RNA crystal structures for the 30S and 50S ribosomal subunits and the subsequent validation of comparative secondary structure models have prompted biologists to use the three-dimensional structure of ribosomal RNA (rRNA) for evaluating sequence alignments of rRNA genes. Furthermore, the secondary and tertiary structural features of rRNA are highly useful and successfully employed in designing rRNA-targeted oligonucleotide probes intended for in situ hybridization experiments. RNA3D, a program to combine sequence alignment information with the three-dimensional structure of rRNA, was developed. Integration into the ARB software package, which is used extensively by the scientific community for phylogenetic analysis and molecular probe design, has substantially extended the functionality of the ARB software suite with a 3D environment. Results: The three-dimensional structure of rRNA is visualized in an OpenGL 3D environment with the ability to change the display and overlay information onto the molecule dynamically. Phylogenetic information derived from the multiple sequence alignments can be overlaid onto the molecular structure in real time. Superimposition of both statistical and non-statistical sequence-associated information onto the rRNA 3D structure can be done using a customizable color scheme, which is also applied to a textual sequence alignment for reference. Oligonucleotide probes designed by ARB probe design tools can be mapped onto the 3D structure, along with the probe accessibility models, for evaluation with respect to secondary and tertiary structural conformations of rRNA. Conclusion: Visualization of the three-dimensional structure of rRNA in an intuitive display provides biologists with greater possibilities for structure-based phylogenetic analysis. Coupled with secondary structure models of rRNA, the RNA3D program aids in validating the sequence alignments of rRNA genes and evaluating probe target sites. Superimposing information derived from multiple sequence alignments onto the molecule dynamically allows researchers to observe sequence-inherited characteristics (phylogenetic information) in a real-time environment. The extended ARB software package is made freely available to the scientific community. PMID:16672074

  1. Visual mental image generation does not overlap with visual short-term memory: a dual-task interference study.

    PubMed

    Borst, Gregoire; Niven, Elaine; Logie, Robert H

    2012-04-01

    Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.

  2. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
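
    The logic of the model comparison can be illustrated with synthetic data: build predicted responses under a numerosity-tuned model and under a low-level covariate model, then compare variance explained. This is a toy illustration, not the pRF fitting procedure itself; all numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Four stimulus "configurations" sharing one numerosity progression,
    # while a low-level covariate (total area) varies across them.
    numerosity = np.tile(np.arange(1, 8), 4)
    total_area = numerosity * rng.uniform(0.5, 2.0, numerosity.size)

    def log_gaussian_tuning(n, preferred=3.0, width=0.6):
        """Numerosity-tuned response, log-Gaussian around a preferred value."""
        return np.exp(-(np.log(n) - np.log(preferred)) ** 2 / (2 * width ** 2))

    # "Observed" responses: numerosity-tuned plus measurement noise.
    observed = log_gaussian_tuning(numerosity) + rng.normal(0, 0.05, numerosity.size)

    def variance_explained(pred, data):
        r = np.corrcoef(pred, data)[0, 1]
        return r ** 2

    print("numerosity model R^2:", round(variance_explained(
        log_gaussian_tuning(numerosity), observed), 3))
    print("area model R^2:      ", round(variance_explained(total_area, observed), 3))
    ```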

  3. Brain functional network connectivity based on a visual task: visual information processing-related brain regions are significantly activated in the task state.

    PubMed

    Yang, Yan-Li; Deng, Hong-Xia; Xing, Gui-Yang; Xia, Xiao-Luan; Li, Hai-Fang

    2015-02-01

    It is not clear whether methods from functional brain-network research can be applied to explore the feature-binding mechanism of visual perception. In this study, we investigated feature binding of color and shape in visual perception. Functional magnetic resonance imaging data were collected from 38 healthy volunteers at rest and while performing a visual perception task, to construct brain networks active during the resting and task states. Results showed that brain regions involved in visual information processing were significantly activated during the task. Network components were partitioned using a greedy algorithm, indicating that the visual network exists in the resting state. Z-values in the vision-related brain regions were calculated, confirming the dynamic balance of the brain network. Connectivity between brain regions was determined, and the results showed that the occipital and lingual gyri were stable brain regions in the visual system network, that the parietal lobe played a very important role in the binding of color features and shape features, and that the fusiform and inferior temporal gyri were crucial for processing color and shape information. These findings indicate that understanding visual feature binding and cognitive processes will help establish computational models of vision, improve image recognition technology, and provide a new theoretical mechanism for feature binding in visual perception.
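
    As a sketch of the partitioning step, a generic greedy modularity algorithm over a weighted connectivity graph is shown below; the study's exact algorithm is not specified here, and the node names and weights are invented.

    ```python
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # Nodes stand in for brain regions, weighted edges for functional
    # connectivity strengths between them.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("occipital", "lingual", 0.9), ("occipital", "fusiform", 0.7),
        ("lingual", "fusiform", 0.6), ("parietal", "inf_temporal", 0.5),
        ("parietal", "fusiform", 0.4), ("frontal", "parietal", 0.3),
    ])

    # Greedy modularity maximization partitions the graph into modules.
    communities = greedy_modularity_communities(G, weight="weight")
    for i, c in enumerate(communities):
        print(f"module {i}: {sorted(c)}")
    ```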

  4. Direct evidence for attention-dependent influences of the frontal eye-fields on feature-responsive visual cortex.

    PubMed

    Heinen, Klaartje; Feredoes, Eva; Weiskopf, Nikolaus; Ruff, Christian C; Driver, Jon

    2014-11-01

    Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to the right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS induced BOLD signal increases in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform area (FFA) when faces were attended. These TMS effects on the BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the role of the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property. © The Author 2013. Published by Oxford University Press.

  5. Cross-Modal Retrieval With CNN Visual Features: A New Baseline.

    PubMed

    Wei, Yunchao; Zhao, Yao; Lu, Canyi; Wei, Shikui; Liu, Luoqi; Zhu, Zhenfeng; Yan, Shuicheng

    2017-02-01

    Recently, convolutional neural network (CNN) visual features have demonstrated their powerful ability as a universal representation for various recognition tasks. In this paper, cross-modal retrieval with CNN visual features is implemented with several classic methods. Specifically, off-the-shelf CNN visual features are extracted from a CNN model, pretrained on ImageNet with more than one million images from 1000 object categories, as a generic image representation to tackle cross-modal retrieval. To further enhance the representational ability of the CNN visual features, based on the pretrained CNN model on ImageNet, a fine-tuning step is performed using the open source Caffe CNN library for each target data set. In addition, we propose a deep semantic matching method to address the cross-modal retrieval problem with respect to samples which are annotated with one or multiple labels. Extensive experiments on five popular publicly available data sets demonstrate the superiority of CNN visual features for cross-modal retrieval.
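
    The off-the-shelf feature extraction step is easy to sketch. The paper used the Caffe library; the equivalent idea in torchvision (assuming torchvision >= 0.13 for the weights API) looks roughly like this:

    ```python
    import torch
    from torchvision import models

    # ImageNet-pretrained backbone with the 1000-way classifier removed;
    # the penultimate activations serve as a generic visual feature.
    weights = models.ResNet18_Weights.IMAGENET1K_V1
    backbone = models.resnet18(weights=weights)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = weights.transforms()      # the matching normalization

    img = torch.rand(3, 224, 224)          # stand-in for a real image
    with torch.no_grad():
        feat = backbone(preprocess(img).unsqueeze(0))
    print(feat.shape)                      # 512-D visual feature vector
    ```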

  6. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  7. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening- and closing-based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
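
    Both records build on opening/closing-based toggle operators. A minimal single-scale sketch of that morphological machinery follows; the paper's alternating operators and fuzzy-measure weighting are more elaborate, so the max-based combination here is only a stand-in.

    ```python
    import numpy as np
    from scipy import ndimage

    def toggle_contrast(img, size=5):
        """Classic toggle-contrast mapping: snap each pixel to whichever
        of the grayscale opening/closing is closer to the original."""
        opened = ndimage.grey_opening(img, size=(size, size))
        closed = ndimage.grey_closing(img, size=(size, size))
        return np.where(closed - img < img - opened, closed, opened)

    def bright_dark_features(img, size=5):
        """Bright and dark residues, a single-scale stand-in for the
        multi-scale features extracted in the paper."""
        opened = ndimage.grey_opening(img, size=(size, size))
        closed = ndimage.grey_closing(img, size=(size, size))
        return img - opened, closed - img

    rng = np.random.default_rng(5)
    infrared, visual = rng.random((2, 64, 64))
    bright_ir, _ = bright_dark_features(infrared)
    bright_vis, _ = bright_dark_features(visual)
    # naive max-based combination in place of the fuzzy-measure weights
    fused_detail = np.maximum(bright_ir, bright_vis)
    print(fused_detail.shape)
    ```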

  8. Robot Acting on Moving Bodies (RAMBO): Interaction with tumbling objects

    NASA Technical Reports Server (NTRS)

    Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madhu; Harwood, David

    1989-01-01

    Interaction with tumbling objects will become more common as human activities in space expand. When attempting to interact with a large, complex object translating and rotating in space, a human operator using only visual and mental capacities may not be able to estimate the object's motion, plan actions, or control those actions. A robot system (RAMBO) equipped with a camera, which, given a sequence of simple tasks, can perform these tasks on a tumbling object, is being developed. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using dynamic interpolation between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.

  9. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual content can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.
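
    As a much-simplified stand-in for the proposed model (no structured output learning, no hypergraph regularizer), a linear scorer over concatenated click and visual features can be trained with a pairwise hinge update so that clicked images rank above unclicked ones:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n, d_click, d_visual = 200, 5, 20
    # Concatenated click-derived and visual features per image (synthetic).
    X = np.hstack([rng.random((n, d_click)), rng.random((n, d_visual))])
    relevant = rng.random(n) < 0.3        # toy click-derived relevance

    w = np.zeros(X.shape[1])
    lr, margin = 0.1, 1.0
    for _ in range(500):
        i = rng.choice(np.flatnonzero(relevant))     # a clicked image
        j = rng.choice(np.flatnonzero(~relevant))    # an unclicked image
        if X[i] @ w - X[j] @ w < margin:             # margin violated
            w += lr * (X[i] - X[j])                  # push the pair apart

    scores = X @ w
    print("mean score (clicked)  :", scores[relevant].mean().round(2))
    print("mean score (unclicked):", scores[~relevant].mean().round(2))
    ```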

  10. Visual search for feature and conjunction targets with an attention deficit.

    PubMed

    Arguin, M; Joanette, Y; Cavanagh, P

    1993-01-01

    Brain-damaged subjects who had previously been identified as suffering from a visual attention deficit for contralesional stimulation were tested on a series of visual search tasks. The experiments examined the hypothesis that the processing of single features is preattentive but that feature integration, necessary for the correct perception of conjunctions of features, requires attention (Treisman & Gelade, 1980; Treisman & Sato, 1990). Subjects searched for a feature target (orientation or color) or for a conjunction target (orientation and color) in unilateral displays in which the number of items presented was variable. Ocular fixation was controlled so that trials on which eye movements occurred were cancelled. While brain-damaged subjects with a visual attention disorder (VAD subjects) performed similarly to normal controls in feature search tasks, they showed a marked deficit in conjunction search. Specifically, VAD subjects exhibited a substantial reduction of their serial search rates for a conjunction target with contralesional displays. In support of Treisman's feature integration theory, a visual attention deficit leads to a marked impairment in feature integration, whereas it does not appear to affect feature encoding.

  11. Dynamic binding of visual features by neuronal/stimulus synchrony.

    PubMed

    Iwabuchi, A

    1998-05-01

    When people see a visual scene, certain parts of the scene are treated as belonging together and are regarded as a perceptual unit, called a "figure". People focus on figures, and the remaining parts of the scene are disregarded as "ground". In Gestalt psychology this process is called "figure-ground segregation". According to current perceptual psychology, a figure is formed by binding various visual features in a scene, and developments in neuroscience have revealed that there are many feature-encoding neurons, which respond to such features specifically. It is not known, however, how the brain binds different features of an object into a coherent visual object representation. Recently, the theory of binding by neuronal synchrony, which argues that feature binding is dynamically mediated by neuronal synchrony of feature-encoding neurons, has been proposed. This review article portrays the problems of figure-ground segregation and feature binding, summarizes neurophysiological and psychophysical experiments and theory relevant to feature binding by neuronal/stimulus synchrony, and suggests possible directions for future research on this topic.

  12. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compression ratios (30-40:1) that exceed those of all other technologies. These capabilities are associated with unprecedented coding efficiencies: coding and decoding operations are entirely linear with respect to image size and entail a complexity 1-2 orders of magnitude lower than that of any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while also reducing information along an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  14. Evidence for Two Attentional Components in Visual Working Memory

    ERIC Educational Resources Information Center

    Allen, Richard J.; Baddeley, Alan D.; Hitch, Graham J.

    2014-01-01

    How does executive attentional control contribute to memory for sequences of visual objects, and what does this reveal about storage and processing in working memory? Three experiments examined the impact of a concurrent executive load (backward counting) on memory for sequences of individually presented visual objects. Experiments 1 and 2 found…

  15. Feature Masking in Computer Game Promotes Visual Imagery

    ERIC Educational Resources Information Center

    Smith, Glenn Gordon; Morey, Jim; Tjoe, Edwin

    2007-01-01

    Can learning of mental imagery skills for visualizing shapes be accelerated with feature masking? Chemistry, physics, fine arts, military tactics, and laparoscopic surgery often depend on mentally visualizing shapes in their absence. Does working with "spatial feature-masks" (skeletal shapes, missing key identifying portions) encourage people to…

  16. Computer-aided Classification of Mammographic Masses Using Visually Sensitive Image Features

    PubMed Central

    Wang, Yunzhi; Aghaei, Faranak; Zarafshani, Ali; Qiu, Yuchen; Qian, Wei; Zheng, Bin

    2017-01-01

    Purpose: To develop a new computer-aided diagnosis (CAD) scheme that computes the visually sensitive image features routinely used by radiologists, in order to build a machine learning classifier that distinguishes between malignant and benign breast masses detected on digital mammograms. Methods: An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features from the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show the lesion segmentation, the computed feature values and the classification score. Results: Areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion: This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a "visual aid" interface, CAD results may be much easier to explain to observers, increasing their confidence in CAD-generated classification results relative to conventional CAD approaches, which involve many complicated and visually insensitive texture features. PMID:27911353
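
    The classification-and-fusion logic is easy to mock up: a logistic model over five features per view, with the two views' scores averaged. The data below are synthetic stand-ins, not the 301-mass dataset.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 301
    y = rng.integers(0, 2, n)                           # benign=0 / malignant=1
    view1 = rng.normal(y[:, None] * 0.6, 1.0, (n, 5))   # 5 features, view 1
    view2 = rng.normal(y[:, None] * 0.6, 1.0, (n, 5))   # 5 features, view 2

    idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

    aucs, scores = [], []
    for view in (view1, view2):
        clf = LogisticRegression().fit(view[idx_tr], y[idx_tr])
        s = clf.predict_proba(view[idx_te])[:, 1]
        scores.append(s)
        aucs.append(roc_auc_score(y[idx_te], s))

    fused = np.mean(scores, axis=0)          # fuse the two per-view scores
    print("per-view AUCs:", [round(a, 3) for a in aucs])
    print("fused AUC:    ", round(roc_auc_score(y[idx_te], fused), 3))
    ```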

  17. Synchronization of DNA array replication kinetics

    NASA Astrophysics Data System (ADS)

    Manturov, Alexey O.; Grigoryev, Anton V.

    2016-04-01

    In the present work we discuss the features of DNA replication kinetics in the case of many simultaneously elongating DNA fragments. The interaction between replicated DNA fragments is mediated by the free protons that appear with every nucleotide attachment at the free end of an elongating DNA fragment. There is thus a feedback between free proton concentration and DNA-polymerase activity, which manifests as a dependence of the elongation rate on proton concentration. We develop a numerical model based on a cellular automaton that can simulate the elongation stage (growth of DNA strands) under the conditions outlined above, and we study the possibility of synchronization of DNA polymerase movement. The numerical results can be useful for detecting DNA polymerase movement and visualizing the elongation process in the case of massive DNA replication, e.g., under PCR conditions or for the evaluation of "sequencing by synthesis" DNA sequencing devices.
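
    A minimal sketch of the feedback mechanism, assuming invented constants (inhibition strength, proton decay): all strands share a proton pool, each nucleotide attachment releases a proton, and polymerase activity falls as the pool grows.

      import random

      def simulate(n_strands=10, steps=200, k_inhibit=0.01, decay=0.9):
          """Toy cellular automaton: a shared proton pool couples elongation rates."""
          lengths = [0] * n_strands
          protons = 0.0
          for _ in range(steps):
              p_attach = 1.0 / (1.0 + k_inhibit * protons)  # activity drops with [H+]
              for i in range(n_strands):
                  if random.random() < p_attach:
                      lengths[i] += 1        # one nucleotide attached...
                      protons += 1.0         # ...releases one proton
              protons *= decay               # diffusion/buffering removes protons
          return lengths

      print(simulate())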

  18. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  19. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent real-time connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  20. Persistence in eye movement during visual search

    NASA Astrophysics Data System (ADS)

    Amor, Tatiana A.; Reis, Saulo D. S.; Campos, Daniel; Herrmann, Hans J.; Andrade, José S.

    2016-02-01

    Like any cognitive task, visual search involves a number of underlying processes that cannot be directly observed and measured. The movement of the eyes thus represents the most explicit and closest connection we can get to the inner mechanisms governing this cognitive activity. Here we show that the process of eye movement during visual search, consisting of sequences of fixations intercalated by saccades, exhibits distinctive persistent behaviors. Initially, by focusing on saccadic directions and intersaccadic angles, we show that the probability distributions of these measures reveal a clear preference of participants towards a reading-like mechanism (geometrical persistence), whose features and potential advantages for searching/foraging are discussed. We then perform a Multifractal Detrended Fluctuation Analysis (MF-DFA) over the time series of jump magnitudes in the eye trajectory and find that it exhibits a typical multifractal behavior arising from the sequential combination of saccades and fixations. By inspecting the time series composed of only fixational movements, our results reveal instead a monofractal behavior with a Hurst exponent H > 0.5, which indicates the presence of long-range power-law positive correlations (statistical persistence). We expect that our methodological approach can be adopted as a way to understand persistence and strategy-planning during visual search.
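
    A standard monofractal DFA estimate of the Hurst exponent can be sketched as below (MF-DFA additionally varies the moment order q); the scales and test series are illustrative.

      import numpy as np

      def dfa_hurst(x, scales=(16, 32, 64, 128)):
          """Detrended fluctuation analysis; slope of log F(s) vs log s ~ Hurst H."""
          y = np.cumsum(x - np.mean(x))            # integrated profile
          F = []
          for s in scales:
              n_seg = len(y) // s
              segs = y[:n_seg * s].reshape(n_seg, s)
              t = np.arange(s)
              rms = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)     # remove the local linear trend
                  rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
              F.append(np.mean(rms))
          H, _ = np.polyfit(np.log(scales), np.log(F), 1)
          return H

      print(dfa_hurst(np.random.randn(4096)))      # ~0.5 for white noise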

  1. Arrestin gene mutations in autosomal recessive retinitis pigmentosa.

    PubMed

    Nakazawa, M; Wada, Y; Tamai, M

    1998-04-01

    To assess the clinical and molecular genetic features of patients with autosomal recessive retinitis pigmentosa associated with a mutation in the arrestin gene. The study comprised molecular genetic screening, with case reports including DNA analysis and clinical features, conducted at a university medical center on 120 anamnestically unrelated patients with autosomal recessive retinitis pigmentosa. DNA analysis was performed by single-strand conformation polymorphism followed by nucleotide sequencing to search for a mutation in exon 11 of the arrestin gene. Clinical features were characterized by visual acuity, slit-lamp biomicroscopy, fundus examinations, fluorescein angiography, kinetic visual field testing, and electroretinography. We identified 3 unrelated patients with retinitis pigmentosa associated with a homozygous 1-base-pair deletion mutation in codon 309 of the arrestin gene, designated 1147delA. All 3 patients showed pigmentary retinal degeneration in the midperipheral area with or without macular involvement. Patient 1 had a sibling with Oguchi disease associated with the same mutation. Patient 2 demonstrated pigmentary retinal degeneration associated with a golden-yellow reflex in the peripheral fundus. Patients 1 and 3 showed features of retinitis pigmentosa without the golden-yellow fundus reflex. Although arrestin 1147delA is known as a frequent cause of Oguchi disease, this mutation may also be related to the pathogenesis of autosomal recessive retinitis pigmentosa. This phenomenon may provide evidence of variable expressivity of the mutation in the arrestin gene.

  2. Internal attention to features in visual short-term memory guides object learning

    PubMed Central

    Fan, Judith E.; Turk-Browne, Nicholas B.

    2013-01-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. PMID:23954925

  3. Internal attention to features in visual short-term memory guides object learning.

    PubMed

    Fan, Judith E; Turk-Browne, Nicholas B

    2013-11-01

    Attending to objects in the world affects how we perceive and remember them. What are the consequences of attending to an object in mind? In particular, how does reporting the features of a recently seen object guide visual learning? In three experiments, observers were presented with abstract shapes in a particular color, orientation, and location. After viewing each object, observers were cued to report one feature from visual short-term memory (VSTM). In a subsequent test, observers were cued to report features of the same objects from visual long-term memory (VLTM). We tested whether reporting a feature from VSTM: (1) enhances VLTM for just that feature (practice-benefit hypothesis), (2) enhances VLTM for all features (object-based hypothesis), or (3) simultaneously enhances VLTM for that feature and suppresses VLTM for unreported features (feature-competition hypothesis). The results provided support for the feature-competition hypothesis, whereby the representation of an object in VLTM was biased towards features reported from VSTM and away from unreported features (Experiment 1). This bias could not be explained by the amount of sensory exposure or response learning (Experiment 2) and was amplified by the reporting of multiple features (Experiment 3). Taken together, these results suggest that selective internal attention induces competitive dynamics among features during visual learning, flexibly tuning object representations to align with prior mnemonic goals. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of the Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local feature representations are selected for image retrieval because SIFT is more robust to changes in scale and rotation, while SURF is more robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on the Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba, and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
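
    A hedged sketch of the visual-words integration idea: quantize each detector's descriptors into a vocabulary with k-means and concatenate the per-detector histograms. OpenCV's SIFT is used below; SURF requires a contrib build, so that line is left commented.

      import cv2
      import numpy as np
      from sklearn.cluster import KMeans

      def bow_histograms(images, detector, k=100):
          """Quantize local descriptors into k visual words; one histogram per image."""
          per_image = []
          for img in images:
              _, des = detector.detectAndCompute(img, None)
              per_image.append(des if des is not None
                               else np.empty((0, detector.descriptorSize()), np.float32))
          vocab = KMeans(n_clusters=k, n_init=10).fit(np.vstack(per_image))
          hists = []
          for des in per_image:
              words = vocab.predict(des) if len(des) else np.array([], dtype=int)
              h, _ = np.histogram(words, bins=np.arange(k + 1))
              hists.append(h / max(h.sum(), 1))
          return np.array(hists)

      # Integration: concatenate the per-detector histograms for each image.
      # sift_h = bow_histograms(imgs, cv2.SIFT_create())
      # surf_h = bow_histograms(imgs, cv2.xfeatures2d.SURF_create())  # contrib build
      # fused = np.hstack([sift_h, surf_h])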

  5. Foldit Standalone: a video game-derived protein structure manipulation interface using Rosetta.

    PubMed

    Kleffner, Robert; Flatten, Jeff; Leaver-Fay, Andrew; Baker, David; Siegel, Justin B; Khatib, Firas; Cooper, Seth

    2017-09-01

    Foldit Standalone is an interactive graphical interface to the Rosetta molecular modeling package. In contrast to most command-line or batch interactions with Rosetta, Foldit Standalone is designed to allow easy, real-time, direct manipulation of protein structures, while also giving access to the extensive power of Rosetta computations. Derived from the user interface of the scientific discovery game Foldit (itself based on Rosetta), Foldit Standalone has added more advanced features and removed the competitive game elements. Foldit Standalone was built from the ground up with a custom rendering and event engine, configurable visualizations and interactions driven by Rosetta. Foldit Standalone contains, among other features: electron density and contact map visualizations, multiple sequence alignment tools for template-based modeling, rigid body transformation controls, RosettaScripts support and an embedded Lua interpreter. Foldit Standalone is available for download at https://fold.it/standalone , under the Rosetta license, which is free for academic and non-profit users. It is implemented in cross-platform C++ and binary executables are available for Windows, macOS and Linux. Contact: scooper@ccs.neu.edu. © The Author(s) 2017. Published by Oxford University Press.

  6. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

    A visual tracking framework that provides an object detector and tracker, focused on effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches, called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness are improved remarkably, which makes our method more robust in complex realistic settings with all kinds of degraded image quality. Moreover, catching objects through one-time background subtraction, along with offline dictionary training, dramatically reduces computation time, which enables our framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in complex real-world scenarios with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
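
    A rough sketch of the sparse-coding ingredient (synthetic patches and invented sizes; the actual DSCLP clustering and classifier details differ): learn one dictionary offline from positive and negative patches, then use sparse codes over that dictionary as discriminative features.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      pos = rng.normal(0.5, 1.0, size=(300, 64))   # 8x8 patches from vehicle samples
      neg = rng.normal(0.0, 1.0, size=(300, 64))   # 8x8 patches from background
      patches = np.vstack([pos, neg])

      # Offline: learn a shared dictionary on local patches from both classes.
      dico = MiniBatchDictionaryLearning(n_components=32,
                                         transform_algorithm='lasso_lars',
                                         transform_alpha=0.1,
                                         random_state=0).fit(patches)

      # Sparse codes on the shared dictionary serve as discriminative features.
      codes = dico.transform(patches)
      labels = np.repeat([1, 0], 300)
      clf = LinearSVC().fit(codes, labels)
      print(clf.predict(dico.transform(rng.normal(0.5, 1.0, size=(1, 64)))))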

  7. Visual Prediction in Infancy: What Is the Association with Later Vocabulary?

    ERIC Educational Resources Information Center

    Ellis, Erica M.; Gonzalez, Marybel Robledo; Deák, Gedeon O.

    2014-01-01

    Young infants can learn statistical regularities and patterns in sequences of events. Studies have demonstrated a relationship between early sequence learning skills and later development of cognitive and language skills. We investigated the relation between infants' visual response speed to novel event sequences, and their later receptive and…

  8. Dynamic Assessment of Microbial Ecology (DAME): A web app for interactive analysis and visualization of microbial sequencing data

    USDA-ARS?s Scientific Manuscript database

    Dynamic Assessment of Microbial Ecology (DAME) is a shiny-based web application for interactive analysis and visualization of microbial sequencing data. DAME provides researchers not familiar with R programming the ability to access the most current R functions utilized for ecology and gene sequenci...

  9. Visual Sequence Learning in Infancy: Domain-General and Domain-Specific Associations with Language

    ERIC Educational Resources Information Center

    Shafto, Carissa L.; Conway, Christopher M.; Field, Suzanne L.; Houston, Derek M.

    2012-01-01

    Research suggests that nonlinguistic sequence learning abilities are an important contributor to language development (Conway, Bauernschmidt, Huang, & Pisoni, 2010). The current study investigated visual sequence learning (VSL) as a possible predictor of vocabulary development in infants. Fifty-eight 8.5-month-old infants were presented with a…

  10. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering, and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise makes frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal, or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.

  11. CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.

    PubMed

    Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun

    2012-09-15

    To address the impending need for exploring the rapidly increasing transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs; and generates image, JSON and database-compatible CSV text files that are directly utilized by the different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License, available at http://bioinfolab.muohio.edu/CBrowse/. Contact: liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn. Supplementary data are available at Bioinformatics online.
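
    The polymorphism-detection step of such a pipeline might look like the following pysam sketch (thresholds and logic are illustrative assumptions, not CBrowse's actual implementation): flag pileup columns where a non-reference base is frequent.

      import pysam

      def putative_snps(bam_path, fasta_path, min_depth=10, min_frac=0.2):
          """Yield (contig, pos, ref, alt, frac) for mismatch-rich pileup columns."""
          ref = pysam.FastaFile(fasta_path)
          bam = pysam.AlignmentFile(bam_path, "rb")
          for col in bam.pileup():
              bases = [r.alignment.query_sequence[r.query_position]
                       for r in col.pileups
                       if r.query_position is not None and not r.is_del]
              if len(bases) < min_depth:
                  continue
              ref_base = ref.fetch(col.reference_name, col.reference_pos,
                                   col.reference_pos + 1).upper()
              for alt in set(bases) - {ref_base}:
                  frac = bases.count(alt) / len(bases)
                  if frac >= min_frac:
                      yield (col.reference_name, col.reference_pos, ref_base, alt, frac)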

  12. Effects of Spatial and Feature Attention on Disparity-Rendered Structure-From-Motion Stimuli in the Human Visual Cortex

    PubMed Central

    Ip, Ifan Betina; Bridge, Holly; Parker, Andrew J.

    2014-01-01

    An important advance in the study of visual attention has been the identification of a non-spatial component of attention that enhances the response to similar features or objects across the visual field. Here we test whether this non-spatial component can co-select individual features that are perceptually bound into a coherent object. We combined human psychophysics and functional magnetic resonance imaging (fMRI) to demonstrate the ability to co-select individual features from perceptually coherent objects. Our study used binocular disparity and visual motion to define disparity structure-from-motion (dSFM) stimuli. Although the spatial attention system induced strong modulations of the fMRI response in visual regions, the non-spatial system’s ability to co-select features of the dSFM stimulus was less pronounced and variable across subjects. Our results demonstrate that feature and global feature attention effects are variable across participants, suggesting that the feature attention system may be limited in its ability to automatically select features within the attended object. Careful comparison of the task design suggests that even minor differences in the perceptual task may be critical in revealing the presence of global feature attention. PMID:24936974

  13. A Novel Cylindrical Representation for Characterizing Intrinsic Properties of Protein Sequences.

    PubMed

    Yu, Jia-Feng; Dou, Xiang-Hua; Wang, Hong-Bo; Sun, Xiao; Zhao, Hui-Ying; Wang, Ji-Hua

    2015-06-22

    The composition and sequence order of amino acid residues are the two most important characteristics used to describe a protein sequence. Graphical representations facilitate visualization of biological sequences and produce biologically useful numerical descriptors. In this paper, we propose a novel cylindrical representation that places the 20 amino acid residue types on a circle and sequence positions along the z axis. This representation allows visualization of the composition and sequence order of amino acids at the same time. Ten numerical descriptors and one weighted numerical descriptor have been developed to quantitatively describe intrinsic properties of protein sequences on the basis of the cylindrical model. Their application to similarity/dissimilarity analysis of nine ND5 proteins indicated that these numerical descriptors are more effective than several classical numerical matrices. The cylindrical representation obtained here thus provides a useful new tool for visualizing and characterizing protein sequences. An online server is available at http://biophy.dzu.edu.cn:8080/CNumD/input.jsp.
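
    The geometric mapping itself is compact; the sketch below assumes the 20 residue types are equally spaced around the circle (the paper's exact placement and radius may differ).

      import math

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # 20 residue types on a circle
      ANGLE = {aa: 2 * math.pi * i / 20 for i, aa in enumerate(AMINO_ACIDS)}

      def cylindrical_coords(seq, radius=1.0):
          """Map a protein sequence to (x, y, z) points on a cylinder."""
          return [(radius * math.cos(ANGLE[aa]),
                   radius * math.sin(ANGLE[aa]),
                   float(z))                        # z = position in the sequence
                  for z, aa in enumerate(seq) if aa in ANGLE]

      print(cylindrical_coords("MKTAYIAK"))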

  14. Chronodes: Interactive Multifocus Exploration of Event Sequences

    PubMed Central

    Polack, Peter J.; Chen, Shang-Tse; Kahng, Minsuk; De Barbaro, Kaya; Basole, Rahul; Sharmin, Moushumi; Chau, Duen Horng

    2018-01-01

    The advent of mobile health (mHealth) technologies challenges the capabilities of current visualizations, interactive tools, and algorithms. We present Chronodes, an interactive system that unifies data mining and human-centric visualization techniques to support explorative analysis of longitudinal mHealth data. Chronodes extracts and visualizes frequent event sequences that reveal chronological patterns across multiple participant timelines of mHealth data. It then combines novel interaction and visualization techniques to enable multifocus event sequence analysis, which allows health researchers to interactively define, explore, and compare groups of participant behaviors using event sequence combinations. Through summarizing insights gained from a pilot study with 20 behavioral and biomedical health experts, we discuss Chronodes’s efficacy and potential impact in the mHealth domain. Ultimately, we outline important open challenges in mHealth, and offer recommendations and design guidelines for future research. PMID:29515937
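
    The frequent event sequence extraction at the core of such a system can be sketched as a simple n-gram count across participant timelines (event names and thresholds below are invented; Chronodes's actual mining is more sophisticated).

      from collections import Counter

      def frequent_event_sequences(timelines, n=2, min_count=3):
          """Count length-n event runs across participant timelines; keep frequent ones."""
          counts = Counter()
          for events in timelines:
              for i in range(len(events) - n + 1):
                  counts[tuple(events[i:i + n])] += 1
          return {seq: c for seq, c in counts.items() if c >= min_count}

      timelines = [["wake", "sms", "stress"], ["wake", "sms", "walk"], ["wake", "sms"]]
      print(frequent_event_sequences(timelines))   # {('wake', 'sms'): 3}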

  15. Background instrumental music and serial recall.

    PubMed

    Nittono, H

    1997-06-01

    Although speech and vocal music are consistently shown to impair serial recall of visually presented items, instrumental music does not always produce a significant disruption. This study investigated the features of instrumental music that modulate the disruption of serial recall. 24 students were presented with sequences of nine digits and required to recall the digits in order of presentation. Instrumental music was played either forward or backward during the task. Forward music caused significantly more disruption than did silence, whereas the reversed music did not. Some higher-order factor may be at work in the effect of background music on serial recall.

  16. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering, based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
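
    The first step reduces to a small calculation; the sketch below assumes the straightforward reading of "maintain the ratio": find the smallest number of distracters that brings the target rate down to the desired value.

      import math

      def n_distracters(n_targets, n_images, target_rate=0.1):
          """Smallest d >= 0 with n_targets / (n_images + d) <= target_rate."""
          d = math.ceil(n_targets / target_rate - n_images)
          return max(d, 0)

      # A 100-image sequence with 20 targets needs 100 added distracters
      # to bring the target rate down to 10%.
      print(n_distracters(20, 100))  # -> 100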

  17. Timing of Visual Bodily Behavior in Repair Sequences: Evidence from Three Languages

    ERIC Educational Resources Information Center

    Floyd, Simeon; Manrique, Elizabeth; Rossi, Giovanni; Torreira, Francisco

    2016-01-01

    This article expands the study of other-initiated repair in conversation--when one party signals a problem with producing or perceiving another's turn at talk--into the domain of visual bodily behavior. It presents one primary cross-linguistic finding about the timing of visual bodily behavior in repair sequences: if the party who initiates repair…

  18. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201

  19. Immersive visualization for navigation and control of the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Hartman, Frank R.; Cooper, Brian; Maxwell, Scott; Wright, John; Yen, Jeng

    2004-01-01

    The Rover Sequencing and Visualization Program (RSVP) is a suite of tools for sequencing of planetary rovers, which are subject to significant light time delay and thus are unsuitable for teleoperation.

  20. Simultaneous bright‐ and black‐blood whole‐heart MRI for noncontrast enhanced coronary lumen and thrombus visualization

    PubMed Central

    Neji, Radhouene; Phinikaridou, Alkystis; Whitaker, John; Botnar, René M.; Prieto, Claudia

    2017-01-01

    Purpose To develop a 3D whole‐heart Bright‐blood and black‐blOOd phase SensiTive (BOOST) inversion recovery sequence for simultaneous noncontrast enhanced coronary lumen and thrombus/hemorrhage visualization. Methods The proposed sequence alternates the acquisition of two bright‐blood datasets preceded by different preparatory pulses to obtain variations in blood/myocardium contrast, which then are combined in a phase‐sensitive inversion recovery (PSIR)‐like reconstruction to obtain a third, coregistered, black‐blood dataset. The bright‐blood datasets are used for both visualization of the coronary lumen and motion estimation, whereas the complementary black‐blood dataset potentially allows for thrombus/hemorrhage visualization. Furthermore, integration with 2D image‐based navigation enables 100% scan efficiency and predictable scan times. The proposed sequence was compared to conventional coronary MR angiography (CMRA) and PSIR sequences in a standardized phantom and in healthy subjects. Feasibility for thrombus depiction was tested ex vivo. Results With BOOST, the coronary lumen is visualized with significantly higher (P < 0.05) contrast‐to‐noise ratio and vessel sharpness when compared to conventional CMRA. Furthermore, BOOST showed effective blood signal suppression as well as feasibility for thrombus visualization ex vivo. Conclusion A new PSIR sequence for noncontrast enhanced simultaneous coronary lumen and thrombus/hemorrhage detection was developed. The sequence provided improved coronary lumen depiction and showed potential for thrombus visualization. Magn Reson Med 79:1460–1472, 2018. © 2017 International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28722267

  1. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural codings of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not to combination across independent features.

  2. The development of organized visual search

    PubMed Central

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve they become more accurate at locating targets with conjunction of features amongst distractors, but not for targets with distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, like other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  3. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    PubMed

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Between-object and within-object saccade programming in a visual search task.

    PubMed

    Vergilino-Perez, Dorine; Findlay, John M

    2006-07-01

    The role of the perceptual organization of the visual display on eye movement control was examined in two experiments using a task where a two-saccade sequence was directed toward either a single elongated object or three separate shorter objects. In the first experiment, we examined the consequences for the second saccade of a small displacement of the whole display during the first saccade. We found that between-object saccades compensated for the displacement to aim for a target position on the new object whereas within-object saccades did not show compensation but were coded as a fixed motor vector applied irrespective of wherever the preceding saccade landed. In the second experiment, we extended the paradigm to examine saccades performed in different directions. The results suggest that the within-object and between-object saccade distinction is an essential feature of saccadic planning.

  5. Simultaneous high-speed schlieren and OH chemiluminescence imaging in a hybrid rocket combustor at elevated pressures

    NASA Astrophysics Data System (ADS)

    Miller, Victor; Jens, Elizabeth T.; Mechentel, Flora S.; Cantwell, Brian J.; Stanford Propulsion; Space Exploration Group Team

    2014-11-01

    In this work, we present observations of the overall features and dynamics of flow and combustion in a slab-type hybrid rocket combustor. Tests were conducted in the recently upgraded Stanford Combustion Visualization Facility, a hybrid rocket combustor test platform capable of generating constant mass-flux flows of oxygen. High-speed (3 kHz) schlieren and OH chemiluminescence imaging were used to visualize the flow. We present imaging results for the combustion of two different fuel grains, a classic low-regression-rate polymethyl methacrylate (PMMA) and a high-regression-rate paraffin; all tests were conducted in gaseous oxygen. Each fuel grain was tested at multiple free-stream pressures at constant oxidizer mass flux (40 kg/m²·s). The resulting image sequences suggest that aspects of the dynamics and scaling of the system depend strongly on both pressure and type of fuel.

  6. Visual short-term memory always requires general attention.

    PubMed

    Morey, Candice C; Bieler, Malte

    2013-02-01

    The role of attention in visual memory remains controversial; while some evidence has suggested that memory for binding between features demands no more attention than does memory for the same features, other evidence has indicated cognitive costs or mnemonic benefits for explicitly attending to bindings. We attempted to reconcile these findings by examining how memory for binding, for features, and for features during binding is affected by a concurrent attention-demanding task. We demonstrated that performing a concurrent task impairs memory for as few as two visual objects, regardless of whether each object includes one or more features. We argue that this pattern of results reflects an essential role for domain-general attention in visual memory, regardless of the simplicity of the to-be-remembered stimuli. We then discuss the implications of these findings for theories of visual working memory.

  7. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.
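
    The consensus step can be sketched in a few lines (the difficulty scores and simple averaging consensus are assumptions for illustration; MMCL's actual reliability/discriminability analysis is richer): each modality "teacher" scores difficulty, and the agreed-simplest images form the current curriculum.

      import numpy as np

      def consensus_curriculum(difficulty_by_modality, batch=10):
          """Rank unlabeled images by average difficulty across modalities.

          difficulty_by_modality: (n_modalities, n_images), higher = harder.
          Returns indices of the `batch` simplest images (the current curriculum).
          """
          consensus = np.mean(difficulty_by_modality, axis=0)   # teachers' agreement
          return np.argsort(consensus)[:batch]

      scores = np.random.rand(3, 50)        # e.g., color, texture, shape teachers
      print(consensus_curriculum(scores))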

  8. Development of a Large Field of View Shadowgraph System for a 16 Ft. Transonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Talley, Michael A.; Jones, Stephen B.; Goodman, Wesley L.

    2000-01-01

    A large field of view shadowgraph flow visualization system for the Langley 16 ft. Transonic Tunnel (16 ft. TT) has been developed to provide fast, low-cost aerodynamic design concept evaluation capability to support the development of the next generation of commercial and military aircraft and space launch vehicles. Key features of the 16 ft. TT shadowgraph system are: (1) high-resolution (1280 × 1024) digital snapshots and sequences; (2) video recording of shadowgraph at 30 frames per second; (3) pan, tilt, and zoom to find and observe flow features; (4) one-microsecond flash for freeze-frame images; (5) large field of view, approximately 12 × 6 ft; and (6) a low-maintenance, high signal/noise ratio, retro-reflective screen to allow shadowgraph imaging while test section lights are on.

  9. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained promising results. A scale-adaptive CT method based on multi-feature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which additionally gives a prior location of the target. Furthermore, exploiting the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct a naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
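
    The compressive ingredient, a data-independent sparse random projection feeding a naive Bayes classifier, can be sketched with synthetic features (dimensions and data are invented; the original CT formulation uses its own projection and online update).

      import numpy as np
      from sklearn.random_projection import SparseRandomProjection
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)
      # Stand-in patch features for positive (target) and negative (background) samples.
      X = rng.normal(size=(200, 1024))
      y = np.repeat([1, 0], 100)

      # A data-independent sparse random matrix compresses features efficiently.
      proj = SparseRandomProjection(n_components=50, random_state=0).fit(X)
      clf = GaussianNB().fit(proj.transform(X), y)
      print(clf.predict(proj.transform(rng.normal(size=(1, 1024)))))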

  10. Towards accurate localization: long- and short-term correlation filters for tracking

    NASA Astrophysics Data System (ADS)

    Li, Minglangjun; Tian, Chunna

    2018-04-01

    Visual tracking is a challenging problem, especially when using a single model. In this paper, we propose a discriminative correlation filter (DCF) based tracking approach that exploits both long-term and short-term information about the target, named LSTDCF, to improve tracking performance. In addition to a long-term filter learned over the whole sequence, a short-term filter is trained using only features extracted from the most recent frames. The long-term filter tends to capture more of the target's semantics, as more frames are used for training. However, since the target may undergo large appearance changes, features extracted around the target in non-recent frames prevent the long-term filter from locating the target accurately in the current frame. In contrast, the short-term filter learns more spatial details of the target from recent frames but overfits easily. The short-term filter is thus less robust to cluttered backgrounds and prone to drift. We take advantage of both filters and fuse their response maps to make the final estimation. We evaluate our approach on a widely-used benchmark with 100 image sequences and achieve state-of-the-art results.
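
    The fusion step can be sketched directly (the weight alpha and equal-sized response maps are assumptions for illustration): a weighted sum of the two filters' response maps, with the peak giving the location estimate.

      import numpy as np

      def fuse_and_locate(resp_long, resp_short, alpha=0.6):
          """Fuse two correlation-filter response maps and return the peak location.

          alpha weights the long-term (semantic) filter against the short-term
          (detail) filter; both maps share the search-window coordinate frame.
          """
          fused = alpha * resp_long + (1.0 - alpha) * resp_short
          return np.unravel_index(np.argmax(fused), fused.shape)

      print(fuse_and_locate(np.random.rand(50, 50), np.random.rand(50, 50)))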

  11. CEP78 is mutated in a distinct type of Usher syndrome.

    PubMed

    Fu, Qing; Xu, Mingchu; Chen, Xue; Sheng, Xunlun; Yuan, Zhisheng; Liu, Yani; Li, Huajin; Sun, Zixi; Li, Huiping; Yang, Lizhu; Wang, Keqing; Zhang, Fangxia; Li, Yumei; Zhao, Chen; Sui, Ruifang; Chen, Rui

    2017-03-01

    Usher syndrome is a genetically heterogeneous disorder featuring combined visual impairment and hearing loss. Although a dozen genes involved in Usher syndrome have been identified, the genetic basis remains unknown in 20-30% of patients. In this study, we aimed to identify a novel disease-causing gene for a distinct subtype of Usher syndrome. Ophthalmic examinations and hearing tests were performed on patients with Usher syndrome in two consanguineous families. Target capture sequencing was initially performed to screen for causative mutations in known retinal disease-causing loci. Whole exome sequencing (WES) and whole genome sequencing (WGS) were applied to identify novel disease-causing genes. RT-PCR and Sanger sequencing were performed to evaluate the splicing-altering effect of the identified CEP78 variants. Patients from the two independent families show a mild Usher syndrome phenotype featuring juvenile- or adult-onset cone-rod dystrophy and sensorineural hearing loss. WES and WGS identified two homozygous rare variants that affect mRNA splicing of the ciliary gene CEP78. RT-PCR confirmed that the two variants indeed lead to abnormal splicing, resulting in premature stops of protein translation due to frameshifts. Our results provide evidence that CEP78 is a novel disease-causing gene for Usher syndrome, demonstrating an additional link between ciliopathy and the Usher protein network in photoreceptor cells and inner ear hair cells. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  12. Visual spatial attention tracking using high-density SSVEP data for independent brain-computer communication.

    PubMed

    Kelly, Simon P; Lalor, Edmund C; Reilly, Richard B; Foxe, John J

    2005-06-01

    The steady-state visual evoked potential (SSVEP) has been employed successfully in brain-computer interface (BCI) research, but its use in a design entirely independent of eye movement has until recently not been reported. This paper presents strong evidence suggesting that the SSVEP can be used as an electrophysiological correlate of visual spatial attention that may be harnessed on its own, or in conjunction with other correlates, to achieve control in an independent BCI. In this study, 64-channel electroencephalography data were recorded from subjects who covertly attended to one of two bilateral flicker stimuli with superimposed letter sequences. Offline classification of left/right spatial attention was attempted by extracting SSVEPs at optimal channels selected for each subject on the basis of the scalp distribution of SSVEP magnitudes. This yielded an average accuracy of approximately 71% across ten subjects (highest 86%), comparable across two separate cases in which the flicker frequencies were set within and outside the alpha range, respectively. Further, combining SSVEP features with attention-dependent parieto-occipital alpha band modulations resulted in an average accuracy of 79% (highest 87%).
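
    A minimal sketch of the offline decision rule, assuming preselected channels and two known flicker frequencies: compare the spectral magnitude at each frequency and pick the attended side.

      import numpy as np

      def classify_attention(eeg, fs, f_left, f_right):
          """Decide left/right covert attention from SSVEP magnitudes.

          eeg: (n_channels, n_samples) from subject-optimal parieto-occipital sites.
          """
          spectrum = np.abs(np.fft.rfft(eeg, axis=1))
          freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
          def power(f):                     # magnitude at the bin nearest f
              return spectrum[:, np.argmin(np.abs(freqs - f))].mean()
          return 'left' if power(f_left) > power(f_right) else 'right'

      fs = 256
      t = np.arange(2 * fs) / fs            # 2 s of synthetic data
      eeg = np.sin(2 * np.pi * 15 * t)[None, :] + 0.5 * np.random.randn(1, 2 * fs)
      print(classify_attention(eeg, fs, f_left=15, f_right=20))   # -> 'left'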

  13. Dynamic analysis and pattern visualization of forest fires.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events in Portugal during the period from 1980 up to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns.

  14. Dynamic Analysis and Pattern Visualization of Forest Fires

    PubMed Central

    Lopes, António M.; Tenreiro Machado, J. A.

    2014-01-01

    This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale, or the emergence of long range correlations and persistent memory. This study addresses a public domain forest fires catalogue, containing information on events in Portugal during the period from 1980 up to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we consider mutual information to correlate annual patterns. We use visualization trees, generated by hierarchical clustering algorithms, in order to compare and to extract relationships among the data. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point. Objects that are perceived to be similar to each other are placed on the map forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns. PMID:25137393
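
    The MDS step can be sketched as follows, assuming a precomputed dissimilarity matrix (here random stand-in values; in the study it would derive from mutual information between annual occurrence series):

      import numpy as np
      from sklearn.manifold import MDS

      # Stand-in dissimilarity matrix between annual fire-occurrence series,
      # e.g. D[i, j] = 1 - normalized mutual information(year_i, year_j).
      rng = np.random.default_rng(1)
      A = rng.random((33, 33))                 # 1980..2012 -> 33 years
      D = (A + A.T) / 2                        # symmetrize
      np.fill_diagonal(D, 0.0)

      coords = MDS(n_components=2, dissimilarity='precomputed',
                   random_state=0).fit_transform(D)
      print(coords[:3])                        # 2-D map; nearby points = similar years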

  15. Visual ModuleOrganizer: a graphical interface for the detection and comparative analysis of repeat DNA modules

    PubMed Central

    2014-01-01

    Background DNA repeats, such as transposable elements, minisatellites and palindromic sequences, are abundant in sequences and have been shown to have significant and functional roles in the evolution of host genomes. In a previous study, we introduced the concept of a repeat DNA module, a flexible motif present in at least two occurrences in the sequences. This concept was embedded into ModuleOrganizer, a tool allowing the detection of repeat modules in a set of sequences. However, its implementation remains difficult for larger sequences. Results Here we present Visual ModuleOrganizer, a Java graphical interface that enables a new and optimized version of the ModuleOrganizer tool. To implement this version, it was recoded in C++ with compressed suffix tree data structures. This leads to lower memory usage (at least a 120-fold decrease on average) and cuts computation time during module detection in large sequences by at least a factor of four. The Visual ModuleOrganizer interface allows users to easily choose ModuleOrganizer parameters and to display the results graphically. Moreover, Visual ModuleOrganizer dynamically handles graphical results through four main parameters: gene annotations, overlapping modules with known annotations, location of the module in a minimal number of sequences, and the minimal length of the modules. As a case study, the analysis of FoldBack4 sequences clearly demonstrated that our tools can be extended to comparative and evolutionary analyses of any repeat sequence elements in a set of genomic sequences. With the increasing number of sequences available in public databases, it is now possible to perform comparative analyses of repeated DNA modules in a graphical and user-friendly manner within a reasonable time period. Availability The Visual ModuleOrganizer interface and the new version of the ModuleOrganizer tool are freely available at: http://lcb.cnrs-mrs.fr/spip.php?rubrique313. PMID:24678954
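
    As a toy illustration of the module concept only (ModuleOrganizer detects flexible motifs with compressed suffix trees; this sketch handles exact k-mers): report subsequences present in at least two input sequences.

      from collections import defaultdict

      def repeat_modules(seqs, k=8, min_seqs=2):
          """Toy module finder: k-mers occurring in at least `min_seqs` sequences."""
          where = defaultdict(set)
          for i, s in enumerate(seqs):
              for j in range(len(s) - k + 1):
                  where[s[j:j + k]].add(i)
          return {kmer: hits for kmer, hits in where.items() if len(hits) >= min_seqs}

      print(repeat_modules(["ACGTACGTTTGACGT", "GGACGTACGTCC"], k=6))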

  16. Visual feature extraction and establishment of visual tags in the intelligent visual internet of things

    NASA Astrophysics Data System (ADS)

    Zhao, Yiqun; Wang, Zhihui

    2015-12-01

    The Internet of things (IOT) is a kind of intelligent network that can be used to locate, track, identify and supervise people and objects. One of the important core technologies of the intelligent visual internet of things (IVIOT) is the intelligent visual tag system. In this paper, we investigate visual feature extraction and the establishment of visual tags for the human face based on the ORL face database. First, we use the principal component analysis (PCA) algorithm for face feature extraction; then we adopt the support vector machine (SVM) for classification and face recognition; finally, we establish a visual tag for each face that has been classified. We conducted an experiment on a group of face images; the results show that the proposed algorithm performs well and can present the visual tags of objects conveniently.
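
    The described pipeline maps naturally onto scikit-learn, whose fetch_olivetti_faces loader provides the ORL face database; the component count and SVM parameters below are illustrative assumptions, not the paper's settings.

      from sklearn.datasets import fetch_olivetti_faces
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      faces = fetch_olivetti_faces()               # the ORL face database
      X_tr, X_te, y_tr, y_te = train_test_split(
          faces.data, faces.target, test_size=0.25, stratify=faces.target,
          random_state=0)

      # PCA extracts the "eigenface" features; the SVM classifies identities.
      model = make_pipeline(PCA(n_components=50, whiten=True, random_state=0),
                            SVC(kernel='rbf', C=10, gamma='scale'))
      model.fit(X_tr, y_tr)
      print(model.score(X_te, y_te))     # the predicted identity is the visual tag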

  17. Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks

    PubMed Central

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2018-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard), which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, considering that recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs and the dependencies among them. PMID:27896980
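
    The saliency-map idea can be sketched with a toy model (the network below is a stand-in, not a DeMo Dashboard model): take the gradient of the prediction score with respect to a one-hot DNA input and reduce it per position.

      import torch
      import torch.nn as nn

      # Toy TFBS classifier: one-hot DNA (batch, 4, length) -> binding score.
      model = nn.Sequential(nn.Conv1d(4, 8, kernel_size=5, padding=2),
                            nn.ReLU(),
                            nn.AdaptiveMaxPool1d(1),
                            nn.Flatten(),
                            nn.Linear(8, 1))

      seq = "ACGTACGTGCAT"
      onehot = torch.zeros(1, 4, len(seq))
      for i, b in enumerate(seq):
          onehot[0, "ACGT".index(b), i] = 1.0
      onehot.requires_grad_(True)

      score = model(onehot).squeeze()
      score.backward()                          # d(score)/d(input)
      saliency = onehot.grad.abs().sum(dim=1)   # per-nucleotide importance
      print(saliency)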

  18. The Roles of Visualization and Symbolism in the Potential and Actual Infinity of the Limit Process

    ERIC Educational Resources Information Center

    Kidron, Ivy; Tall, David

    2015-01-01

    A teaching experiment, using Mathematica to investigate the convergence of a sequence of functions visually as a sequence of objects (graphs) converging onto a fixed object (the graph of the limit function), is here used to analyze how the approach can support the dynamic blending of visual and symbolic representations that has the potential to lead…

  19. Bouncing Ball with a Uniformly Varying Velocity in a Metronome Synchronization Task.

    PubMed

    Huang, Yingyu; Gu, Li; Yang, Junkai; Wu, Xiang

    2017-09-21

    Sensorimotor synchronization (SMS), a fundamental human ability to coordinate movements with external rhythms, has long been thought to be modality specific. In the canonical metronome synchronization task that requires tapping a finger along with an isochronous sequence, a well-established finding is that synchronization is much more stable with an auditory sequence consisting of auditory tones than with a visual sequence consisting of visual flashes. However, recent studies have shown that periodically moving visual stimuli can substantially improve synchronization compared with visual flashes. In particular, synchronization with a visual bouncing ball that has a uniformly varying velocity was found to be no less stable than synchronization with auditory tones. Here, the current protocol describes the application of the bouncing ball with a uniformly varying velocity in a metronome synchronization task. The usage of the bouncing ball in sequences with different inter-onset intervals (IOIs) is included. The representative results illustrate the synchronization performance of the bouncing ball, as compared with the performances of auditory tones and visual flashes. Given its synchronization performance comparable to that of auditory tones, the bouncing ball is of particular importance for addressing the current research topic of whether modality-specific mechanisms underlie SMS.

  20. Feature confirmation in object perception: Feature integration theory 26 years on from the Treisman Bartlett lecture.

    PubMed

    Humphreys, Glyn W

    2016-10-01

    The Treisman Bartlett lecture, reported in the Quarterly Journal of Experimental Psychology in 1988, provided a major overview of the feature integration theory of attention. This has continued to be a dominant account of human visual attention to this day. The current paper provides a summary of the work reported in the lecture and an update on critical aspects of the theory as applied to visual object perception. The paper highlights the emergence of findings that pose significant challenges to the theory and which suggest that revisions are required that allow for (a) several rather than a single form of feature integration, (b) some forms of feature integration to operate preattentively, (c) stored knowledge about single objects and interactions between objects to modulate perceptual integration, (d) the application of feature-based inhibition to object files where visual features are specified, which generates feature-based spreading suppression and scene segmentation, and (e) a role for attention in feature confirmation rather than feature integration in visual selection. A feature confirmation account of attention in object perception is outlined.

  1. Separate Capacities for Storing Different Features in Visual Working Memory

    ERIC Educational Resources Information Center

    Wang, Benchi; Cao, Xiaohua; Theeuwes, Jan; Olivers, Christian N. L.; Wang, Zhiguo

    2017-01-01

    Recent empirical and theoretical work suggests that visual features such as color and orientation can be stored or retrieved independently in visual working memory (VWM), even in cases when they belong to the same object. Yet it remains unclear whether different feature dimensions have their own capacity limits, or whether they compete for shared…

  2. Remembering Complex Objects in Visual Working Memory: Do Capacity Limits Restrict Objects or Features?

    PubMed Central

    Hardman, Kyle; Cowan, Nelson

    2014-01-01

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739

  3. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    NASA Astrophysics Data System (ADS)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

    Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Actually, color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than the state-of-the-art SIQA methods.

  4. Sensitivity to structure in action sequences: An infant event-related potential study.

    PubMed

    Monroy, Claire D; Gerson, Sarah A; Domínguez-Martínez, Estefanía; Kaduk, Katharina; Hunnius, Sabine; Reid, Vincent

    2017-05-06

    Infants are sensitive to structure and patterns within continuous streams of sensory input. This sensitivity relies on statistical learning, the ability to detect predictable regularities in spatial and temporal sequences. Recent evidence has shown that infants can detect statistical regularities in action sequences they observe, but little is known about the neural processes that give rise to this ability. In the current experiment, we combined electroencephalography (EEG) with eye-tracking to identify electrophysiological markers that indicate whether 8-11-month-old infants detect violations of learned regularities in action sequences, and to relate these markers to behavioral measures of anticipation during learning. In a learning phase, infants observed an actor performing a sequence featuring two deterministic pairs embedded within an otherwise random sequence. Thus, the first action of each pair was predictive of what would occur next. One of the pairs caused an action-effect, whereas the second did not. In a subsequent test phase, infants observed another sequence that included deviant pairs, violating the previously observed action pairs. Event-related potential (ERP) responses were analyzed and compared between the deviant and the original action pairs. Findings reveal that infants demonstrated a greater Negative central (Nc) ERP response to the deviant actions for the pair that caused the action-effect, which was consistent with their visual anticipations during the learning phase. Findings are discussed in terms of the neural and behavioral processes underlying perception and learning of structured action sequences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Combining local and global limitations of visual search.

    PubMed

    Põder, Endel

    2017-04-01

    There are different opinions about the roles of local interactions and central processing capacity in visual search. This study attempts to clarify the problem using a new version of relevant set cueing. A central precue indicates two symmetrical segments (that may contain a target object) within a circular array of objects presented briefly around the fixation point. The number of objects in the relevant segments and the density of objects in the array were varied independently. Three types of search experiments were run: (a) search for a simple visual feature (color, size, and orientation); (b) conjunctions of simple features; and (c) spatial configurations of simple features (rotated Ts). For spatial configuration stimuli, the results were consistent with a fixed global processing capacity and standard crowding zones. For simple features and their conjunctions, the results differed depending on the features involved. Whereas color search exhibits virtually no capacity limits or crowding, search for an orientation target was limited by both. Results for conjunctions of features can be partly explained by the results for the respective features. This study shows that visual search is limited by both local interference and global capacity, and that the limitations differ across visual features.

  6. Deep Multimodal Distance Metric Learning Using Click Constraints for Image Ranking.

    PubMed

    Yu, Jun; Yang, Xiaokang; Gao, Fei; Tao, Dacheng

    2017-12-01

    How do we retrieve images accurately? And how do we rank a group of images precisely and efficiently for specific queries? These problems are critical for researchers and engineers developing novel image search engines. First, it is important to obtain an appropriate description that effectively represents the images. In this paper, multimodal features are considered for describing images. The images' unique properties are reflected by visual features, which are correlated with each other. However, semantic gaps always exist between images' visual features and their semantics. Therefore, we utilize click features to reduce the semantic gap. The second key issue is learning an appropriate distance metric to combine these multimodal features. This paper develops a novel deep multimodal distance metric learning (Deep-MDML) method. A structured ranking model is adopted to utilize both visual and click features in distance metric learning (DML). Specifically, images and their related ranking results are first collected to form the training set, and multimodal features, including click and visual features, are collected for these images. Next, a group of autoencoders is applied to obtain an initial distance metric in different visual spaces, and an MDML method is used to assign optimal weights to the different modalities. We then use alternating optimization to train the ranking model, which is used to rank new queries with click features. Compared with existing image ranking methods, the proposed method adopts a new ranking model that uses multimodal features, including click features and visual features, in DML. We conducted experiments evaluating the proposed Deep-MDML on two benchmark data sets, and the results validate the effectiveness of the method.

  7. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    NASA Astrophysics Data System (ADS)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal, along with spectral and spatial, features. Domains such as speech recognition or neural machine translation work with inherently temporal data and, today, achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud filtering as a preprocessing step in many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies in our experiments on a large number of crop classes with minimal preprocessing, compared to other classification approaches.

  8. Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.

    PubMed

    Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu

    2017-01-01

    Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts from extracting local features in the lower visual pathways followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target on a black background, which differed from the distractors (white circles) in color (e.g., a red circle target), local features (e.g., a white square target), a global feature (e.g., a white ring with a hole as target) or their combinations (e.g., a red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detection of the hole suffered interference from the local features of the hole (e.g., a white ring with a squared hole). These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide the behavioral constraints for identifying the underlying neural substrates.

  9. CFGP: a web-based, comparative fungal genomics platform

    PubMed Central

    Park, Jongsun; Park, Bongsoo; Jung, Kyongyong; Jang, Suwang; Yu, Kwangyul; Choi, Jaeyoung; Kong, Sunghyung; Park, Jaejin; Kim, Seryun; Kim, Hyojeong; Kim, Soonok; Kim, Jihyun F.; Blair, Jaime E.; Lee, Kwangwon; Kang, Seogchan; Lee, Yong-Hwan

    2008-01-01

    Since the completion of the Saccharomyces cerevisiae genome sequencing project in 1996, the genomes of over 80 fungal species have been sequenced or are currently being sequenced. Resulting data provide opportunities for studying and comparing fungal biology and evolution at the genome level. To support such studies, the Comparative Fungal Genomics Platform (CFGP; http://cfgp.snu.ac.kr), a web-based multifunctional informatics workbench, was developed. The CFGP comprises three layers: the basal layer, the middleware, and the user interface. The data warehouse in the basal layer contains standardized genome sequences of 65 fungal species. The middleware processes queries via six analysis tools, including BLAST, ClustalW, InterProScan, SignalP 3.0, PSORT II and a newly developed tool named BLASTMatrix. BLASTMatrix permits the identification and visualization of genes homologous to a query across multiple species. The Data-driven User Interface (DUI) of the CFGP was built on a new concept of pre-collecting data and post-executing analysis instead of the ‘fill-in-the-form-and-press-SUBMIT’ user interfaces utilized by most bioinformatics sites. A tool termed Favorite, which supports the management of encapsulated sequence data and provides a personalized data repository to users, is another novel feature of the DUI. PMID:17947331

  10. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    In this paper, the automatic lip-reading problem is investigated and an innovative approach to it is proposed. This new visual speech recognition (VSR) approach depends on the signature of the word itself, obtained by a hybrid feature extraction method based on geometric, appearance, and image-transform features. The proposed VSR approach is termed "visual words". It consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth; the mutual information and a quality measure between the DWT of the current region of interest (ROI) and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical to horizontal edges of the ROI; the appearance of the tongue; and the appearance of teeth. Each spoken word is represented by eight signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using k-nearest neighbors (KNN) and dynamic time warping (DTW). This approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation demonstrated the efficiency of the visual words approach and showed that VSR is a speaker-dependent problem.
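
    The DTW component is standard and worth making concrete. Below is a minimal dynamic time warping distance between two 1-D signals of the kind extracted per feature above; the recursion is the textbook one and does not depend on the paper's specific features.

        def dtw_distance(a, b):
            # Textbook DTW: cost[i][j] = local distance + best predecessor.
            n, m = len(a), len(b)
            inf = float("inf")
            cost = [[inf] * (m + 1) for _ in range(n + 1)]
            cost[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(a[i - 1] - b[j - 1])
                    cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                         cost[i][j - 1],      # deletion
                                         cost[i - 1][j - 1])  # match
            return cost[n][m]

        print(dtw_distance([0, 1, 2, 3, 2], [0, 2, 3, 2]))    # warped pair

    A KNN classifier can then label an unknown word by the training word whose per-feature signals give the smallest summed DTW distance.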

  11. SigWin-detector: a Grid-enabled workflow for discovering enriched windows of genomic features related to DNA sequences.

    PubMed

    Inda, Márcia A; van Batenburg, Marinus F; Roos, Marco; Belloum, Adam S Z; Vasunin, Dmitry; Wibisono, Adianto; van Kampen, Antoine H C; Breit, Timo M

    2008-08-08

    Chromosome location is often used as a scaffold to organize genomic information in both the living cell and molecular biological research. Thus, ever-increasing amounts of data about genomic features are stored in public databases and can be readily visualized by genome browsers. To perform in silico experimentation conveniently with this genomics data, biologists need tools to process and compare datasets routinely and explore the obtained results interactively. The complexity of such experimentation requires these tools to be based on an e-Science approach, hence generic, modular, and reusable. A virtual laboratory environment with workflows, workflow management systems, and Grid computation are therefore essential. Here we apply an e-Science approach to develop SigWin-detector, a workflow-based tool that can detect significantly enriched windows of (genomic) features in a (DNA) sequence in a fast and reproducible way. For proof-of-principle, we utilize a biological use case to detect regions of increased and decreased gene expression (RIDGEs and anti-RIDGEs) in human transcriptome maps. We improved the original method for RIDGE detection by replacing the costly step of estimation by random sampling with a faster analytical formula for computing the distribution of the null hypothesis being tested and by developing a new algorithm for computing moving medians. SigWin-detector was developed using the WS-VLAM workflow management system and consists of several reusable modules that are linked together in a basic workflow. The configuration of this basic workflow can be adapted to satisfy the requirements of the specific in silico experiment. As we show with the results from analyses in the biological use case on RIDGEs, SigWin-detector is an efficient and reusable Grid-based tool for discovering windows enriched for features of a particular type in any sequence of values. Thus, SigWin-detector provides the proof-of-principle for the modular e-Science based concept of integrative bioinformatics experimentation.
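
    As a rough illustration of the windowed analysis, the sketch below computes a moving median over a sequence of values and flags windows whose median exceeds a threshold. The fixed threshold is a placeholder for SigWin-detector's analytical null distribution, and the data are synthetic.

        import numpy as np

        def enriched_windows(values, window=9, threshold=1.0):
            # Moving median over a sliding window; report window centers
            # whose median exceeds the (placeholder) significance threshold.
            values = np.asarray(values, dtype=float)
            half = window // 2
            medians = np.array([np.median(values[i - half:i + half + 1])
                                for i in range(half, len(values) - half)])
            return np.flatnonzero(medians > threshold) + half

        expr = np.concatenate([np.random.normal(0, 1, 50),
                               np.random.normal(2, 1, 20),   # RIDGE-like stretch
                               np.random.normal(0, 1, 50)])
        print(enriched_windows(expr))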

  12. Visualizing and Clustering Protein Similarity Networks: Sequences, Structures, and Functions.

    PubMed

    Mai, Te-Lun; Hu, Geng-Ming; Chen, Chi-Ming

    2016-07-01

    Research in the recent decade has demonstrated the usefulness of protein network knowledge in furthering the study of molecular evolution of proteins, understanding the robustness of cells to perturbation, and annotating new protein functions. In this study, we aimed to provide a general clustering approach to visualize the sequence-structure-function relationship of protein networks and to investigate possible causes for inconsistency in protein classifications based on sequences, structures, and functions. Such visualization of protein networks could facilitate our understanding of the overall relationship among proteins and help researchers comprehend various protein databases. As a demonstration, we clustered 1437 enzymes by their sequences and structures using the minimum span clustering (MSC) method. The general structure of this protein network was delineated at two clustering resolutions, and the second-level MSC clustering was found to be highly similar to existing enzyme classifications. The clusterings of these enzymes based on sequence, structure, and function information are consistent with one another. For proteases, the Jaccard similarity coefficient is 0.86 between sequence and function classifications, 0.82 between sequence and structure classifications, and 0.78 between structure and function classifications. From our clustering results, we discussed possible examples of divergent and convergent evolution of enzymes. Our clustering approach provides a panoramic view of the sequence-structure-function network of proteins, helps visualize the relations between related proteins intuitively, and is useful in predicting the structure and function of newly determined protein sequences.
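
    The abstract does not spell out how the Jaccard coefficients are computed; one standard formulation for comparing two classifications counts pairs of items grouped together under each. The sketch below implements that pairwise version with illustrative labels.

        from itertools import combinations

        def jaccard_between_partitions(labels_a, labels_b):
            def pairs(labels):
                # Pairs of items that a partition places in the same cluster.
                return {(i, j) for i, j in combinations(range(len(labels)), 2)
                        if labels[i] == labels[j]}
            a, b = pairs(labels_a), pairs(labels_b)
            return len(a & b) / len(a | b) if (a | b) else 1.0

        seq_clusters  = [0, 0, 1, 1, 2, 2]   # sequence-based classification
        func_clusters = [0, 0, 1, 2, 2, 2]   # function-based classification
        print(jaccard_between_partitions(seq_clusters, func_clusters))  # 0.4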

  13. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  14. Impact of feature saliency on visual category learning.

    PubMed

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles discussed here are often ignored in categorization studies.

  15. Impact of feature saliency on visual category learning

    PubMed Central

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles discussed here are often ignored in categorization studies. PMID:25954220

  16. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    PubMed Central

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628

  17. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature-subset-score-based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in a 2-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
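
    The two-stage pipeline (score features, keep a subset, embed with t-SNE) can be sketched as follows; the variance-based score stands in for the paper's feature subset score criterion, and the data are synthetic.

        import numpy as np
        from sklearn.manifold import TSNE

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))        # 300 samples, 40 sensor features
        X[:150, :5] += 3.0                    # 5 informative features

        scores = X.var(axis=0)                # placeholder feature score
        subset = np.argsort(scores)[-10:]     # keep the 10 top-scoring features
        embedding = TSNE(n_components=2, perplexity=30).fit_transform(X[:, subset])
        print(embedding.shape)                # (300, 2)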

  18. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and thus should be eliminated a priori. This paper proposes a feature-subset-score-based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in a 2-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a large-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  19. Visual feature integration with an attention deficit.

    PubMed

    Arguin, M; Cavanagh, P; Joanette, Y

    1994-01-01

    Treisman's feature integration theory proposes that the perception of illusory conjunctions of correctly encoded visual features is due to the failure of an attentional process. This hypothesis was examined by studying brain-damaged subjects who had previously been shown to have difficulty in attending to contralesional stimulation. These subjects exhibited a massive feature integration deficit for contralesional stimulation relative to ipsilesional displays. In contrast, both normal age-matched controls and brain-damaged subjects who did not exhibit any evidence of an attention deficit showed comparable feature integration performance with left- and right-hemifield stimulation. These observations indicate the crucial function of attention for visual feature integration in normal perception.

  20. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocals of reaction times) for their ability to predict multiple-feature searches. Multiple-feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable, i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features; in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly, and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328

  1. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    PubMed

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
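
    The forward model described above reduces to a weighted sum of spatially pooled feature maps. A minimal numpy sketch, with illustrative shapes and a Gaussian pooling field standing in for the fitted "where" parameters:

        import numpy as np

        H = W = 16                                # feature-map resolution
        K = 8                                     # number of feature maps
        feature_maps = np.random.rand(K, H, W)    # stimulus -> features

        # "Where": a single Gaussian pooling field shared by all maps.
        y, x = np.mgrid[0:H, 0:W]
        x0, y0, sigma = 8.0, 6.0, 2.5
        pool = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
        pool /= pool.sum()

        # "What": one weight per feature map.
        w = np.random.randn(K)

        pooled = (feature_maps * pool).sum(axis=(1, 2))   # shape (K,)
        voxel_activity = w @ pooled                       # predicted response
        print(voxel_activity)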

  2. Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) is the software tool for use in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight-code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover-predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (IDD) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities. Thus, RSVP, being highly data driven, may be tailored to other missions with minimal effort. In addition, RSVP uses a distributed, message-passing architecture to allow multitasking, and collaborative visualization and sequence development by scattered team members.

  3. Feature and Region Selection for Visual Learning.

    PubMed

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results on the PASCAL VOC 2007, MSR Action Dataset II and YouTube datasets illustrate the benefits of our approach.

  4. The research on multi-projection correction based on color coding grid array

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    Multi-channel projection systems suffer from several disadvantages, such as poor timeliness and heavy manual intervention. To address these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the stripe image is meshed. At each colored grid intersection we center a solid white circle, and these circles form the feature sample set of the projected images. This gives the constructed feature sample set both precise perceptual localization and good noise immunity. Second, we establish a subpixel geometric mapping between the projection screen and the individual projectors through structured-light encoding and decoding based on the color array, and this geometric mapping is used to solve for the homography matrix of each projector. Finally, because brightness inconsistency in the overlap regions of the multi-channel projection prevents the corrected image from meeting the observer's visual needs, we apply a luminance fusion correction algorithm to obtain a visually consistent projected image. The experimental results show that this method not only effectively solves the problems of distortion of the multi-projection screen and luminance interference in overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
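
    The role of the De Bruijn sequence is to make every local window of stripe colors globally unique, so that any n consecutive stripes identify their absolute position. Below is the standard FKM construction of a De Bruijn sequence B(k, n); the choice of k = 3 colors and window n = 3 is illustrative, not the paper's encoding.

        def de_bruijn(k, n):
            # Standard FKM construction: concatenating Lyndon words whose
            # length divides n yields a cyclic sequence in which every
            # length-n window over k symbols occurs exactly once.
            a = [0] * k * n
            sequence = []

            def db(t, p):
                if t > n:
                    if n % p == 0:
                        sequence.extend(a[1:p + 1])
                else:
                    a[t] = a[t - p]
                    db(t + 1, p)
                    for j in range(a[t - p] + 1, k):
                        a[t] = j
                        db(t + 1, t)

            db(1, 1)
            return sequence

        print(de_bruijn(3, 3))   # 27 symbols; any 3 consecutive stripes
                                 # (cyclically) pinpoint their position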

  5. Cross-Modal Facilitation in Speech Prosody

    ERIC Educational Resources Information Center

    Foxton, Jessica M.; Riviere, Louis-David; Barone, Pascal

    2010-01-01

    Speech prosody has traditionally been considered solely in terms of its auditory features, yet correlated visual features exist, such as head and eyebrow movements. This study investigated the extent to which visual prosodic features are able to affect the perception of the auditory features. Participants were presented with videos of a speaker…

  6. Influence of time and length size feature selections for human activity sequences recognition.

    PubMed

    Fang, Hongqing; Chen, Long; Srinivasan, Raghavendiran

    2014-01-01

    In this paper, the Viterbi algorithm based on a hidden Markov model is applied to recognize activity sequences from observed sensor events. Alternative selections of the time feature values of sensor events and of the activity length feature values are tested, and the resulting activity sequence recognition performance of the Viterbi algorithm is evaluated. The results show that selecting larger time feature values of sensor events and/or smaller activity length feature values yields relatively better activity sequence recognition performance. © 2013 ISA. Published by ISA. All rights reserved.
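
    For concreteness, here is a minimal Viterbi decoder for a discrete hidden Markov model, of the kind used to map observed sensor events to the most likely activity sequence; the two-activity toy parameters are illustrative, not those of the paper.

        import numpy as np

        def viterbi(obs, pi, A, B):
            # pi: initial probs (S,); A: transitions (S, S); B: emissions (S, O).
            S, T = len(pi), len(obs)
            logp = np.zeros((T, S))
            back = np.zeros((T, S), dtype=int)
            logp[0] = np.log(pi) + np.log(B[:, obs[0]])
            for t in range(1, T):
                scores = logp[t - 1][:, None] + np.log(A)   # (from, to)
                back[t] = scores.argmax(axis=0)
                logp[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
            path = [int(logp[-1].argmax())]
            for t in range(T - 1, 0, -1):                   # backtrack
                path.append(int(back[t][path[-1]]))
            return path[::-1]

        pi = np.array([0.6, 0.4])                           # two activities
        A = np.array([[0.8, 0.2], [0.3, 0.7]])              # transitions
        B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])    # sensor emissions
        print(viterbi([0, 0, 1, 2, 2], pi, A, B))           # [0, 0, 1, 1, 1]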

  7. Attentive Tracking Disrupts Feature Binding in Visual Working Memory

    PubMed Central

    Fougnie, Daryl; Marois, René

    2009-01-01

    One of the most influential theories in visual cognition proposes that attention is necessary to bind different visual features into coherent object percepts (Treisman & Gelade, 1980). While considerable evidence supports a role for attention in perceptual feature binding, whether attention plays a similar function in visual working memory (VWM) remains controversial. To test the attentional requirements of VWM feature binding, here we gave participants an attention-demanding multiple object tracking task during the retention interval of a VWM task. Results show that the tracking task disrupted memory for color-shape conjunctions above and beyond any impairment to working memory for object features, and that this impairment was larger when the VWM stimuli were presented at different spatial locations. These results demonstrate that the role of visuospatial attention in feature binding is not unique to perception, but extends to the working memory of these perceptual representations as well. PMID:19609460

  8. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
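
    The generic-decoding recipe (regress features onto brain activity, then identify the category whose known feature vector best matches the prediction) can be sketched as follows with synthetic placeholder data; Ridge regression stands in for whatever regularized decoder one prefers.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n_voxels, n_feat = 200, 64
        W = rng.normal(size=(n_voxels, n_feat))             # synthetic "brain"

        train_feat = rng.normal(size=(100, n_feat))         # training images
        train_fmri = train_feat @ W.T + rng.normal(scale=0.5,
                                                   size=(100, n_voxels))
        decoder = Ridge(alpha=1.0).fit(train_fmri, train_feat)

        cats = rng.normal(size=(10, n_feat))                # candidate categories
        test_fmri = cats[3] @ W.T + rng.normal(scale=0.5, size=n_voxels)

        pred = decoder.predict(test_fmri[None, :])[0]       # predicted features
        corrs = [np.corrcoef(pred, f)[0, 1] for f in cats]
        print("identified category:", int(np.argmax(corrs)))  # ideally 3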

  9. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    … models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning … a limitation of simple RNN models which strictly integrate state information over time is known as the “vanishing gradient” effect: the ability to …

  10. Working memory resources are shared across sensory modalities.

    PubMed

    Salmela, V R; Moisala, M; Alho, K

    2014-10-01

    A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, both containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to those of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.

  11. Trajectory Recognition as the Basis for Object Individuation: A Functional Model of Object File Instantiation and Object-Token Encoding

    PubMed Central

    Fields, Chris

    2011-01-01

    The perception of persisting visual objects is mediated by transient intermediate representations, object files, that are instantiated in response to some, but not all, visual trajectories. The standard object file concept does not, however, provide a mechanism sufficient to account for all experimental data on visual object persistence, object tracking, and the ability to perceive spatially disconnected stimuli as continuously existing objects. Based on relevant anatomical, functional, and developmental data, a functional model is constructed that bases visual object individuation on the recognition of temporal sequences of apparent center-of-mass positions that are specifically identified as trajectories by dedicated “trajectory recognition networks” downstream of the medial–temporal motion-detection area. This model is shown to account for a wide range of data, and to generate a variety of testable predictions. Individual differences in the recognition, abstraction, and encoding of trajectory information are expected to generate distinct object persistence judgments and object recognition abilities. Dominance of trajectory information over feature information in stored object tokens during early infancy, in particular, is expected to disrupt the ability to re-identify human and other individuals across perceptual episodes, and lead to developmental outcomes with characteristics of autism spectrum disorders. PMID:21716599

  12. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the feature space is defined through the concept of visual similarity and non-linear low-dimensional embedding, which provides a mapping from the image space into a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain-specific knowledge. Based on this, this paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature-space representation of visual search through the use of TFM. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks. We also illustrate how this model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  13. Medical image diagnoses by artificial neural networks with image correlation, wavelet transform, simulated annealing

    NASA Astrophysics Data System (ADS)

    Szu, Harold H.

    1993-09-01

    Classical artificial neural networks (ANNs) and neurocomputing are reviewed for implementing real-time medical image diagnosis. An algorithm known as the self-reference matched filter, which emulates the spatio-temporal integration ability of the human visual system, might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple-modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood relationships of pixels; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for intra- and inter-cluster segregation, useful for top-down ANN designs.
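
    The "fast" annealing schedule is the distinctive ingredient of the Cauchy machine: temperature decays as T_k = T0 / (1 + k), rather than the Boltzmann T_k = T0 / log(1 + k), and Cauchy-distributed steps allow occasional long jumps. A toy sketch on an illustrative 1-D energy landscape (not a medical-imaging objective):

        import math
        import random

        def energy(x):
            return 0.1 * x * x + math.sin(3.0 * x)     # many local minima

        def cauchy_anneal(x0=4.0, t0=10.0, steps=5000):
            x, e = x0, energy(x0)
            for k in range(steps):
                t = t0 / (1.0 + k)                     # fast Cauchy schedule
                # Cauchy-distributed proposal: occasional long jumps.
                x_new = x + t * math.tan(math.pi * (random.random() - 0.5))
                e_new = energy(x_new)
                if e_new < e or random.random() < math.exp((e - e_new) / t):
                    x, e = x_new, e_new
            return x, e

        print(cauchy_anneal())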

  14. Attention capture without awareness in a non-spatial selection task.

    PubMed

    Oriet, Chris; Pandey, Mamata; Kawahara, Jun-Ichiro

    2017-02-01

    Distractors presented prior to a critical target in a rapid sequence of visually-presented items induce a lag-dependent deficit in target identification, particularly when the distractor shares a task-relevant feature of the target. Presumably, such capture of central attention is important for bringing a target into awareness. The results of the present investigation suggest that greater capture of attention by a distractor is not accompanied by greater awareness of it. Moreover, awareness tends to be limited to superficial characteristics of the target such as colour. The findings are interpreted within the context of a model that assumes sudden increases in arousal trigger selection of information for consolidation in working memory. In this conceptualization, prolonged analysis of distractor items sharing task-relevant features leads to larger target identification deficits (i.e., greater capture) but no increase in awareness. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Visual Saliency Detection Based on Multiscale Deep CNN Features.

    PubMed

    Guanbin Li; Yizhou Yu

    2016-11-01

    Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. The penultimate layer of our neural network has been confirmed to be a discriminative high-level feature vector for saliency detection, which we call deep contrast feature. To generate a more robust feature, we integrate handcrafted low-level features with our deep contrast feature. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving the state-of-the-art performance on all public benchmarks, improving the F-measure by 6.12% and 10%, respectively, on the DUT-OMRON data set and our new data set (HKU-IS), and lowering the mean absolute error by 9% and 35.3%, respectively, on these two data sets.

  16. Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory.

    PubMed

    Huynh, Duong L; Tripathy, Srimant P; Bedell, Harold E; Ögmen, Haluk

    2015-01-01

    Human memory is content addressable; i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue-report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.

  17. Remembering complex objects in visual working memory: do capacity limits restrict objects or features?

    PubMed

    Hardman, Kyle O; Cowan, Nelson

    2015-03-01

    Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli that possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  18. MobilomeFINDER: web-based tools for in silico and experimental discovery of bacterial genomic islands

    PubMed Central

    Ou, Hong-Yu; He, Xinyi; Harrison, Ewan M.; Kulasekara, Bridget R.; Thani, Ali Bin; Kadioglu, Aras; Lory, Stephen; Hinton, Jay C. D.; Barer, Michael R.; Rajakumar, Kumar

    2007-01-01

    MobilomeFINDER (http://mml.sjtu.edu.cn/MobilomeFINDER) is an interactive online tool that facilitates bacterial genomic island or ‘mobile genome’ (mobilome) discovery; it integrates the ArrayOme and tRNAcc software packages. ArrayOme utilizes a microarray-derived comparative genomic hybridization input data set to generate ‘inferred contigs’ produced by merging adjacent genes classified as ‘present’. Collectively these ‘fragments’ represent a hypothetical ‘microarray-visualized genome (MVG)’. ArrayOme permits recognition of discordances between physical genome and MVG sizes, thereby enabling identification of strains rich in microarray-elusive novel genes. Individual tRNAcc tools facilitate automated identification of genomic islands by comparative analysis of the contents and contexts of tRNA sites and other integration hotspots in closely related sequenced genomes. Accessory tools facilitate design of hotspot-flanking primers for in silico and/or wet-science-based interrogation of cognate loci in unsequenced strains and analysis of islands for features suggestive of foreign origins; island-specific and genome-contextual features are tabulated and represented in schematic and graphical forms. To date we have used MobilomeFINDER to analyse several Enterobacteriaceae, Pseudomonas aeruginosa and Streptococcus suis genomes. MobilomeFINDER enables high-throughput island identification and characterization through increased exploitation of emerging sequence data and PCR-based profiling of unsequenced test strains; subsequent targeted yeast recombination-based capture permits full-length sequencing and detailed functional studies of novel genomic islands. PMID:17537813
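
    The "inferred contig" construction is easy to state in code. The sketch below (a hypothetical data layout, not ArrayOme's implementation) merges runs of adjacent genes classified as 'present' and sums their lengths to approximate the microarray-visualized genome size.

```python
def inferred_contigs(genes):
    """Merge runs of adjacent genes classified as 'present' in CGH
    data into 'inferred contigs'; genes are (start, end, present)
    tuples in genome order."""
    contigs, current = [], None
    for start, end, present in genes:
        if present:
            current = (current[0], end) if current else (start, end)
        elif current:
            contigs.append(current)
            current = None
    if current:
        contigs.append(current)
    return contigs

demo = [(0, 900, True), (950, 2000, True), (2100, 3000, False), (3100, 4000, True)]
fragments = inferred_contigs(demo)
mvg_size = sum(end - start for start, end in fragments)  # hypothetical MVG size
print(fragments, mvg_size)  # [(0, 2000), (3100, 4000)] 2900
```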

  19. iFeature: a python package and web server for features extraction and selection from protein and peptide sequences.

    PubMed

    Chen, Zhen; Zhao, Pei; Li, Fuyi; Leier, André; Marquez-Lago, Tatiana T; Wang, Yanan; Webb, Geoffrey I; Smith, A Ian; Daly, Roger J; Chou, Kuo-Chen; Song, Jiangning

    2018-03-08

    Structural and physiochemical descriptors extracted from sequence data have been widely used to represent sequences and predict structural, functional, expression and interaction profiles of proteins and peptides as well as DNAs/RNAs. Here, we present iFeature, a versatile Python-based toolkit for generating various numerical feature representation schemes for both protein and peptide sequences. iFeature is capable of calculating and extracting a comprehensive spectrum of 18 major sequence encoding schemes that encompass 53 different types of feature descriptors. It also allows users to extract specific amino acid properties from the AAindex database. Furthermore, iFeature integrates 12 different types of commonly used feature clustering, selection, and dimensionality reduction algorithms, greatly facilitating training, analysis, and benchmarking of machine-learning models. The functionality of iFeature is made freely available via an online web server and a stand-alone toolkit. http://iFeature.erc.monash.edu/; https://github.com/Superzchen/iFeature/. jiangning.song@monash.edu; kcchou@gordonlifescience.org; roger.daly@monash.edu. Supplementary data are available at Bioinformatics online.
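
    To make the idea of a sequence descriptor concrete, here is an independent Python sketch of amino acid composition (AAC), one of the simplest descriptor types among the families such toolkits compute; it is illustrative only and does not reproduce iFeature's own code.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac(sequence):
    """Amino acid composition (AAC): the fraction of each of the 20
    standard residues in the sequence, giving a fixed-length
    20-dimensional feature vector."""
    counts = Counter(sequence.upper())
    n = len(sequence)
    return [counts.get(a, 0) / n for a in AMINO_ACIDS]

print(aac("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```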

  20. The Role of Attention in the Maintenance of Feature Bindings in Visual Short-term Memory

    ERIC Educational Resources Information Center

    Johnson, Jeffrey S.; Hollingworth, Andrew; Luck, Steven J.

    2008-01-01

    This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were…

  1. Is Posner's "beam" the same as Treisman's "glue"?: On the relation between visual orienting and feature integration theory.

    PubMed

    Briand, K A; Klein, R M

    1987-05-01

    In the present study we investigated whether the visually allocated "beam" studied by Posner and others is the same visual attentional resource that performs the role of feature integration in Treisman's model. Subjects were cued to attend to a certain spatial location by a visual cue, and performance at expected and unexpected stimulus locations was compared. Subjects searched for a target letter (R) with distractor letters that either could give rise to illusory conjunctions (PQ) or could not (PB). Results from three separate experiments showed that orienting attention in response to central cues (endogenous orienting) showed similar effects for both conjunction and feature search. However, when attention was oriented with peripheral visual cues (exogenous orienting), conjunction search showed larger effects of attention than did feature search. It is suggested that the attentional systems that are oriented in response to central and peripheral cues may not be the same and that only the latter performs a role in feature integration. Possibilities for future research are discussed.

  2. Compatibility of motion facilitates visuomotor synchronization.

    PubMed

    Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L

    2010-12-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.

  3. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    [No abstract available; the record preserves only fragments of the report's list of figures and tables, covering roadsign and industrial image sequences: selected feature error values, local search values, and redundant-feature analyses.]

  4. Unconscious analyses of visual scenes based on feature conjunctions.

    PubMed

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  5. Irrelevant reward and selection histories have different influences on task-relevant attentional selection.

    PubMed

    MacLean, Mary H; Giesbrecht, Barry

    2015-07-01

    Task-relevant and physically salient features influence visual selective attention. In the present study, we investigated the influence of task-irrelevant and physically nonsalient reward-associated features on visual selective attention. Two hypotheses were tested: One predicts that the effects of target-defining task-relevant and task-irrelevant features interact to modulate visual selection; the other predicts that visual selection is determined by the independent combination of relevant and irrelevant feature effects. These alternatives were tested using a visual search task that contained multiple targets, placing a high demand on the need for selectivity, and that was data-limited and required unspeeded responses, emphasizing early perceptual selection processes. One week prior to the visual search task, participants completed a training task in which they learned to associate particular colors with a specific reward value. In the search task, the reward-associated colors were presented surrounding targets and distractors, but were neither physically salient nor task-relevant. In two experiments, the irrelevant reward-associated features influenced performance, but only when they were presented in a task-relevant location. The costs induced by the irrelevant reward-associated features were greater when they oriented attention to a target than to a distractor. In a third experiment, we examined the effects of selection history in the absence of reward history and found that the interaction between task relevance and selection history differed, relative to when the features had previously been associated with reward. The results indicate that under conditions that demand highly efficient perceptual selection, physically nonsalient task-irrelevant and task-relevant factors interact to influence visual selective attention.

  6. Digital dissection system for medical school anatomy training

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Pawlina, Wojciech; Carmichael, Stephen W.; Korinek, Mark J.; Schroeder, Kathryn K.; Segovis, Colin M.; Robb, Richard A.

    2003-05-01

    As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. For each dissection procedure, the images are captured automatically, and then processed to generate a Quicktime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special '3-D glasses' that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into Quicktime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.

  7. Classification of visual and linguistic tasks using eye-movement features.

    PubMed

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
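
    The classification step can be sketched with scikit-learn, using synthetic data and assumed feature names (initiation time, fixation durations, attention-allocation entropy, and so on) in place of the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-trial eye-movement features: initiation time,
# mean fixation duration, entropy of attention allocation, etc.
X = rng.normal(size=(300, 7))
# Task labels: 0 = visual search, 1 = object naming, 2 = scene description
y = rng.integers(0, 3, size=300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# With random features this sits near chance (~33%); the authors
# report up to 88% with real eye-movement data.
print(cross_val_score(clf, X, y, cv=5).mean())
```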

  8. Bansho: Visually Sequencing Mathematical Ideas

    ERIC Educational Resources Information Center

    Kuehnert, Eloise R. A.; Eddy, Colleen M.; Miller, Daphyne; Pratt, Sarah S.; Senawongsa, Chanika

    2018-01-01

    In this article, the authors describe the Japanese term "bansho," which refers to the intentional use of board space for facilitating student learning. Bansho offers a structure for sequencing mathematics visually on the board. By purposefully organizing the board space alongside the lesson, teachers can provide students with a framework…

  9. Visual system evolution and the nature of the ancestral snake.

    PubMed

    Simões, B F; Sampaio, F L; Jared, C; Antoniazzi, M M; Loew, E R; Bowmaker, J K; Rodriguez, A; Hart, N S; Hunt, D M; Partridge, J C; Gower, D J

    2015-07-01

    The dominant hypothesis for the evolutionary origin of snakes from 'lizards' (non-snake squamates) is that stem snakes acquired many snake features while passing through a profound burrowing (fossorial) phase. To investigate this, we examined the visual pigments and their encoding opsin genes in a range of squamate reptiles, focusing on fossorial lizards and snakes. We sequenced opsin transcripts isolated from retinal cDNA and used microspectrophotometry to measure directly the spectral absorbance of the photoreceptor visual pigments in a subset of samples. In snakes, but not lizards, dedicated fossoriality (as in Scolecophidia and the alethinophidian Anilius scytale) corresponds with loss of all visual opsins other than RH1 (λmax 490-497 nm); all other snakes (including less dedicated burrowers) also have functional sws1 and lws opsin genes. In contrast, the retinas of all lizards sampled, even highly fossorial amphisbaenians with reduced eyes, express functional lws, sws1, sws2 and rh1 genes, and most also express rh2 (i.e. they express all five of the visual opsin genes present in the ancestral vertebrate). Our evidence of visual pigment complements suggests that the visual system of stem snakes was partly reduced, with two (RH2 and SWS2) of the ancestral vertebrate visual pigments being eliminated, but that this did not extend to the extreme additional loss of SWS1 and LWS that subsequently occurred (probably independently) in highly fossorial extant scolecophidians and A. scytale. We therefore consider it unlikely that the ancestral snake was as fossorial as extant scolecophidians, whether or not the latter are para- or monophyletic. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  10. Clinical presentations of X-linked retinoschisis in Taiwanese patients confirmed with genetic sequencing

    PubMed Central

    Liu, Laura; Chen, Ho-Min; Tsai, Shawn; Chang, Tsong-Chi; Tsai, Tzu-Hsun; Yang, Chung-May; Chao, An-Ning; Chen, Kuan-Jen; Kao, Ling-Yuh; Yeung, Ling; Yeh, Lung-Kun; Hwang, Yih-Shiou; Wu, Wei-Chi; Lai, Chi-Chun

    2015-01-01

    Purpose: To investigate the clinical characteristics of X-linked retinoschisis (XLRS) and identify genetic mutations in Taiwanese patients with XLRS. Methods: This study included 23 affected males from 16 families with XLRS. Fundus photography, spectral domain optical coherent tomography (SD-OCT), fundus autofluorescence (FAF), and full-field electroretinograms (ERGs) were performed. The coding regions of the RS1 gene that encodes retinoschisin were sequenced. Results: The median age at diagnosis was 18 years (range 4–58 years). The best-corrected visual acuity ranged from no light perception to 20/25. The typical spoke-wheel pattern in the macula was present in 61% of the patients (14/23) while peripheral retinoschisis was present in 43% of the patients (10/23). Four eyes presented with vitreous hemorrhage, and two eyes presented with leukocoria that mimics Coats’ disease. Macular schisis was identified with SD-OCT in 82% of the eyes (31/38) while foveal atrophy was present in 18% of the eyes (7/38). Concentric area of high intensity was the most common FAF abnormality observed. Seven out of 12 patients (58%) showed electronegative ERG findings. Sequencing of the RS1 gene identified nine mutations, six of which were novel. The mutations are all located in exons 4–6, including six missense mutations, two nonsense mutations, and one deletion-caused frameshift mutation. Conclusions: XLRS is a clinically heterogeneous disease with profound phenotypic inter- and intrafamiliar variability. Genetic sequencing is valuable as it allows a definite diagnosis of XLRS to be made without the classical clinical features and ERG findings. This study showed the variety of clinical features of XLRS and reported novel mutations. PMID:25999676

  11. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
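
    The encoding-model logic, fitting a linear map from network features to measured cortical responses and testing it on held-out data, can be sketched with ridge regression; the synthetic features and responses below stand in for the paper's residual-network activations and fMRI measurements.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# Synthetic stand-ins: deep-network feature vectors for 1000 movie
# frames, and 50 voxel responses generated from a hidden linear map.
features = rng.normal(size=(1000, 128))
responses = features @ rng.normal(size=(128, 50)) + rng.normal(scale=0.5, size=(1000, 50))

# Fit the encoding model on 800 frames, evaluate on the held-out 200
model = Ridge(alpha=10.0).fit(features[:800], responses[:800])
pred = model.predict(features[800:])
r = [np.corrcoef(pred[:, v], responses[800:, v])[0, 1] for v in range(50)]
print(np.mean(r))  # mean prediction accuracy across voxels
```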

  12. Infants' statistical learning: 2- and 5-month-olds' segmentation of continuous visual sequences.

    PubMed

    Slone, Lauren Krogh; Johnson, Scott P

    2015-05-01

    Past research suggests that infants have powerful statistical learning abilities; however, studies of infants' visual statistical learning offer differing accounts of the developmental trajectory of and constraints on this learning. To elucidate this issue, the current study tested the hypothesis that young infants' segmentation of visual sequences depends on redundant statistical cues to segmentation. A sample of 20 2-month-olds and 20 5-month-olds observed a continuous sequence of looming shapes in which unit boundaries were defined by both transitional probability and co-occurrence frequency. Following habituation, only 5-month-olds showed evidence of statistically segmenting the sequence, looking longer to a statistically improbable shape pair than to a probable pair. These results reaffirm the power of statistical learning in infants as young as 5 months but also suggest considerable development of statistical segmentation ability between 2 and 5 months of age. Moreover, the results do not support the idea that infants' ability to segment visual sequences based on transitional probabilities and/or co-occurrence frequencies is functional at the onset of visual experience, as has been suggested previously. Rather, this type of statistical segmentation appears to be constrained by the developmental state of the learner. Factors contributing to the development of statistical segmentation ability during early infancy, including memory and attention, are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
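
    The key statistic here, transitional probability, is simply P(B|A) = count(A followed by B) / count(A). A toy computation over a made-up shape sequence:

```python
from collections import Counter

sequence = ["circle", "square", "star", "circle", "square",
            "cross", "star", "circle", "square", "star"]

pair_counts = Counter(zip(sequence, sequence[1:]))
first_counts = Counter(sequence[:-1])

def transitional_probability(a, b):
    """P(b | a): how reliably shape a predicts shape b."""
    return pair_counts[(a, b)] / first_counts[a]

print(transitional_probability("circle", "square"))  # 1.0: within-unit transition
print(transitional_probability("square", "star"))    # ~0.67: weaker, boundary-like
```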

  13. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  14. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negative, emotionally charged pictures was examined. Performance was measured under rapid, serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Shortly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. Either the target depicted an animal expressing an emotion distinct from the other images, or the sequence contained only images depicting the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced and the same emotional distinctiveness effect obtained. In ruling out possible visual and attentional accounts of the data, an informal dual route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  15. Memory as embodiment: The case of modality and serial short-term memory.

    PubMed

    Macken, Bill; Taylor, John C; Kozlov, Michail D; Hughes, Robert W; Jones, Dylan M

    2016-10-01

    Classical explanations for the modality effect (superior short-term serial recall of auditory compared to visual sequences) typically appeal to privileged processing of information derived from auditory sources. Here we critically appraise such accounts, and re-evaluate the nature of the canonical empirical phenomena that have motivated them. Three experiments show that the standard account of modality in memory is untenable, since auditory superiority in recency is often accompanied by visual superiority in mid-list serial positions. We explain this simultaneous auditory and visual superiority by reference to the way in which perceptual objects are formed in the two modalities and how those objects are mapped to speech motor forms to support sequence maintenance and reproduction. Specifically, stronger obligatory object formation operating in the standard auditory form of sequence presentation compared to that for visual sequences leads both to enhanced addressability of information at the object boundaries and reduced addressability for that in the interior. Because standard visual presentation does not lead to such object formation, such sequences do not show the boundary advantage observed for auditory presentation, but neither do they suffer loss of addressability associated with object information, thereby affording more ready mapping of that information into a rehearsal cohort to support recall. We show that a range of factors that impede this perceptual-motor mapping eliminate visual superiority while leaving auditory superiority unaffected. We make a general case for viewing short-term memory as an embodied, perceptual-motor process. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  17. Qualification of security printing features

    NASA Astrophysics Data System (ADS)

    Simske, Steven J.; Aronoff, Jason S.; Arnabat, Jordi

    2006-02-01

    This paper describes the statistical and hardware processes involved in qualifying two related printing features for their deployment in product (e.g. document and package) security. The first is a multi-colored tiling feature that can also be combined with microtext to provide additional forms of security protection. The color information is authenticated automatically with a variety of handheld, desktop and production scanners. The microtext is authenticated either following magnification or manually by a field inspector. The second security feature can also be tile-based. It involves the use of two inks that provide the same visual color, but differ in their transparency to infrared (IR) wavelengths. One of the inks is effectively transparent to IR wavelengths, allowing emitted IR light to pass through. The other ink is effectively opaque to IR wavelengths. These inks allow the printing of a seemingly uniform, or spot, color over a (truly) uniform IR emitting ink layer. The combination converts a uniform covert ink and a spot color to a variable data region capable of encoding identification sequences with high density. Also, it allows the extension of variable data printing for security to ostensibly static printed regions, affording greater security protection while meeting branding and marketing specifications.
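
    The infrared data channel can be illustrated with a toy encoder and decoder: tiles printed in the IR-transparent ink read as bright in an infrared image, tiles in the IR-opaque ink read as dark, and both look identical in visible light. The following sketch assumes an idealized IR capture and is purely illustrative.

```python
import numpy as np

def encode_tiles(bits, side):
    """1 -> IR-transparent ink (emission passes through, reads bright);
    0 -> IR-opaque ink (reads dark). Both inks match in visible light,
    so the grid appears to be a uniform spot color."""
    return np.array(bits, dtype=np.uint8).reshape(side, side)

def decode_tiles(ir_image, side):
    """Recover the bits by thresholding mean tile brightness."""
    h, w = ir_image.shape
    tiles = ir_image.reshape(side, h // side, side, w // side).mean(axis=(1, 3))
    return (tiles > tiles.mean()).astype(np.uint8).ravel().tolist()

bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
grid = encode_tiles(bits, side=4)
# Simulate an IR photo: bright where transparent, dark where opaque, plus noise
ir = np.kron(grid * 200.0 + 20.0, np.ones((8, 8)))
ir += np.random.default_rng(2).normal(0.0, 5.0, ir.shape)
assert decode_tiles(ir, side=4) == bits
```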

  18. Differences in Visual-Spatial Input May Underlie Different Compression Properties of Firing Fields for Grid Cell Modules in Medial Entorhinal Cortex

    PubMed Central

    Raudies, Florian; Hasselmo, Michael E.

    2015-01-01

    Firing fields of grid cells in medial entorhinal cortex show compression or expansion after manipulations of the location of environmental barriers. This compression or expansion could be selective for individual grid cell modules with particular properties of spatial scaling. We present a model for differences in the response of modules to barrier location that arise from different mechanisms for the influence of visual features on the computation of location that drives grid cell firing patterns. These differences could arise from differences in the position of visual features within the visual field. When location was computed from the movement of visual features on the ground plane (optic flow) in the ventral visual field, this resulted in grid cell spatial firing that was not sensitive to barrier location in modules modeled with small spacing between grid cell firing fields. In contrast, when location was computed from static visual features on walls of barriers, i.e. in the more dorsal visual field, this resulted in grid cell spatial firing that compressed or expanded based on the barrier locations in modules modeled with large spacing between grid cell firing fields. This indicates that different grid cell modules might have differential properties for computing location based on visual cues, or the spatial radius of sensitivity to visual cues might differ between modules. PMID:26584432

  19. Feature-based memory-driven attentional capture: visual working memory content affects visual attention.

    PubMed

    Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan

    2006-10-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.

  20. Hierarchical acquisition of visual specificity in spatial contextual cueing.

    PubMed

    Lie, Kin-Pou

    2015-01-01

    Spatial contextual cueing refers to the improvement in visual search performance that occurs when invariant associations between target locations and distractor spatial configurations are learned incidentally. Using the instance theory of automatization and the reverse hierarchy theory of visual perceptual learning, this study explores the acquisition of visual specificity in spatial contextual cueing. Two experiments in which detailed visual features were irrelevant for distinguishing between spatial contexts found that spatial contextual cueing was visually generic in difficult trials when the trials were not preceded by easy trials (Experiment 1) but that spatial contextual cueing progressed to visual specificity when difficult trials were preceded by easy trials (Experiment 2). These findings support reverse hierarchy theory, which predicts that even when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing can progress to visual specificity if the stimuli remain constant, the task is difficult, and difficult trials are preceded by easy trials. However, these findings are inconsistent with instance theory, which predicts that when detailed visual features are irrelevant for distinguishing between spatial contexts, spatial contextual cueing will not progress to visual specificity. This study concludes that the acquisition of visual specificity in spatial contextual cueing is more plausibly hierarchical, rather than instance-based.

  1. Event-Related Potentials Elicited by Pre-Attentive Emotional Changes in Temporal Context

    PubMed Central

    Fujimura, Tomomi; Okanoya, Kazuo

    2013-01-01

    The ability to detect emotional change in the environment is essential for adaptive behavior. The current study investigated whether event-related potentials (ERPs) can reflect emotional change in a visual sequence. To assess pre-attentive processing, we examined visual mismatch negativity (vMMN): the negative potentials elicited by a deviant (infrequent) stimulus embedded in a sequence of standard (frequent) stimuli. Participants in two experiments pre-attentively viewed visual sequences of Japanese kanji with different emotional connotations while ERPs were recorded. The visual sequence in Experiment 1 consisted of neutral standards and two types of emotional deviants with a strong and weak intensity. Although the results indicated that strongly emotional deviants elicited more occipital negativity than neutral standards, it was unclear whether these negativities were derived from emotional deviation in the sequence or from the emotional significance of the deviants themselves. In Experiment 2, the two identical emotional deviants were presented against different emotional standards. One type of deviants was emotionally incongruent with the standard and the other type of deviants was emotionally congruent with the standard. The results indicated that occipital negativities elicited by deviants resulted from perceptual changes in a visual sequence at a latency of 100–200 ms and from emotional changes at latencies of 200–260 ms. Contrary to the results of the ERP experiment, reaction times to deviants showed no effect of emotional context; negative stimuli were consistently detected more rapidly than were positive stimuli. Taken together, the results suggest that brain signals can reflect emotional change in a temporal context. PMID:23671693

  2. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  3. Human Occipital and Parietal GABA Selectively Influence Visual Perception of Orientation and Size.

    PubMed

    Song, Chen; Sandberg, Kristian; Andersen, Lau Møller; Blicher, Jakob Udby; Rees, Geraint

    2017-09-13

    GABA is the primary inhibitory neurotransmitter in human brain. The level of GABA varies substantially across individuals, and this variability is associated with interindividual differences in visual perception. However, it remains unclear whether the association between GABA level and visual perception reflects a general influence of visual inhibition or whether the GABA levels of different cortical regions selectively influence perception of different visual features. To address this, we studied how the GABA levels of parietal and occipital cortices related to interindividual differences in size, orientation, and brightness perception. We used visual contextual illusion as a perceptual assay since the illusion dissociates perceptual content from stimulus content and the magnitude of the illusion reflects the effect of visual inhibition. Across individuals, we observed selective correlations between the level of GABA and the magnitude of contextual illusion. Specifically, parietal GABA level correlated with size illusion magnitude but not with orientation or brightness illusion magnitude; in contrast, occipital GABA level correlated with orientation illusion magnitude but not with size or brightness illusion magnitude. Our findings reveal a region- and feature-dependent influence of GABA level on human visual perception. Parietal and occipital cortices contain, respectively, topographic maps of size and orientation preference in which neural responses to stimulus sizes and stimulus orientations are modulated by intraregional lateral connections. We propose that these lateral connections may underlie the selective influence of GABA on visual perception. SIGNIFICANCE STATEMENT GABA, the primary inhibitory neurotransmitter in human visual system, varies substantially across individuals. This interindividual variability in GABA level is linked to interindividual differences in many aspects of visual perception. However, the widespread influence of GABA raises the question of whether interindividual variability in GABA reflects an overall variability in visual inhibition and has a general influence on visual perception or whether the GABA levels of different cortical regions have selective influence on perception of different visual features. Here we report a region- and feature-dependent influence of GABA level on human visual perception. Our findings suggest that GABA level of a cortical region selectively influences perception of visual features that are topographically mapped in this region through intraregional lateral connections. Copyright © 2017 Song, Sandberg et al.

  4. SequenceCEROSENE: a computational method and web server to visualize spatial residue neighborhoods at the sequence level.

    PubMed

    Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk

    2016-01-01

    To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to inspect these properties, but with the increasing number of available structures in databases or structure models produced by automated modeling frameworks this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures is presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question. Color encoded sequences generated by SequenceCEROSENE can help researchers quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), thus supporting them in the initial phase of structure-based studies. In this respect, the web server can be a valuable tool, as users are allowed to process multiple structures, quickly switch between results, and interact with generated visualizations in an intuitive manner. The SequenceCEROSENE web server is available at https://biosciences.hs-mittweida.de/seqcerosene.
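
    A drastically simplified version of the idea, coloring each residue by normalizing its 3-D coordinates into RGB space rather than by the tool's actual spatial neighborhood embedding, could look like this:

```python
import numpy as np

def color_by_position(coords):
    """Toy stand-in for structure-aware sequence coloring: map each
    residue's C-alpha coordinates to an RGB triple by normalizing
    x, y, z into [0, 255]. Residues close in space receive similar
    colors, which is the property the real method exploits."""
    c = np.asarray(coords, dtype=float)
    lo, hi = c.min(axis=0), c.max(axis=0)
    rgb = (c - lo) / np.where(hi > lo, hi - lo, 1.0) * 255
    return rgb.astype(int)

# Hypothetical C-alpha coordinates for a 4-residue chain
print(color_by_position([(0.0, 1.2, 3.0), (1.5, 1.0, 2.8),
                         (9.9, 8.7, 0.2), (10.2, 9.1, 0.0)]))
```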

  5. Object Recognition in Mental Representations: Directions for Exploring Diagnostic Features through Visual Mental Imagery.

    PubMed

    Roldan, Stephanie M

    2017-01-01

    One of the fundamental goals of object recognition research is to understand how a cognitive representation produced from the output of filtered and transformed sensory information facilitates efficient viewer behavior. Given that mental imagery strongly resembles perceptual processes in both cortical regions and subjective visual qualities, it is reasonable to question whether mental imagery facilitates cognition in a manner similar to that of perceptual viewing: via the detection and recognition of distinguishing features. Categorizing the feature content of mental imagery holds potential as a reverse pathway by which to identify the components of a visual stimulus which are most critical for the creation and retrieval of a visual representation. This review will examine the likelihood that the information represented in visual mental imagery reflects distinctive object features thought to facilitate efficient object categorization and recognition during perceptual viewing. If it is the case that these representational features resemble their sensory counterparts in both spatial and semantic qualities, they may well be accessible through mental imagery as evaluated through current investigative techniques. In this review, methods applied to mental imagery research and their findings are reviewed and evaluated for their efficiency in accessing internal representations, and implications for identifying diagnostic features are discussed. An argument is made for the benefits of combining mental imagery assessment methods with diagnostic feature research to advance the understanding of visual perceptive processes, with suggestions for avenues of future investigation.

  6. A novel visual saliency analysis model based on dynamic multiple feature combination strategy

    NASA Astrophysics Data System (ADS)

    Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao

    2017-06-01

    The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze object saliency in an image, saving time and avoiding unnecessary computation. In this paper, a novel visual saliency analysis model based on a dynamic multiple-feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color, and orientation features using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of each feature map to the saliency map according to the area of its salient regions and their average intensity, and attach different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region-growing method to perform the evaluation. Experimental results show that the proposed model can not only achieve higher accuracy in saliency map computation compared with other traditional saliency analysis models, but also extract salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
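
    The center-surround step can be approximated as the difference of two Gaussian blurs at different scales. The snippet below assumes OpenCV and a grayscale input and is not the authors' code:

```python
import cv2
import numpy as np

def center_surround(image, sigma_center=2, sigma_surround=8):
    """Approximate a center-surround intensity feature map as the
    absolute difference between a fine-scale and a coarse-scale
    Gaussian blur of the image."""
    img = image.astype(np.float32)
    center = cv2.GaussianBlur(img, (0, 0), sigma_center)
    surround = cv2.GaussianBlur(img, (0, 0), sigma_surround)
    fmap = np.abs(center - surround)
    return cv2.normalize(fmap, None, 0.0, 1.0, cv2.NORM_MINMAX)

# Usage: fmap = center_surround(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
```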

  7. Task-relevant perceptual features can define categories in visual memory too.

    PubMed

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  8. Collinearity Impairs Local Element Visual Search

    ERIC Educational Resources Information Center

    Jingling, Li; Tseng, Chia-Huei

    2013-01-01

    In visual searches, stimuli following the law of good continuity attract attention to the global structure and receive attentional priority. Also, targets that have unique features are of high feature contrast and capture attention in visual search. We report on a salient global structure combined with a high orientation contrast to the…

  11. Interactive Visualization of Assessment Data: The Software Package Mondrian

    ERIC Educational Resources Information Center

    Unlu, Ali; Sargin, Anatol

    2009-01-01

    Mondrian is state-of-the-art statistical data visualization software featuring modern interactive visualization techniques for a wide range of data types. This article reviews the capabilities, functionality, and interactive properties of this software package. Key features of Mondrian are illustrated with data from the Programme for International…

  12. Disturbance of visual search by stimulating to posterior parietal cortex in the brain using transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Iramina, Keiji; Ge, Sheng; Hyodo, Akira; Hayami, Takehito; Ueno, Shoogo

    2009-04-01

    In this study, we applied transcranial magnetic stimulation (TMS) to investigate the temporal aspects of the functional processing of visual attention. Although the right posterior parietal cortex (PPC) is known to play a role in certain visual search tasks, little is known about the temporal aspects of this area. Three visual search tasks of differing difficulty were carried out: an "easy feature task," a "hard feature task," and a "conjunction task." To investigate the temporal involvement of the PPC in visual search, we applied various stimulus onset asynchronies (SOAs) and measured the reaction time of the visual search. Magnetic stimulation was applied to the right or left PPC with a figure-eight coil. The results show that reaction times in the hard feature task were longer than those in the easy feature task. At SOA = 150 ms, there was a significant increase in target-present reaction time when TMS pulses were applied, compared with the no-TMS condition. We conclude that the right PPC is involved in visual search at about 150 ms after visual stimulus presentation: magnetic stimulation of the right PPC disturbed visual search processing, whereas stimulation of the left PPC had no effect.

  13. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
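
    The maximum-of-outputs decision stage is straightforward to simulate. The Monte Carlo sketch below assumes two feature dimensions with equal d' offsets and an equal-weight sum across dimensions (both assumptions, not the paper's fitted values); it reproduces the qualitative decline of accuracy with set size without any serial, capacity-limited binding stage.

      import numpy as np

      rng = np.random.default_rng(0)

      def search_accuracy(set_size, dprimes, n_trials=20000):
          # Each display element yields a noisy value on every feature
          # dimension; the observer sums across dimensions and picks the
          # element with the maximum combined value.
          n_dims = len(dprimes)
          x = rng.standard_normal((n_trials, set_size, n_dims))  # distractors
          x[:, 0, :] += np.asarray(dprimes)   # element 0 is the target
          combined = x.sum(axis=2)
          return np.mean(combined.argmax(axis=1) == 0)  # P(max at target)

      for n in (2, 4, 8, 16):   # accuracy drops as set size grows
          print(n, search_accuracy(n, dprimes=(1.0, 1.0)))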

  14. The advanced glaucoma intervention study, 6: effect of cataract on visual field and visual acuity. The AGIS Investigators.

    PubMed

    2000-12-01

    To investigate the effect of cataract on visual function and the role of cataract in explaining a race-treatment interaction in outcomes of glaucoma surgery. The Advanced Glaucoma Intervention Study (AGIS) enrolled 332 black patients (451 eyes) and 249 white patients (325 eyes) with advanced glaucoma. Eyes were randomly assigned to an argon laser trabeculoplasty (ALT)-trabeculectomy-trabeculectomy sequence or a trabeculectomy-ALT-trabeculectomy sequence. From the AGIS experience with cataract surgery during follow-up, we estimated the expected change in visual function scores from before cataract surgery to after cataract surgery. Then, for eyes with cataract not removed, we used these estimates of expected change to adjust visual function scores for the presumed effects of cataract. In turn, we used the adjusted scores to obtain cataract-adjusted main outcome measures. Average percent of eyes with decrease of visual field (APDVF) and average percent of eyes with decrease of visual acuity (APDVA). Within the 2 months before cataract surgery, visual acuity was better in eyes of white patients than of black patients by an average of approximately 2 lines on the visual acuity test chart. Cataract surgery improved visual acuity and visual field defect scores, with the amounts of improvement greater when preoperative visual acuity was lower. Adjustments for cataract brought about the following relative reductions: for APDVF, a relative reduction of 5% to 11% in black patients and 9% to 11% in white patients; for APDVA, a relative reduction of 45% to 49% in black patients and 31% to 38% in white patients; and for the APDVF and APDVA race-treatment interactions, relative reductions of 25% and 45%, respectively. On average, visual function scores improved after cataract surgery. The findings of reduced race-treatment interactions after adjustment for cataract do not alter our earlier conclusion that the AGIS 7-year results support use of the ALT-trabeculectomy-trabeculectomy sequence for black patients and of the trabeculectomy-ALT-trabeculectomy sequence for white patients without life-threatening health problems. The choice of treatment should take into account individual patient characteristics and needs.

  15. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    PubMed Central

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606

  16. Dynamic Integration of Task-Relevant Visual Features in Posterior Parietal Cortex

    PubMed Central

    Freedman, David J.

    2014-01-01

    Summary The primate visual system consists of multiple hierarchically organized cortical areas, each specialized for processing distinct aspects of the visual scene. For example, color and form are encoded in ventral pathway areas such as V4 and inferior temporal cortex, while motion is preferentially processed in dorsal pathway areas such as the middle temporal area. Such representations often need to be integrated perceptually to solve tasks which depend on multiple features. We tested the hypothesis that the lateral intraparietal area (LIP) integrates disparate task-relevant visual features by recording from LIP neurons in monkeys trained to identify target stimuli composed of conjunctions of color and motion features. We show that LIP neurons exhibit integrative representations of both color and motion features when they are task relevant, and task-dependent shifts of both direction and color tuning. This suggests that LIP plays a role in flexibly integrating task-relevant sensory signals. PMID:25199703

  17. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature-based and shape-based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature-based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement, in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape-based method. The shape-based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature-based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.
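
    The registration step amounts to recovering a rigid 6-DOF transform between corresponding 3D point sets. As an assumed stand-in for the authors' registration algorithm, the standard SVD-based (Kabsch) solution looks like this:

      import numpy as np

      def rigid_transform(P, Q):
          # Least-squares rigid motion (R, t) mapping point set P onto Q,
          # assuming known row-wise correspondences between P and Q.
          cP, cQ = P.mean(axis=0), Q.mean(axis=0)
          H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          t = cQ - R @ cP
          return R, t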

  18. Visual Working Memory Is Independent of the Cortical Spacing Between Memoranda.

    PubMed

    Harrison, William J; Bays, Paul M

    2018-03-21

    The sensory recruitment hypothesis states that visual short-term memory is maintained in the same visual cortical areas that initially encode a stimulus' features. Although it is well established that the distance between features in visual cortex determines their visibility, a limitation known as crowding, it is unknown whether short-term memory is similarly constrained by the cortical spacing of memory items. Here, we investigated whether the cortical spacing between sequentially presented memoranda affects the fidelity of memory in humans (of both sexes). In a first experiment, we varied cortical spacing by taking advantage of the log-scaling of visual cortex with eccentricity, presenting memoranda in peripheral vision sequentially along either the radial or tangential visual axis with respect to the fovea. In a second experiment, we presented memoranda sequentially either within or beyond the critical spacing of visual crowding, a distance within which visual features cannot be perceptually distinguished due to their nearby cortical representations. In both experiments and across multiple measures, we found strong evidence that the ability to maintain visual features in memory is unaffected by cortical spacing. These results indicate that the neural architecture underpinning working memory has properties inconsistent with the known behavior of sensory neurons in visual cortex. Instead, the dissociation between perceptual and memory representations supports a role of higher cortical areas such as posterior parietal or prefrontal regions or may involve an as yet unspecified mechanism in visual cortex in which stimulus features are bound to their temporal order. SIGNIFICANCE STATEMENT Although much is known about the resolution with which we can remember visual objects, the cortical representation of items held in short-term memory remains contentious. A popular hypothesis suggests that memory of visual features is maintained via the recruitment of the same neural architecture in sensory cortex that encodes stimuli. We investigated this claim by manipulating the spacing in visual cortex between sequentially presented memoranda such that some items shared cortical representations more than others while preventing perceptual interference between stimuli. We found clear evidence that short-term memory is independent of the intracortical spacing of memoranda, revealing a dissociation between perceptual and memory representations. Our data indicate that working memory relies on different neural mechanisms from sensory perception. Copyright © 2018 Harrison and Bays.

  19. Genonets server-a web server for the construction, analysis and visualization of genotype networks.

    PubMed

    Khalid, Fahad; Aguilar-Rodríguez, José; Wagner, Andreas; Payne, Joshua L

    2016-07-08

    A genotype network is a graph in which vertices represent genotypes that have the same phenotype. Edges connect vertices if their corresponding genotypes differ in a single small mutation. Genotype networks are used to study the organization of genotype spaces. They have shed light on the relationship between robustness and evolvability in biological systems as different as RNA macromolecules and transcriptional regulatory circuits. Despite the importance of genotype networks, no tool exists for their automatic construction, analysis and visualization. Here we fill this gap by presenting the Genonets Server, a tool that provides the following features: (i) the construction of genotype networks for categorical and univariate phenotypes from DNA, RNA, amino acid or binary sequences; (ii) analyses of genotype network topology and how it relates to robustness and evolvability, as well as analyses of genotype network topography and how it relates to the navigability of a genotype network via mutation and natural selection; (iii) multiple interactive visualizations that facilitate exploratory research and education. The Genonets Server is freely available at http://ieu-genonets.uzh.ch. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
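
    Under the definition above, constructing a genotype network reduces to connecting same-phenotype genotypes at Hamming distance one. A minimal sketch with a hypothetical genotype set follows; this is not the Genonets implementation.

      import networkx as nx

      def one_mutation_apart(a, b):
          return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

      def genotype_network(genotypes):
          # Vertices: genotypes assumed to share one phenotype;
          # edges: pairs differing by a single point mutation.
          g = nx.Graph()
          g.add_nodes_from(genotypes)
          for i, a in enumerate(genotypes):
              for b in genotypes[i + 1:]:
                  if one_mutation_apart(a, b):
                      g.add_edge(a, b)
          return g

      net = genotype_network(["ACGT", "ACGA", "ACTA", "TCGT"])
      print(net.number_of_edges())   # 3: ACGT-ACGA, ACGA-ACTA, ACGT-TCGT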

  20. Mid-IR Spectra of Herbig Ae/Be Stars

    NASA Technical Reports Server (NTRS)

    Wooden, Diane; Witteborn, Fred C. (Technical Monitor)

    1997-01-01

    Herbig Ae/Be stars are intermediate-mass pre-main sequence stars, the higher-mass analogues of the T Tauri stars. Because of their higher mass, they are expected to form more rapidly than the T Tauri stars. Whether the Herbig Ae/Be stars accrete only from collapsing infalling envelopes or through geometrically flattened viscous accretion disks is currently debated. When the Herbig Ae/Be stars reach the main sequence they form a class called Vega-like stars, which are known from their IR excesses to have debris disks, such as the famous beta Pictoris. The evolutionary scenario connecting the pre-main sequence Herbig Ae/Be stars and the main sequence Vega-like stars has not yet been established, and it bears on the possibility of Habitable Zone planets around the A stars. Photometric studies of Herbig Ae/Be stars have revealed that most are variable in the optical, and a subset of stars show non-periodic drops of about 2 magnitudes. These drops in visible light are accompanied by changes in color: at first the starlight becomes reddened, and then it becomes bluer, while the polarization rises from less than 0.1% to roughly 1% during these minima. The theory postulated by V. Grinin is that, for systems viewed edge-on, large cometary bodies on highly eccentric orbits occult the star on their way to being sublimed. This is one of several controversial theories about the nature of Herbig Ae/Be stars. A 5-year mid-IR spectrophotometric monitoring campaign was begun by Wooden and Butner in 1992 to look for correlations between variations in visible photometry and mid-IR dust emission features. Generally, the approximately 20 stars observed with the NASA Ames HIFOGS spectrometer have been steady at 10 microns. A handful, however, have shown variable mid-IR spectra: 2 show variations in both the continuum and the features, anti-correlated with visual photometry, and 3 show variations in the emission features only, while the continuum level remains unchanged. The first 2 stars probably have reprocessing envelopes. The other 3 stars give important clues to the controversy over the geometry of the gas and dust around these pre-main sequence stars: the steady underlying 10-micron continuum and variable features indicate that an optically thick continuum, probably arising from an accretion disk, is decoupled from the optically thin emission features, which may arise in a disk atmosphere. Bernadette Rodgers has joined this monitoring campaign in the near-IR using GRIMII, with the goal of detecting variations in the hot dust continuum and the gas density in the dense accretion region close to these stars.

  1. Visual feature-tolerance in the reading network.

    PubMed

    Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A

    2011-09-08

    A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.

    PubMed

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate DNA sequences, or users can upload sequences of their own interest in raw format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages) and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. http://www.cemb.edu.pk/sw.html RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language.
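
    Two of the routine analyses listed above, reverse-complement generation and base-content percentages, reduce to a few lines; the sketch below is illustrative and is not the RDNAnalyzer source.

      from collections import Counter

      COMPLEMENT = str.maketrans("ACGT", "TGCA")

      def reverse_complement(seq):
          return seq.translate(COMPLEMENT)[::-1]

      def base_content(seq):
          counts = Counter(seq)
          return {b: 100.0 * counts[b] / len(seq) for b in "ACGT"}

      seq = "ATGCGTAACT"
      print(reverse_complement(seq))   # AGTTACGCAT
      print(base_content(seq))         # percentage of each base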

  3. repDNA: a Python package to generate various modes of feature vectors for DNA sequences by incorporating user-defined physicochemical properties and sequence-order effects.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2015-04-15

    In order to develop powerful computational predictors for identifying the biological features or attributes of DNAs, one of the most challenging problems is to find a suitable approach to effectively represent the DNA sequences. To facilitate the studies of DNAs and nucleotides, we developed a Python package called representations of DNAs (repDNA) for generating the widely used features reflecting the physicochemical properties and sequence-order effects of DNAs and nucleotides. There are three feature groups composed of 15 features. The first group calculates three nucleic acid composition features describing the local sequence information by means of k-mers; the second group calculates six autocorrelation features describing the level of correlation between two oligonucleotides along a DNA sequence in terms of their specific physicochemical properties; the third group calculates six pseudo nucleotide composition features, which can be used to represent a DNA sequence with a discrete model or vector yet still keep considerable sequence-order information via the physicochemical properties of its constituent oligonucleotides. In addition, these features can be easily calculated for both built-in and user-defined properties using repDNA. The repDNA Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repDNA/. bliu@insun.hit.edu.cn or kcchou@gordonlifescience.org Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
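
    As a sketch of the first feature group, a normalized k-mer composition vector can be computed as follows; the fixed lexicographic ordering over all 4**k k-mers is an implementation choice for illustration, not necessarily repDNA's.

      from itertools import product

      def kmer_composition(seq, k=2):
          # Frequency of every possible k-mer, in a fixed lexicographic
          # order, so sequences of different lengths map to vectors of the
          # same dimension (4**k).
          kmers = ["".join(p) for p in product("ACGT", repeat=k)]
          counts = {km: 0 for km in kmers}
          for i in range(len(seq) - k + 1):
              counts[seq[i:i + k]] += 1
          n = max(len(seq) - k + 1, 1)
          return [counts[km] / n for km in kmers]

      vec = kmer_composition("ACGTACGTAC")
      print(len(vec), sum(vec))   # 16 dimensions; frequencies sum to 1.0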

  4. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist of using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected from the available image measurements so that the desired degrees of freedom can be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous: both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. The selected visual features determine the behavior of the system, with particular properties of stability, robustness to noise or calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as recent technical advances obtained in the field by the Lagadic group at IRISA/INRIA Rennes. Several application results will also be described.
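
    As a worked equation, the classical image-based control law from this literature makes the last steps concrete; s is the current feature vector, s* its desired value, L_s the interaction matrix (hat denotes an estimate, + the Moore-Penrose pseudo-inverse), and lambda a positive gain.

      % Feature kinematics and the classical control law: the commanded
      % camera velocity v drives the error e = s - s* to zero exponentially
      % at a rate set by lambda, when the estimate of L_s is accurate.
      \dot{s} = L_s \, v
      \qquad\Longrightarrow\qquad
      v = -\lambda \, \hat{L}_s^{+} \, (s - s^{*})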

  5. CodonLogo: a sequence logo-based viewer for codon patterns.

    PubMed

    Sharma, Virag; Murphy, David P; Provan, Gregory; Baranov, Pavel V

    2012-07-15

    Conserved patterns across a multiple sequence alignment can be visualized by generating sequence logos. Sequence logos show each column in the alignment as stacks of symbol(s) where the height of a stack is proportional to its informational content, whereas the height of each symbol within the stack is proportional to its frequency in the column. Sequence logos use symbols of either nucleotide or amino acid alphabets. However, certain regulatory signals in messenger RNA (mRNA) act as combinations of codons. Yet no tool is available for visualization of conserved codon patterns. We present the first application which allows visualization of conserved regions in a multiple sequence alignment in the context of codons. CodonLogo is based on WebLogo3 and uses the same heuristics but treats codons as inseparable units of a 64-letter alphabet. CodonLogo can discriminate patterns of codon conservation from patterns of nucleotide conservation that appear indistinguishable in standard sequence logos. The CodonLogo source code and its implementation (in a local version of the Galaxy Browser) are available at http://recode.ucc.ie/CodonLogo and through the Galaxy Tool Shed at http://toolshed.g2.bx.psu.edu/.
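
    The stack-height computation generalizes directly from a 4- or 20-letter alphabet to the 64 codons. A sketch of the per-column calculation follows; WebLogo's small-sample correction is omitted for brevity, so this is illustrative rather than CodonLogo's exact code.

      import math
      from collections import Counter

      def stack_height(column, alphabet_size=64):
          # Information content (bits) of one alignment column over a
          # 64-letter codon alphabet: R = log2(64) - H(column). Each codon's
          # height is its frequency times the column information.
          counts = Counter(column)
          total = sum(counts.values())
          entropy = -sum((c / total) * math.log2(c / total)
                         for c in counts.values())
          info = math.log2(alphabet_size) - entropy
          return {codon: (c / total) * info for codon, c in counts.items()}

      print(stack_height(["ATG", "ATG", "ATG", "ATA"]))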

  6. New computational methods reveal tRNA identity element divergence between Proteobacteria and Cyanobacteria.

    PubMed

    Freyhult, Eva; Cui, Yuanyuan; Nilsson, Olle; Ardell, David H

    2007-10-01

    There are at least 21 subfunctional classes of tRNAs in most cells that, despite a very highly conserved and compact common structure, must interact specifically with different cliques of proteins or cause grave organismal consequences. Protein recognition of specific tRNA substrates is achieved in part through class-restricted tRNA features called tRNA identity determinants. In earlier work we used TFAM, a statistical classifier of tRNA function, to show evidence of unexpectedly large diversity among bacteria in tRNA identity determinants. We also created a data reduction technique called function logos to visualize identity determinants for a given taxon. Here we show evidence that determinants for lysylated isoleucine tRNAs are not the same in Proteobacteria as in other bacterial groups including the Cyanobacteria. Consistent with this, the lysylating biosynthetic enzyme TilS lacks a C-terminal domain in Cyanobacteria that is present in Proteobacteria. We present here, using function logos, a map estimating all potential identity determinants generally operational in Cyanobacteria and Proteobacteria. To further isolate the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria, we created two new data reduction visualizations to contrast sequence and function logos between two taxa. One, called Information Difference logos (ID logos), shows the evolutionary gain or retention of functional information associated with features in one lineage. The other, Kullback-Leibler divergence Difference logos (KLD logos), shows recruitments or shifts in the functional associations of features, especially those informative in both lineages. We used these new logos to specifically isolate and visualize the differences in potential tRNA identity determinants between Proteobacteria and Cyanobacteria. Our graphical results point to numerous differences in potential tRNA identity determinants between these groups. Although more differences in general are explained by shifts in functional association rather than gains or losses, the apparent identity differences in lysylated isoleucine tRNAs appear to have evolved through both mechanisms.
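
    The divergence underlying KLD logos can be illustrated with per-position feature distributions. The sketch below uses hypothetical base frequencies at a single tRNA position and a simplified, pseudocount-smoothed statistic, not the paper's exact formulation.

      import math

      def kld(p, q, eps=1e-6):
          # Kullback-Leibler divergence D(P||Q) in bits; the pseudocount
          # eps avoids log(0) for features absent from one lineage.
          keys = set(p) | set(q)
          return sum((p.get(k, 0) + eps)
                     * math.log2((p.get(k, 0) + eps) / (q.get(k, 0) + eps))
                     for k in keys)

      proteo = {"A": 0.70, "G": 0.20, "C": 0.05, "T": 0.05}  # hypothetical
      cyano  = {"A": 0.25, "G": 0.25, "C": 0.25, "T": 0.25}  # hypothetical
      print(round(kld(proteo, cyano), 3))   # bits of divergence at this site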

  7. From Voxels to Knowledge: A Practical Guide to the Segmentation of Complex Electron Microscopy 3D-Data

    PubMed Central

    Tsai, Wen-Ting; Hassan, Ahmed; Sarkar, Purbasha; Correa, Joaquin; Metlagel, Zoltan; Jorgens, Danielle M.; Auer, Manfred

    2014-01-01

    Modern 3D electron microscopy approaches have recently allowed unprecedented insight into the 3D ultrastructural organization of cells and tissues, enabling the visualization of large macromolecular machines, such as adhesion complexes, as well as higher-order structures, such as the cytoskeleton and cellular organelles in their respective cell and tissue context. Given the inherent complexity of cellular volumes, it is essential to first extract the features of interest in order to allow visualization, quantification, and therefore comprehension of their 3D organization. Each data set is defined by distinct characteristics, e.g., signal-to-noise ratio, crispness (sharpness) of the data, heterogeneity of its features, crowdedness of features, presence or absence of characteristic shapes that allow for easy identification, and the percentage of the entire volume that a specific region of interest occupies. All these characteristics need to be considered when deciding on which approach to take for segmentation. The six different 3D ultrastructural data sets presented were obtained by three different imaging approaches: resin embedded stained electron tomography, focused ion beam- and serial block face- scanning electron microscopy (FIB-SEM, SBF-SEM) of mildly stained and heavily stained samples, respectively. For these data sets, four different segmentation approaches have been applied: (1) fully manual model building followed solely by visualization of the model, (2) manual tracing segmentation of the data followed by surface rendering, (3) semi-automated approaches followed by surface rendering, or (4) automated custom-designed segmentation algorithms followed by surface rendering and quantitative analysis. Depending on the combination of data set characteristics, it was found that typically one of these four categorical approaches outperforms the others, but depending on the exact sequence of criteria, more than one approach may be successful. Based on these data, we propose a triage scheme that categorizes both objective data set characteristics and subjective personal criteria for the analysis of the different data sets. PMID:25145678

  8. Associative learning in baboons (Papio papio) and humans (Homo sapiens): species differences in learned attention to visual features.

    PubMed

    Fagot, J; Kruschke, J K; Dépy, D; Vauclair, J

    1998-10-01

    We examined attention shifting in baboons and humans during the learning of visual categories. Within a conditional matching-to-sample task, participants of the two species sequentially learned two two-feature categories which shared a common feature. Results showed that humans encoded both features of the initially learned category, but predominantly only the distinctive feature of the subsequently learned category. Although baboons initially encoded both features of the first category, they ultimately retained only the distinctive features of each category. Empirical data from the two species were analyzed with Kruschke's (1996) ADIT connectionist model. ADIT fits the baboon data when the attentional shift rate is zero, and the human data when the attentional shift rate is not zero. These empirical and modeling results suggest species differences in learned attention to visual features.

  9. Learning to learn: From within-modality to cross-modality transfer during infancy.

    PubMed

    Hupp, Julie M; Sloutsky, Vladimir M

    2011-11-01

    One critical aspect of learning is the ability to apply learned knowledge to new situations. This ability to transfer is often limited, and its development is not well understood. The current research investigated the development of transfer between 8 and 16 months of age. In Experiment 1, 8- and 16-month-olds (who were established to have a preference to the beginning of a visual sequence) were trained to attend to the end of a sequence. They were then tested on novel visual sequences. Results indicated transfer of learning, with both groups changing baseline preferences as a result of training. In Experiment 2, participants were trained to attend to the end of a visual sequence and were then tested on an auditory sequence. Unlike Experiment 1, only older participants exhibited transfer of learning by changing baseline preferences. These findings suggest that the generalization of learning becomes broader with development, with transfer across modalities developing later than transfer within a modality. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Learning to Learn: From Within-Modality to Cross-Modality Transfer in Infancy

    PubMed Central

    Hupp, Julie M.; Sloutsky, Vladimir M.

    2011-01-01

    One critical aspect of learning is the ability to apply learned knowledge to new situations. This ability to transfer is often limited, and its development is not well understood. The current research investigated the development of transfer between 8 and 16 months of age. In Experiment 1, 8- and 16-month-olds (who were established to have a preference to the beginning of a visual sequence) were trained to attend to the end of a sequence. They were then tested on novel visual sequences. Results indicated transfer of learning, as both groups changed baseline preferences as a result of training. In Experiment 2, participants were trained to attend to the end of a visual sequence and then tested on an auditory sequence. Unlike Experiment 1, only older participants exhibited transfer of learning by changing baseline preferences. These findings suggest that the generalization of learning becomes broader with development, with transfer across modalities developing later than transfer within a modality. PMID:21663920

  11. Implications of differences of echoic and iconic memory for the design of multimodal displays

    NASA Astrophysics Data System (ADS)

    Glaser, Daniel Shields

    It has been well documented that dual-task performance is more accurate when each task is based on a different sensory modality. It is also well documented that the memory for each sense has an unequal duration, particularly visual (iconic) and auditory (echoic) sensory memory. In this dissertation I address whether differences in sensory memory duration (e.g., iconic vs. echoic) have implications for the design of a multimodal display. Since echoic memory persists for seconds, in contrast to iconic memory which persists only for milliseconds, one of my hypotheses was that in a visual-auditory dual-task condition, performance would be better if the visual task were completed before the auditory task than vice versa. In Experiment 1 I investigated whether the ability to recall multimodal stimuli is affected by recall order, with each mode being responded to separately. In Experiment 2, I investigated the effects of stimulus order and recall order on the ability to recall information from a multimodal presentation. In Experiment 3 I investigated the effect of presentation order using a more realistic task. In Experiment 4 I investigated whether manipulating the presentation order of stimuli of different modalities improves humans' ability to combine the information from the two modalities in order to make decisions based on pre-learned rules. As hypothesized, accuracy was greater when visual stimuli were responded to first and auditory stimuli second. Also as hypothesized, performance was improved by not presenting both sequences at the same time, limiting the perceptual load. Contrary to my expectations, overall performance was better when a visual sequence was presented before the audio sequence. Though presenting a visual sequence prior to an auditory sequence lengthens the visual retention interval, it also provides time for visual information to be recoded to a more robust form without disruption. Experiment 4 demonstrated that decision making requiring the integration of visual and auditory information is enhanced by reducing workload and promoting a strategic use of echoic memory. A framework for predicting the results of Experiments 1-4 is proposed and evaluated.

  12. Asymmetries in visual search for conjunctive targets.

    PubMed

    Cohen, A

    1993-08-01

    Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for two levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.

  13. Dynamical Analysis and Visualization of Tornadoes Time Series

    PubMed Central

    2015-01-01

    In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281

  14. Dynamical analysis and visualization of tornadoes time series.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
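
    The spectral analysis described in these two records can be reproduced in outline: build an impulse train, take the Fourier amplitude spectrum, and fit a power law by linear regression in log-log space. The event data below are synthetic stand-ins, not the tornado record.

      import numpy as np

      rng = np.random.default_rng(1)
      n_days = 365 * 64
      signal = np.zeros(n_days)
      events = rng.choice(n_days, size=2000, replace=False)
      signal[events] = 1.0 + rng.pareto(2.0, size=2000)  # impulse ~ event size

      spectrum = np.abs(np.fft.rfft(signal))      # amplitude spectrum
      freqs = np.fft.rfftfreq(n_days, d=1.0)      # cycles per day

      mask = freqs > 0                            # drop the DC component
      m, _ = np.polyfit(np.log10(freqs[mask]),
                        np.log10(spectrum[mask]), 1)
      print(f"fitted power-law exponent: {m:.2f}")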

  15. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.

  16. Preventing distraction: Assessing stimulus-specific and general effects of the predictive cueing of deviant auditory events

    PubMed Central

    Horváth, János; Sussman, Elyse; Winkler, István; Schröger, Erich

    2011-01-01

    Rare irregular sounds (deviants) embedded into a regular sound sequence have large potential to draw attention to themselves (distraction). It has been previously shown that distraction, as manifested by behavioral response delay, and the P3a and reorienting negativity (RON) event-related potentials, could be reduced when the forthcoming deviant was signaled by visual cues preceding the sounds. In the present study, we investigated the type of information used in the prevention of distraction by manipulating the information content of the visual cues preceding the sounds. Cues could signal the specific variant of the forthcoming deviant, or they could just signal that the next tone was a deviant. We found that stimulus-specific cue information was used in reducing distraction. The results also suggest that early P3a and RON index processes related to the specific deviating stimulus feature, whereas late P3a reflects a general distraction-related process. PMID:21310210

  17. The Ocean Gene Atlas: exploring the biogeography of plankton genes online.

    PubMed

    Villar, Emilie; Vannier, Thomas; Vernette, Caroline; Lescot, Magali; Cuenca, Miguelangel; Alexandre, Aurélien; Bachelerie, Paul; Rosnet, Thomas; Pelletier, Eric; Sunagawa, Shinichi; Hingamp, Pascal

    2018-05-21

    The Ocean Gene Atlas is a web service to explore the biogeography of genes from marine planktonic organisms. It allows users to query protein or nucleotide sequences against global ocean reference gene catalogs. With just one click, the abundance and location of target sequences are visualized on world maps as well as their taxonomic distribution. Interactive results panels allow for adjusting cutoffs for alignment quality and displaying the abundances of genes in the context of environmental features (temperature, nutrients, etc.) measured at the time of sampling. The ease of use enables non-bioinformaticians to explore quantitative and contextualized information on genes of interest in the global ocean ecosystem. Currently the Ocean Gene Atlas is deployed with (i) the Ocean Microbial Reference Gene Catalog (OM-RGC) comprising 40 million non-redundant mostly prokaryotic gene sequences associated with both Tara Oceans and Global Ocean Sampling (GOS) gene abundances and (ii) the Marine Atlas of Tara Ocean Unigenes (MATOU) composed of >116 million eukaryote unigenes. Additional datasets will be added upon availability of further marine environmental datasets that provide the required complement of sequence assemblies, raw reads and contextual environmental parameters. Ocean Gene Atlas is a freely-available web service at: http://tara-oceans.mio.osupytheas.fr/ocean-gene-atlas/.

  18. ConSurf 2016: an improved methodology to estimate and visualize evolutionary conservation in macromolecules

    PubMed Central

    Ashkenazy, Haim; Abadi, Shiran; Martz, Eric; Chay, Ofer; Mayrose, Itay; Pupko, Tal; Ben-Tal, Nir

    2016-01-01

    The degree of evolutionary conservation of an amino acid in a protein or a nucleic acid in DNA/RNA reflects a balance between its natural tendency to mutate and the overall need to retain the structural integrity and function of the macromolecule. The ConSurf web server (http://consurf.tau.ac.il), established over 15 years ago, analyses the evolutionary pattern of the amino/nucleic acids of the macromolecule to reveal regions that are important for structure and/or function. Starting from a query sequence or structure, the server automatically collects homologues, infers their multiple sequence alignment and reconstructs a phylogenetic tree that reflects their evolutionary relations. These data are then used, within a probabilistic framework, to estimate the evolutionary rates of each sequence position. Here we introduce several new features into ConSurf, including automatic selection of the best evolutionary model used to infer the rates, the ability to homology-model query proteins, prediction of the secondary structure of query RNA molecules from sequence, the ability to view the biological assembly of a query (in addition to the single chain), mapping of the conservation grades onto 2D RNA models and an advanced view of the phylogenetic tree that enables interactively rerunning ConSurf with the taxa of a sub-tree. PMID:27166375

  19. LCR-eXXXplorer: a web platform to search, visualize and share data for low complexity regions in protein sequences.

    PubMed

    Kirmitzoglou, Ioannis; Promponas, Vasilis J

    2015-07-01

    Local compositionally biased and low complexity regions (LCRs) in amino acid sequences have initially attracted the interest of researchers due to their implication in generating artifacts in sequence database searches. There is accumulating evidence of the biological significance of LCRs both in physiological and in pathological situations. Nonetheless, LCR-related algorithms and tools have not gained wide appreciation across the research community, partly due to the fact that only a handful of user-friendly software is currently freely available. We developed LCR-eXXXplorer, an extensible online platform attempting to fill this gap. LCR-eXXXplorer offers tools for displaying LCRs from the UniProt/SwissProt knowledgebase, in combination with other relevant protein features, predicted or experimentally verified. Moreover, users may perform powerful queries against a custom designed sequence/LCR-centric database. We anticipate that LCR-eXXXplorer will be a useful starting point in research efforts for the elucidation of the structure, function and evolution of proteins with LCRs. LCR-eXXXplorer is freely available at the URL http://repeat.biol.ucy.ac.cy/lcr-exxxplorer. vprobon@ucy.ac.cy Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  20. Red to Green or Fast to Slow? Infants' Visual Working Memory for "Just Salient Differences"

    ERIC Educational Resources Information Center

    Kaldy, Zsuzsa; Blaser, Erik

    2013-01-01

    In this study, 6-month-old infants' visual working memory for a static feature (color) and a dynamic feature (rotational motion) was compared. Comparing infants' use of different features can only be done properly if experimental manipulations to those features are equally salient (Kaldy & Blaser, 2009; Kaldy, Blaser, & Leslie,…

  1. Implicit integration in a case of integrative visual agnosia.

    PubMed

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  2. Learning of Grammar-Like Visual Sequences by Adults with and without Language-Learning Disabilities

    ERIC Educational Resources Information Center

    Aguilar, Jessica M.; Plante, Elena

    2014-01-01

    Purpose: Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. Method: In Study 1,…

  3. ActiviTree: interactive visual exploration of sequences in event-based data using graph similarity.

    PubMed

    Vrotsou, Katerina; Johansson, Jimmy; Cooper, Matthew

    2009-01-01

    The identification of significant sequences in large and complex event-based temporal data is a challenging problem with applications in many areas of today's information intensive society. Pure visual representations can be used for the analysis, but are constrained to small data sets. Algorithmic search mechanisms used for larger data sets become expensive as the data size increases and typically focus on frequency of occurrence to reduce the computational complexity, often overlooking important infrequent sequences and outliers. In this paper we introduce an interactive visual data mining approach based on an adaptation of techniques developed for web searching, combined with an intuitive visual interface, to facilitate user-centred exploration of the data and identification of sequences significant to that user. The search algorithm used in the exploration executes in negligible time, even for large data, and so no pre-processing of the selected data is required, making this a completely interactive experience for the user. Our particular application area is social science diary data but the technique is applicable across many other disciplines.

  4. Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish

    PubMed Central

    Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian

    2011-01-01

    Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793

  5. Recapitulating phylogenies using k-mers: from trees to networks.

    PubMed

    Bernard, Guillaume; Ragan, Mark A; Chan, Cheong Xin

    2016-01-01

    Ernst Haeckel based his landmark Tree of Life on the supposed ontogenic recapitulation of phylogeny, i.e. that successive embryonic stages during the development of an organism re-trace the morphological forms of its ancestors over the course of evolution. Much of this idea has since been discredited. Today, phylogenies are often based on families of molecular sequences. The standard approach starts with a multiple sequence alignment, in which the sequences are arranged relative to each other in a way that maximises a measure of similarity position-by-position along their entire length. A tree (or sometimes a network) is then inferred. Rigorous multiple sequence alignment is computationally demanding, and evolutionary processes that shape the genomes of many microbes (bacteria, archaea and some morphologically simple eukaryotes) can add further complications. In particular, recombination, genome rearrangement and lateral genetic transfer undermine the assumptions that underlie multiple sequence alignment, and imply that a tree-like structure may be too simplistic. Here, using genome sequences of 143 bacterial and archaeal genomes, we construct a network of phylogenetic relatedness based on the number of shared k-mers (subsequences of fixed length k). Our findings suggest that the network captures not only key aspects of microbial genome evolution as inferred from a tree, but also features that are not tree-like. The method is highly scalable, allowing for investigation of genome evolution across a large number of genomes. Instead of using specific regions or sequences from genome sequences, or indeed Haeckel's idea of ontogeny, we argue that genome phylogenies can be inferred using k-mers from whole-genome sequences. Representing these networks dynamically allows biological questions of interest to be formulated and addressed quickly and in a visually intuitive manner.
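
    A sketch of the alignment-free construction follows, assuming toy genome strings and using Jaccard similarity over k-mer sets as a stand-in for the paper's shared-k-mer statistic; k and the edge threshold are illustrative.

      import networkx as nx

      def kmer_set(seq, k=8):
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def shared_kmer_network(genomes, k=8, threshold=0.3):
          # Connect genome pairs whose k-mer sets overlap strongly.
          sets = {name: kmer_set(seq, k) for name, seq in genomes.items()}
          g = nx.Graph()
          g.add_nodes_from(genomes)
          names = list(genomes)
          for i, a in enumerate(names):
              for b in names[i + 1:]:
                  sim = len(sets[a] & sets[b]) / len(sets[a] | sets[b])
                  if sim >= threshold:
                      g.add_edge(a, b, weight=sim)
          return g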

  6. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    PubMed

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require working knowledge of a command line interface, massive computational resources and expertise, which is a daunting requirement for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline, called Integrated SNP Mining and Utilization (ISMU), has been developed by integrating several open source next generation sequencing (NGS) tools with a graphical user interface for SNP discovery and utilization in developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction (SAMtools/SOAPsnp/CNS2snp and CbCC) methods and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at high speed. The pipeline is very useful for the plant genetics and breeding community, enabling researchers with no computational expertise to discover SNPs and utilize them in genomics, genetics and breeding studies. The pipeline has been parallelized to process huge next generation sequencing datasets. It has been developed in Java and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software.

  7. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. The exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Building on it, we improve the ELDA tracking algorithm with deep convolutional neural network (CNN) features and adaptive model updating. Deep CNN features have been used successfully in various computer vision tasks, but extracting CNN features on all of the candidate windows is time-consuming. To address this problem, a two-step CNN feature extraction method is proposed that computes the convolutional layers and the fully-connected layers separately. Given the strong discriminative ability of CNN features and the exemplar-based model, we update both the object and background models to improve their adaptivity and to manage the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are highly discriminative and uncorrelated with the other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes; it is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
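
    The two-step extraction can be sketched directly: run the convolutional stage once per frame, then evaluate only the fully-connected stage on each candidate window's crop of the shared feature map. The PyTorch sketch below uses AlexNet and simple map cropping as stand-ins; the paper's exact network and pooling may differ.

    ```python
    import torch
    import torchvision

    # Split a stand-in CNN into a convolutional stage (run once per frame)
    # and a fully-connected stage (run once per candidate window).
    net = torchvision.models.alexnet(weights=None).eval()
    conv_stage = net.features
    fc_stage = torch.nn.Sequential(net.avgpool, torch.nn.Flatten(), net.classifier)

    def window_features(frame, boxes, scale):
        """frame: 1x3xHxW tensor; boxes: Nx4 float (x1, y1, x2, y2) in frame
        pixels; scale: feature-map size divided by frame size."""
        with torch.no_grad():
            fmap = conv_stage(frame)               # step 1: one shared conv pass
            outputs = []
            for box in (boxes * scale).round().long():
                x1, y1, x2, y2 = box.tolist()
                crop = fmap[:, :, y1:max(y2, y1 + 1), x1:max(x2, x1 + 1)]
                outputs.append(fc_stage(crop))     # step 2: FC layers per window
            return torch.cat(outputs)

    frame = torch.randn(1, 3, 224, 224)
    boxes = torch.tensor([[10.0, 10.0, 120.0, 120.0], [50.0, 40.0, 200.0, 180.0]])
    print(window_features(frame, boxes, scale=6 / 224).shape)  # (2, 1000)
    ```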

  8. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. The exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Building on it, we improve the ELDA tracking algorithm with deep convolutional neural network (CNN) features and adaptive model updating. Deep CNN features have been used successfully in various computer vision tasks, but extracting CNN features on all of the candidate windows is time-consuming. To address this problem, a two-step CNN feature extraction method is proposed that computes the convolutional layers and the fully-connected layers separately. Given the strong discriminative ability of CNN features and the exemplar-based model, we update both the object and background models to improve their adaptivity and to manage the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are highly discriminative and uncorrelated with the other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes; it is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  9. Topic Transition in Educational Videos Using Visually Salient Words

    ERIC Educational Resources Information Center

    Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om

    2015-01-01

    In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…

  10. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been applied successfully for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method on five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes severely degrade the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method benefits most in high-dimensional feature spaces and when the number of image categories increases.
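
    To make the hard/soft contrast concrete, the NumPy sketch below builds both kinds of histogram. The Gaussian kernel codebook shown is one common soft-assignment variant, not necessarily the paper's exact formulation, and sigma and the data are illustrative.

    ```python
    import numpy as np

    def hard_histogram(features, codebook):
        """Traditional codebook: each feature votes for its single nearest word."""
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        hist = np.bincount(d.argmin(axis=1), minlength=len(codebook))
        return hist / hist.sum()

    def soft_histogram(features, codebook, sigma=1.0):
        """Kernel codebook: each feature spreads a Gaussian-weighted vote
        over all words, modelling visual word ambiguity."""
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        w = np.exp(-d ** 2 / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)   # per-feature normalisation
        hist = w.sum(axis=0)
        return hist / hist.sum()

    rng = np.random.default_rng(0)
    feats, words = rng.normal(size=(100, 8)), rng.normal(size=(16, 8))
    print(hard_histogram(feats, words).round(3))
    print(soft_histogram(feats, words, sigma=2.0).round(3))
    ```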

  11. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to better automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the compared methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances digital image analysis and the applications of high resolution remote sensing images.

  12. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to better automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the compared methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  13. Threat as a feature in visual semantic object memory.

    PubMed

    Calley, Clifford S; Motes, Michael A; Chiang, H-Sheng; Buhl, Virginia; Spence, Jeffrey S; Abdi, Hervé; Anand, Raksha; Maguire, Mandy; Estevez, Leonardo; Briggs, Richard; Freeman, Thomas; Kraut, Michael A; Hart, John

    2013-08-01

    Threatening stimuli have been found to modulate visual processes related to perception and attention. The present functional magnetic resonance imaging (fMRI) study investigated whether threat modulates visual object recognition of man-made and naturally occurring categories of stimuli. Compared with nonthreatening pictures, threatening pictures of real items elicited larger fMRI BOLD signal changes in medial visual cortices extending inferiorly into the temporo-occipital (TO) "what" pathways. This region showed greater signal changes for threatening than for nonthreatening items from both the naturally occurring and man-made superordinate stimulus categories, demonstrating a featural component to these visual processing areas. Two additional loci of signal changes within more lateral inferior TO areas (bilateral BA18 and 19 as well as the right ventral temporal lobe) were detected for a category-feature interaction, with stronger responses to man-made (category) threatening (feature) stimuli than to natural threats. The findings are discussed in terms of visual recognition processes that handle, efficiently or rapidly, groups of items whose recognition confers an advantage for survival. Copyright © 2012 Wiley Periodicals, Inc.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berres, Anne Sabine

    This slide presentation describes basic topological concepts, including topological spaces, homeomorphisms, homotopy, and Betti numbers. Scalar field topology covers finding topological features in scalar fields and scalar field visualization; vector field topology covers the corresponding feature finding and visualization for vector fields.

  15. Multicolor microRNA FISH effectively differentiates tumor types

    PubMed Central

    Renwick, Neil; Cekan, Pavol; Masry, Paul A.; McGeary, Sean E.; Miller, Jason B.; Hafner, Markus; Li, Zhen; Mihailovic, Aleksandra; Morozov, Pavel; Brown, Miguel; Gogakos, Tasos; Mobin, Mehrpouya B.; Snorrason, Einar L.; Feilotter, Harriet E.; Zhang, Xiao; Perlis, Clifford S.; Wu, Hong; Suárez-Fariñas, Mayte; Feng, Huichen; Shuda, Masahiro; Moore, Patrick S.; Tron, Victor A.; Chang, Yuan; Tuschl, Thomas

    2013-01-01

    MicroRNAs (miRNAs) are excellent tumor biomarkers because of their cell-type specificity and abundance. However, many miRNA detection methods, such as real-time PCR, obliterate valuable visuospatial information in tissue samples. To enable miRNA visualization in formalin-fixed paraffin-embedded (FFPE) tissues, we developed multicolor miRNA FISH. As a proof of concept, we used this method to differentiate two skin tumors, basal cell carcinoma (BCC) and Merkel cell carcinoma (MCC), with overlapping histologic features but distinct cellular origins. Using sequencing-based miRNA profiling and discriminant analysis, we identified the tumor-specific miRNAs miR-205 and miR-375 in BCC and MCC, respectively. We addressed three major shortcomings in miRNA FISH, identifying optimal conditions for miRNA fixation and ribosomal RNA (rRNA) retention using model compounds and high-pressure liquid chromatography (HPLC) analyses, enhancing signal amplification and detection by increasing probe-hapten linker lengths, and improving probe specificity using shortened probes with minimal rRNA sequence complementarity. We validated our method on 4 BCC and 12 MCC tumors. Amplified miR-205 and miR-375 signals were normalized against directly detectable reference rRNA signals. Tumors were classified using predefined cutoff values, and all were correctly identified in blinded analysis. Our study establishes a reliable miRNA FISH technique for parallel visualization of differentially expressed miRNAs in FFPE tumor tissues. PMID:23728175

  16. QuadBase2: web server for multiplexed guanine quadruplex mining and visualization

    PubMed Central

    Dhapola, Parashar; Chowdhury, Shantanu

    2016-01-01

    DNA guanine quadruplexes or G4s are non-canonical DNA secondary structures which affect genomic processes like replication, transcription and recombination. G4s are computationally identified by specific nucleotide motifs, also called putative G4 (PG4) motifs. Despite the general relevance of these structures, there was previously no tool that allowed batch queries and genome-wide analysis of these motifs in a user-friendly interface. QuadBase2 (quadbase.igib.res.in) presents a completely reinvented web server version of the previously published QuadBase database. QuadBase2 enables users to mine PG4 motifs in up to 178 eukaryotes through the EuQuad module. This module interfaces with the Ensembl Compara database to allow users to mine PG4 motifs in the orthologues of genes of interest across eukaryotes. PG4 motifs can be mined across genes and their promoter sequences in 1719 prokaryotes through the ProQuad module. This module includes a feature that allows genome-wide mining of PG4 motifs and their visualization as circular histograms. TetraplexFinder, the module for mining PG4 motifs in user-provided sequences, is now capable of handling up to 20 MB of data. QuadBase2 is a comprehensive PG4 motif mining tool that further expands the configurations and algorithms for mining PG4 motifs in a user-friendly way. PMID:27185890
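
    The canonical PG4 motif mined by tools of this kind is four tracts of three or more guanines separated by loops of one to seven nucleotides. QuadBase2 exposes such parameters as configurable settings, so the regular expression below is only an illustration of the default pattern, not the server's exact implementation.

    ```python
    import re

    # Canonical putative G-quadruplex (PG4) motif: four tracts of >=3 G's
    # separated by loops of 1-7 nucleotides. Parameters are illustrative.
    # The reverse strand can be covered by scanning the complement for C-tracts.
    PG4 = re.compile(r"G{3,}(?:[ACGT]{1,7}G{3,}){3}", re.IGNORECASE)

    def find_pg4(seq):
        """Return (start, end, motif) for each non-overlapping PG4 hit."""
        return [(m.start(), m.end(), m.group()) for m in PG4.finditer(seq)]

    seq = "TTGGGAGGGTAGGGAGGGTT"  # a classic G4-forming sequence
    print(find_pg4(seq))          # [(2, 18, 'GGGAGGGTAGGGAGGG')]
    ```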

  17. Exploration of complex visual feature spaces for object perception

    PubMed Central

    Leeds, Daniel D.; Pyles, John A.; Tarr, Michael J.

    2014-01-01

    The mid- and high-level visual properties supporting object perception in the ventral visual pathway are poorly understood. In the absence of well-specified theory, many groups have adopted a data-driven approach in which they progressively interrogate neural units to establish each unit's selectivity. Such methods are challenging in that they require search through a wide space of feature models and stimuli using a limited number of samples. To more rapidly identify higher-level features underlying human cortical object perception, we implemented a novel functional magnetic resonance imaging method in which visual stimuli are selected in real-time based on BOLD responses to recently shown stimuli. This work was inspired by earlier primate physiology work, in which neural selectivity for mid-level features in IT was characterized using a simple parametric approach (Hung et al., 2012). To extend such work to human neuroimaging, we used natural and synthetic object stimuli embedded in feature spaces constructed on the basis of the complex visual properties of the objects themselves. During fMRI scanning, we employed a real-time search method to control continuous stimulus selection within each image space. This search was designed to maximize neural responses across a pre-determined 1 cm3 brain region within ventral cortex. To assess the value of this method for understanding object encoding, we examined both the behavior of the method itself and the complex visual properties the method identified as reliably activating selected brain regions. We observed: (1) Regions selective for both holistic and component object features and for a variety of surface properties; (2) Object stimulus pairs near one another in feature space that produce responses at the opposite extremes of the measured activity range. Together, these results suggest that real-time fMRI methods may yield more widely informative measures of selectivity within the broad classes of visual features associated with cortical object representation. PMID:25309408

  18. MethVisual - visualization and exploratory statistical analysis of DNA methylation profiles from bisulfite sequencing.

    PubMed

    Zackay, Arie; Steinhoff, Christine

    2010-12-15

    Exploration of DNA methylation and its impact on various regulatory mechanisms has become a very active field of research. Simultaneously, there is an arising need for tools to process and analyse the data together with statistical investigation and visualisation. MethVisual is a new application that enables exploratory analysis and intuitive visualization of DNA methylation data of the kind typically generated by bisulfite sequencing. The package allows the import of DNA methylation sequences, aligns them and performs quality control comparison. It comprises basic analysis steps such as lollipop visualization, co-occurrence display of methylation of neighbouring and distant CpG sites, summary statistics on methylation status, clustering and correspondence analysis. The package has been developed for methylation data but can also be used for other data types for which binary coding can be inferred. The application of the package, as well as a comparison to existing DNA methylation analysis tools and its workflow based on two datasets, is presented in this paper. The R package MethVisual offers various analysis procedures for data that can be binarized, in particular for bisulfite sequenced methylation data. R/Bioconductor has become one of the most important environments for statistical analysis of various types of biological and medical data. Therefore, any data analysis within R that allows the integration of various data types as provided from different technological platforms is convenient. MethVisual is the first and, so far, the only package in the R/Bioconductor environment specifically for DNA methylation analysis, in particular for bisulfite sequenced data. The package is available for free at http://methvisual.molgen.mpg.de/ and from the Bioconductor Consortium http://www.bioconductor.org.
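
    MethVisual itself is an R package; as a language-neutral sketch of what a lollipop display encodes (one row per sequenced clone, a filled circle for each methylated CpG, an open circle for each unmethylated one), here is a small matplotlib example with made-up calls and coordinates.

    ```python
    import matplotlib.pyplot as plt

    # Binary methylation calls: one row per sequenced clone, one column per CpG.
    # 1 = methylated (filled circle), 0 = unmethylated (open circle).
    calls = [[1, 1, 0, 1, 0],
             [1, 0, 0, 1, 0],
             [0, 1, 1, 1, 1]]
    cpg_positions = [12, 45, 78, 90, 130]  # illustrative coordinates (bp)

    fig, ax = plt.subplots()
    for row, clone in enumerate(calls):
        ax.plot(cpg_positions, [row] * len(cpg_positions), color="grey", zorder=1)
        for x, m in zip(cpg_positions, clone):
            ax.scatter(x, row, facecolors="black" if m else "white",
                       edgecolors="black", s=80, zorder=2)
    ax.set_yticks(range(len(calls)))
    ax.set_yticklabels([f"clone {i + 1}" for i in range(len(calls))])
    ax.set_xlabel("position (bp)")
    plt.show()
    ```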

  19. MethVisual - visualization and exploratory statistical analysis of DNA methylation profiles from bisulfite sequencing

    PubMed Central

    2010-01-01

    Background Exploration of DNA methylation and its impact on various regulatory mechanisms has become a very active field of research. Simultaneously there is an arising need for tools to process and analyse the data together with statistical investigation and visualisation. Findings MethVisual is a new application that enables exploratory analysis and intuitive visualization of DNA methylation data as is typically generated by bisulfite sequencing. The package allows the import of DNA methylation sequences, aligns them and performs quality control comparison. It comprises basic analysis steps as lollipop visualization, co-occurrence display of methylation of neighbouring and distant CpG sites, summary statistics on methylation status, clustering and correspondence analysis. The package has been developed for methylation data but can be also used for other data types for which binary coding can be inferred. The application of the package, as well as a comparison to existing DNA methylation analysis tools and its workflow based on two datasets is presented in this paper. Conclusions The R package MethVisual offers various analysis procedures for data that can be binarized, in particular for bisulfite sequenced methylation data. R/Bioconductor has become one of the most important environments for statistical analysis of various types of biological and medical data. Therefore, any data analysis within R that allows the integration of various data types as provided from different technological platforms is convenient. It is the first and so far the only specific package for DNA methylation analysis, in particular for bisulfite sequenced data available in R/Bioconductor enviroment. The package is available for free at http://methvisual.molgen.mpg.de/ and from the Bioconductor Consortium http://www.bioconductor.org. PMID:21159174

  20. Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829
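
    The race-model benchmark referred to above is easy to simulate: the redundant-target RT predicted by a race is the minimum of the two single-feature RTs on each trial. The sketch below uses made-up gamma RT distributions purely for illustration; the paper's measured distributions and fitting procedure are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Made-up single-feature RT distributions (ms); shapes are illustrative.
    rt_c = rng.gamma(shape=20, scale=25, size=n)   # colour singleton target
    rt_o = rng.gamma(shape=22, scale=25, size=n)   # orientation singleton target

    # Race model: the redundant CO target is found when the faster of the
    # two single-feature processes finishes.
    rt_race = np.minimum(rt_c, rt_o)

    print(f"mean RT, C target:  {rt_c.mean():.0f} ms")
    print(f"mean RT, O target:  {rt_o.mean():.0f} ms")
    print(f"race prediction CO: {rt_race.mean():.0f} ms")
    # An observed mean CO RT reliably below the race prediction is the
    # signature the abstract attributes to CO-tuned conjunctive cells.
    ```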

  1. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    PubMed

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.

  2. Specific excitatory connectivity for feature integration in mouse primary visual cortex

    PubMed Central

    Molina-Luna, Patricia; Roth, Morgane M.

    2017-01-01

    Local excitatory connections in mouse primary visual cortex (V1) are stronger and more prevalent between neurons that share similar functional response features. However, the details of how functional rules for local connectivity shape neuronal responses in V1 remain unknown. We hypothesised that complex responses to visual stimuli may arise as a consequence of rules for selective excitatory connectivity within the local network in the superficial layers of mouse V1. In mouse V1 many neurons respond to overlapping grating stimuli (plaid stimuli) with highly selective and facilitatory responses, which are not simply predicted by responses to single gratings presented alone. This complexity is surprising, since excitatory neurons in V1 are considered to be mainly tuned to single preferred orientations. Here we examined the consequences for visual processing of two alternative connectivity schemes: in the first case, local connections are aligned with visual properties inherited from feedforward input (a ‘like-to-like’ scheme specifically connecting neurons that share similar preferred orientations); in the second case, local connections group neurons into excitatory subnetworks that combine and amplify multiple feedforward visual properties (a ‘feature binding’ scheme). By comparing predictions from large scale computational models with in vivo recordings of visual representations in mouse V1, we found that responses to plaid stimuli were best explained by assuming feature binding connectivity. Unlike under the like-to-like scheme, selective amplification within feature-binding excitatory subnetworks replicated experimentally observed facilitatory responses to plaid stimuli; explained selective plaid responses not predicted by grating selectivity; and was consistent with broad anatomical selectivity observed in mouse V1. Our results show that visual feature binding can occur through local recurrent mechanisms without requiring feedforward convergence, and that such a mechanism is consistent with visual responses and cortical anatomy in mouse V1. PMID:29240769

  3. BiQ Analyzer HT: locus-specific analysis of DNA methylation by high-throughput bisulfite sequencing

    PubMed Central

    Lutsik, Pavlo; Feuerbach, Lars; Arand, Julia; Lengauer, Thomas; Walter, Jörn; Bock, Christoph

    2011-01-01

    Bisulfite sequencing is a widely used method for measuring DNA methylation in eukaryotic genomes. The assay provides single-base pair resolution and, given sufficient sequencing depth, its quantitative accuracy is excellent. High-throughput sequencing of bisulfite-converted DNA can be applied either genome wide or targeted to a defined set of genomic loci (e.g. using locus-specific PCR primers or DNA capture probes). Here, we describe BiQ Analyzer HT (http://biq-analyzer-ht.bioinf.mpi-inf.mpg.de/), a user-friendly software tool that supports locus-specific analysis and visualization of high-throughput bisulfite sequencing data. The software facilitates the shift from time-consuming clonal bisulfite sequencing to the more quantitative and cost-efficient use of high-throughput sequencing for studying locus-specific DNA methylation patterns. In addition, it is useful for locus-specific visualization of genome-wide bisulfite sequencing data. PMID:21565797
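
    The underlying bisulfite logic that such tools quantify can be stated in a few lines: unmethylated cytosines are converted to uracil (sequenced as T) while methylated cytosines remain C, so each reference CpG position in an aligned read yields a binary call. A hedged Python sketch for plus-strand reads, with hypothetical names:

    ```python
    def methylation_calls(reference, bisulfite_read):
        """Call CpG methylation from an aligned bisulfite read (plus strand).

        Bisulfite converts unmethylated C to U (sequenced as T); methylated
        C stays C. At each reference CpG: read 'C' => methylated (1),
        read 'T' => unmethylated (0), anything else => no call (None).
        """
        calls = {}
        for i in range(len(reference) - 1):
            if reference[i:i + 2] == "CG":
                base = bisulfite_read[i]
                calls[i] = 1 if base == "C" else 0 if base == "T" else None
        return calls

    ref = "ACGTTCGAACGT"
    read = "ACGTTTGAATGT"   # first CpG retained as C, next two converted to T
    print(methylation_calls(ref, read))  # {1: 1, 5: 0, 9: 0}
    ```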

  4. Modulation of neuronal responses during covert search for visual feature conjunctions

    PubMed Central

    Buracas, Giedrius T.; Albright, Thomas D.

    2009-01-01

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385

  5. Modulation of neuronal responses during covert search for visual feature conjunctions.

    PubMed

    Buracas, Giedrius T; Albright, Thomas D

    2009-09-29

    While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions.

  6. The magnificent outburst of the 2016 Perseids, the analyses

    NASA Astrophysics Data System (ADS)

    Miskotte, Koen; Vandeputte, Michel

    2017-03-01

    Enhanced Perseid activity had been predicted for 2016 as a result of a sequence of encounters with several dust trails, as well as perturbations by Jupiter that made the Earth cross the main stream deeper, through denser regions. Visual observations resulted in a detailed activity profile and population index profile; the observed features in these profiles could be matched with the predicted passages through the different dust trails. The 4 Rev (1479) dust trail in particular produced a distinct peak, while the 7 Rev (1079) dust trail remained at a somewhat disappointingly low level. The traditional annual Perseid maximum displayed enhanced activity due to the 12 Rev (441) dust trail.

  7. Mitosis-associated repression in development.

    PubMed

    Esposito, Emilia; Lim, Bomyi; Guessous, Ghita; Falahati, Hanieh; Levine, Michael

    2016-07-01

    Transcriptional repression is a pervasive feature of animal development. Here, we employ live-imaging methods to visualize the Snail repressor, which establishes the boundary between the presumptive mesoderm and neurogenic ectoderm of early Drosophila embryos. Snail target enhancers were attached to an MS2 reporter gene, permitting detection of nascent transcripts in living embryos. The transgenes exhibit initially broad patterns of transcription but are refined by repression in the mesoderm following mitosis. These observations reveal a correlation between mitotic silencing and Snail repression. We propose that mitosis and other inherent discontinuities in transcription boost the activities of sequence-specific repressors, such as Snail. © 2016 Esposito et al.; Published by Cold Spring Harbor Laboratory Press.

  8. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality of experience research field. Although subjective assessment by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then compute a set of saliency-weighted disparity statistics and combine them into a single feature vector that represents the stereoscopic image in terms of visual comfort. In the second stage, the feature vector is fused into a single visual comfort score using a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
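
    The first-stage feature extraction can be sketched directly: per-pixel disparities are weighted by a normalised saliency map and pooled into weighted statistics. The NumPy sketch below computes a deliberately small, illustrative feature vector; the paper combines a richer set of statistics before the random forest stage.

    ```python
    import numpy as np

    def comfort_features(disparity, saliency):
        """Saliency-weighted disparity statistics for one stereoscopic image.

        disparity: HxW map of pixel disparities; saliency: HxW non-negative map.
        Returns a small illustrative feature vector (the actual feature set
        in the paper is richer than these four values).
        """
        w = saliency / saliency.sum()
        mean = (w * disparity).sum()                    # weighted mean disparity
        var = (w * (disparity - mean) ** 2).sum()       # weighted variance
        p95, p5 = np.percentile(disparity.ravel(), [95, 5])
        return np.array([mean, np.sqrt(var), p95 - p5, np.abs(disparity).max()])

    rng = np.random.default_rng(0)
    disp = rng.normal(0.0, 1.5, size=(120, 160))
    sal = rng.random((120, 160))
    print(comfort_features(disp, sal).round(3))
    ```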

  9. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model the correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when the global color feature is integrated, our method yields performance competitive with the state of the art.
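
    Verification with binary signatures, as in Hamming embedding, can be sketched in a few lines: two keypoints quantized to the same visual word are accepted as a match only if the Hamming distance between their binary signatures falls below a threshold. The signature size and threshold below are illustrative, not the paper's settings.

    ```python
    import numpy as np

    def hamming(a, b):
        """Hamming distance between two bit-packed uint8 binary signatures."""
        return int(np.unpackbits(a ^ b).sum())

    def match(word1, sig1, word2, sig2, tau=24):
        """BoW match with binary verification: same visual word AND
        signatures within tau bits (64-bit signatures packed into 8 bytes)."""
        return word1 == word2 and hamming(sig1, sig2) <= tau

    rng = np.random.default_rng(0)
    s1 = rng.integers(0, 256, size=8, dtype=np.uint8)
    s2 = s1.copy()
    s2[0] ^= 0b00000111                  # flip three bits
    print(match(17, s1, 17, s2))         # True: close in Hamming space
    print(match(17, s1, 17, ~s2))        # False: far apart despite same word
    ```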

  10. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers' temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we propose a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets; compared with their original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were shown to offer advantages in long-term stability and in robustness over time and to spatial precision. Finally, the results of combining these methods with score-level fusion indicate that multi-biometric methods perform better in most cases.
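
    Of the two reported metrics, the EER is the threshold-free one: it is the operating point at which the false accept rate equals the false reject rate. A hedged NumPy sketch for computing it from genuine and impostor similarity scores (the score distributions below are made up):

    ```python
    import numpy as np

    def equal_error_rate(genuine, impostor):
        """EER from similarity scores: sweep a threshold and find where the
        false accept rate (impostors scoring above it) meets the false
        reject rate (genuine users scoring below it)."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        far = np.array([(impostor >= t).mean() for t in thresholds])
        frr = np.array([(genuine < t).mean() for t in thresholds])
        i = np.argmin(np.abs(far - frr))
        return (far[i] + frr[i]) / 2, thresholds[i]

    rng = np.random.default_rng(0)
    gen = rng.normal(0.7, 0.1, 500)    # same-user comparison scores
    imp = rng.normal(0.4, 0.1, 5000)   # different-user comparison scores
    eer, thr = equal_error_rate(gen, imp)
    print(f"EER = {eer:.3f} at threshold {thr:.3f}")
    ```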

  11. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction methods and feature recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and of eye trackers’ temporal and spatial resolution, remain the foremost considerations in eye movement biometrics. With a focus on these issues, we propose a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained by using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets; compared with their original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were shown to offer advantages in long-term stability and in robustness over time and to spatial precision. Finally, the results of combining these methods with score-level fusion indicate that multi-biometric methods perform better in most cases. PMID:29617383

  12. pV3-Gold Visualization Environment for Computer Simulations

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa L.

    1997-01-01

    A new visualization environment, pV3-Gold, can be used during and after a computer simulation to extract and visualize the physical features in the results. This environment, which is an extension of the pV3 visualization environment developed at the Massachusetts Institute of Technology with guidance and support by researchers at the NASA Lewis Research Center, features many tools that allow users to display data in various ways.

  13. Exploring neighborhoods in the metagenome universe.

    PubMed

    Aßhauer, Kathrin P; Klingenberg, Heiner; Lingner, Thomas; Meinicke, Peter

    2014-07-14

    The variety of metagenomes in current databases provides a rapidly growing source of information for comparative studies. However, the quantity and quality of supplementary metadata is still lagging behind. It is therefore important to be able to identify related metagenomes by means of the available sequence data alone. We have studied efficient sequence-based methods for large-scale identification of similar metagenomes within a database retrieval context. In a broad comparison of different profiling methods we found that vector-based distance measures are well suited to the detection of metagenomic neighbors. Our evaluation on more than 1700 publicly available metagenomes indicates that, for a query metagenome from a particular habitat, on average nine out of ten nearest neighbors represent the same habitat category, independent of the profiling method or distance measure used. While for well-defined labels a neighborhood accuracy of 100% can be achieved, in general neighbor detection is severely affected by a natural overlap of manually annotated categories. In addition, we present a novel visualization method that reflects the similarity of metagenomes in a 2D scatter plot. The visualization method shows a similarly high accuracy in the reduced space as compared with the high-dimensional profile space. Our study suggests that for inspection of metagenome neighborhoods the profiling methods and distance measures can be chosen to provide a convenient interpretation of results in terms of the underlying features. Furthermore, supplementary metadata of metagenome samples will in future need to comply with readily available ontologies for fine-grained and standardized annotation. To make profile-based k-nearest-neighbor search and the 2D visualization of the metagenome universe available to the research community, we have included the proposed methods in our CoMet-Universe server for comparative metagenome analysis.
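
    Neighbor retrieval over profile vectors reduces to a k-nearest-neighbour query under a vector distance. The sketch below uses cosine distance as one representative of the vector-based measures the study compares; the data, dimensions and names are made up.

    ```python
    import numpy as np

    def nearest_metagenomes(query, profiles, names, k=10):
        """k nearest neighbours of a query profile under cosine distance.

        profiles: n_samples x n_features matrix of (e.g. functional) profiles.
        """
        def unit(x):
            return x / np.linalg.norm(x, axis=-1, keepdims=True)

        dist = 1.0 - unit(profiles) @ unit(query)    # cosine distance to query
        order = np.argsort(dist)[:k]
        return [(names[i], float(dist[i])) for i in order]

    rng = np.random.default_rng(0)
    db = rng.random((1700, 300))             # 1700 metagenomes, 300 profile bins
    labels = [f"sample_{i}" for i in range(1700)]
    query = db[42] + 0.01 * rng.random(300)  # noisy copy of sample_42
    print(nearest_metagenomes(query, db, labels, k=3))
    ```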

  14. Exploring Neighborhoods in the Metagenome Universe

    PubMed Central

    Aßhauer, Kathrin P.; Klingenberg, Heiner; Lingner, Thomas; Meinicke, Peter

    2014-01-01

    The variety of metagenomes in current databases provides a rapidly growing source of information for comparative studies. However, the quantity and quality of supplementary metadata is still lagging behind. It is therefore important to be able to identify related metagenomes by means of the available sequence data alone. We have studied efficient sequence-based methods for large-scale identification of similar metagenomes within a database retrieval context. In a broad comparison of different profiling methods we found that vector-based distance measures are well-suitable for the detection of metagenomic neighbors. Our evaluation on more than 1700 publicly available metagenomes indicates that for a query metagenome from a particular habitat on average nine out of ten nearest neighbors represent the same habitat category independent of the utilized profiling method or distance measure. While for well-defined labels a neighborhood accuracy of 100% can be achieved, in general the neighbor detection is severely affected by a natural overlap of manually annotated categories. In addition, we present results of a novel visualization method that is able to reflect the similarity of metagenomes in a 2D scatter plot. The visualization method shows a similarly high accuracy in the reduced space as compared with the high-dimensional profile space. Our study suggests that for inspection of metagenome neighborhoods the profiling methods and distance measures can be chosen to provide a convenient interpretation of results in terms of the underlying features. Furthermore, supplementary metadata of metagenome samples in the future needs to comply with readily available ontologies for fine-grained and standardized annotation. To make profile-based k-nearest-neighbor search and the 2D-visualization of the metagenome universe available to the research community, we included the proposed methods in our CoMet-Universe server for comparative metagenome analysis. PMID:25026170

  15. SeeGH--a software tool for visualization of whole genome array comparative genomic hybridization data.

    PubMed

    Chi, Bryan; DeLeeuw, Ronald J; Coe, Bradley P; MacAulay, Calum; Lam, Wan L

    2004-02-09

    Array comparative genomic hybridization (CGH) is a technique that detects copy number differences in DNA segments. Complete sequencing of the human genome and the development of an array representing a tiling set of tens of thousands of DNA segments spanning the entire human genome have made high resolution copy number analysis throughout the genome possible. Since array CGH provides a signal ratio for each DNA segment, visualization requires the reassembly of individual data points into chromosome profiles. We have developed a visualization tool for displaying whole genome array CGH data in the context of chromosomal location. SeeGH is an application that translates spot signal ratio data from array CGH experiments into displays of high resolution chromosome profiles. Data are imported from a simple tab delimited text file produced by standard microarray image analysis software. SeeGH processes the signal ratio data and graphically displays it in a conventional CGH karyotype diagram with the added features of magnification and DNA segment annotation. In this process, SeeGH imports the data into a database, calculates the average ratio and standard deviation for each replicate spot, and links them to chromosome regions for graphical display. Once the data are displayed, users have the option of hiding or flagging DNA segments based on user-defined criteria, and can retrieve annotation information such as clone name, NCBI sequence accession number, ratio, base pair position on the chromosome, and standard deviation. SeeGH represents a novel software tool for viewing and analyzing array CGH data. The software gives users the ability to view the data in an overall genomic view as well as to magnify specific chromosomal regions, facilitating the precise localization of genetic alterations. SeeGH is easily installed and runs on Microsoft Windows 2000 or later environments.
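
    The replicate-averaging step of the import can be sketched with pandas: group spots by clone and position, then take the mean ratio and standard deviation for display. The column names below are assumptions for illustration, not SeeGH's actual schema.

    ```python
    import pandas as pd
    from io import StringIO

    # Tab-delimited spot data as exported by microarray image analysis
    # software; column names here are assumed for illustration.
    raw = StringIO(
        "clone\tchrom\tstart\tratio\n"
        "RP11-1A1\tchr1\t100000\t1.02\n"
        "RP11-1A1\tchr1\t100000\t0.98\n"
        "RP11-2B2\tchr1\t250000\t1.55\n"
        "RP11-2B2\tchr1\t250000\t1.49\n"
    )
    spots = pd.read_csv(raw, sep="\t")

    # Average ratio and standard deviation over replicate spots per clone,
    # keeping the chromosome position for plotting against the karyotype.
    profile = (spots.groupby(["clone", "chrom", "start"])["ratio"]
                    .agg(["mean", "std"])
                    .reset_index()
                    .sort_values(["chrom", "start"]))
    print(profile)
    ```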

  16. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    PubMed

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a 2nd sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  17. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

    Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogenous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999

  18. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation.

    PubMed

    Bolleman, Jerven T; Mungall, Christopher J; Strozzi, Francesco; Baran, Joachim; Dumontier, Michel; Bonnal, Raoul J P; Buels, Robert; Hoehndorf, Robert; Fujisawa, Takatomo; Katayama, Toshiaki; Cock, Peter J A

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. When using Semantic Web technologies to query biological annotations, there was previously no standard that described this potentially complex location information as subject-predicate-object triples. We have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned "omics" areas. Using the same data format to represent sequence positions independently of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe - and potentially merge - sequence annotations from multiple sources. Data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.
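
    A minimal example of the triple pattern FALDO describes, written with rdflib: a feature's region has begin and end positions, each an exact, stranded position anchored to a reference sequence. The example.org URIs are made up; the faldo: terms follow the ontology's published namespace, though the exact term choice here should be checked against the ontology for real use.

    ```python
    from rdflib import Graph, Namespace, Literal
    from rdflib.namespace import RDF, XSD

    FALDO = Namespace("http://biohackathon.org/resource/faldo#")
    EX = Namespace("http://example.org/")

    g = Graph()
    g.bind("faldo", FALDO)
    g.bind("ex", EX)

    # A gene feature located at positions 100-230 on the forward strand
    # of an example reference sequence (all example.org URIs are made up).
    region, begin, end = EX.geneA_loc, EX.geneA_b, EX.geneA_e
    g.add((EX.geneA, FALDO.location, region))
    g.add((region, RDF.type, FALDO.Region))
    g.add((region, FALDO.begin, begin))
    g.add((region, FALDO.end, end))
    for node, pos in ((begin, 100), (end, 230)):
        g.add((node, RDF.type, FALDO.ExactPosition))
        g.add((node, RDF.type, FALDO.ForwardStrandPosition))
        g.add((node, FALDO.position, Literal(pos, datatype=XSD.integer)))
        g.add((node, FALDO.reference, EX.chr1))

    print(g.serialize(format="turtle"))
    ```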

  19. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation

    DOE PAGES

    Bolleman, Jerven T.; Mungall, Christopher J.; Strozzi, Francesco; ...

    2016-06-13

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. When using Semantic Web technologies to query biological annotations, there was previously no standard that described this potentially complex location information as subject-predicate-object triples. In this paper, we have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned “omics” areas. Using the same data format to represent sequence positions independently of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe – and potentially merge – sequence annotations from multiple sources. Finally, data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  20. FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolleman, Jerven T.; Mungall, Christopher J.; Strozzi, Francesco

    Nucleotide and protein sequence feature annotations are essential to understand biology on the genomic, transcriptomic, and proteomic level. When using Semantic Web technologies to query biological annotations, there was previously no standard that described this potentially complex location information as subject-predicate-object triples. In this paper, we have developed an ontology, the Feature Annotation Location Description Ontology (FALDO), to describe the positions of annotated features on linear and circular sequences. FALDO can be used to describe nucleotide features in sequence records, protein annotations, and glycan binding sites, among other features in coordinate systems of the aforementioned “omics” areas. Using the same data format to represent sequence positions independently of file formats allows us to integrate sequence data from multiple sources and data types. The genome browser JBrowse is used to demonstrate accessing multiple SPARQL endpoints to display genomic feature annotations, as well as protein annotations from UniProt mapped to genomic locations. Our ontology allows users to uniformly describe – and potentially merge – sequence annotations from multiple sources. Finally, data sources using FALDO can prospectively be retrieved using federated SPARQL queries against public SPARQL endpoints and/or local private triple stores.

  1. Automatic topics segmentation for TV news video

    NASA Astrophysics Data System (ADS)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identifying the programs in a TV stream in two main steps. First, a reference catalogue of video features for visual jingles is built: we use the features that characterize instances of the same program type to identify the different types of programs in the television stream, with the video features representing the visual invariants of each jingle through appropriate automatic descriptors for each television program. Second, programs in the television stream are identified by examining the similarity of the video signal to the visual jingles in the catalogue. The main idea of the identification process is to compare the visual similarity of the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.
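
    The matching step described can be sketched as nearest-neighbour retrieval against the jingle catalogue: pool a segment's frame descriptors and return the closest reference. Everything below (the mean pooling, Euclidean distance, names and dimensions) is an illustrative assumption, not the paper's exact matching scheme.

    ```python
    import numpy as np

    def identify_segment(frame_descriptors, catalogue):
        """Label a stream segment by its closest visual-jingle descriptor.

        frame_descriptors: n x d descriptors extracted from the segment.
        catalogue: dict mapping program name -> d-dim jingle descriptor.
        """
        query = frame_descriptors.mean(axis=0)        # pool the segment
        names = list(catalogue)
        refs = np.stack([catalogue[n] for n in names])
        dists = np.linalg.norm(refs - query, axis=1)  # distance to each jingle
        return names[int(np.argmin(dists))], float(dists.min())

    rng = np.random.default_rng(0)
    catalogue = {"news": rng.random(64), "weather": rng.random(64)}
    segment = catalogue["news"] + 0.05 * rng.random((30, 64))  # 30 noisy frames
    print(identify_segment(segment, catalogue))                # ('news', ...)
    ```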

  2. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  3. Conjunctive Coding of Complex Object Features

    PubMed Central

    Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.

    2016-01-01

    Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583

  4. Temporality of Features in Near-Death Experience Narratives

    PubMed Central

    Martial, Charlotte; Cassol, Héléna; Antonopoulos, Georgios; Charlier, Thomas; Heros, Julien; Donneau, Anne-Françoise; Charland-Verville, Vanessa; Laureys, Steven

    2017-01-01

    Background: After an occurrence of a Near-Death Experience (NDE), Near-Death Experiencers (NDErs) usually report extremely rich and detailed narratives. Phenomenologically, an NDE can be described as a set of distinguishable features. Some authors have proposed regular patterns of NDEs; however, the actual temporal sequence of NDE core features remains a little-explored area. Objectives: The aim of the present study was to investigate the frequency distribution of these features (globally and according to the position of features in narratives) as well as the most frequently reported temporal sequences of features. Methods: We collected 154 French freely expressed written NDE narratives (i.e., Greyson NDE scale total score ≥ 7/32). A text analysis was conducted on all narratives in order to infer the temporal ordering and frequency distribution of NDE features. Results: Our analyses highlighted the following most frequently reported sequence of consecutive NDE features: Out-of-Body Experience, Experiencing a tunnel, Seeing a bright light, Feeling of peace. Yet, this sequence was encountered in a very limited number of NDErs. Conclusion: These findings may suggest that NDE temporal sequences can vary across NDErs. Exploring associations and relationships among features encountered during NDEs may complete the rigorous definition and scientific comprehension of the phenomenon. PMID:28659779

  5. Temporality of Features in Near-Death Experience Narratives.

    PubMed

    Martial, Charlotte; Cassol, Héléna; Antonopoulos, Georgios; Charlier, Thomas; Heros, Julien; Donneau, Anne-Françoise; Charland-Verville, Vanessa; Laureys, Steven

    2017-01-01

    Background: After an occurrence of a Near-Death Experience (NDE), Near-Death Experiencers (NDErs) usually report extremely rich and detailed narratives. Phenomenologically, an NDE can be described as a set of distinguishable features. Some authors have proposed regular patterns of NDEs; however, the actual temporal sequence of NDE core features remains a little-explored area. Objectives: The aim of the present study was to investigate the frequency distribution of these features (globally and according to the position of features in narratives) as well as the most frequently reported temporal sequences of features. Methods: We collected 154 French freely expressed written NDE narratives (i.e., Greyson NDE scale total score ≥ 7/32). A text analysis was conducted on all narratives in order to infer the temporal ordering and frequency distribution of NDE features. Results: Our analyses highlighted the following most frequently reported sequence of consecutive NDE features: Out-of-Body Experience, Experiencing a tunnel, Seeing a bright light, Feeling of peace. Yet, this sequence was encountered in a very limited number of NDErs. Conclusion: These findings may suggest that NDE temporal sequences can vary across NDErs. Exploring associations and relationships among features encountered during NDEs may complete the rigorous definition and scientific comprehension of the phenomenon.

  6. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the timecourse and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
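
    A linear mixed-effects analysis of this kind can be sketched with statsmodels; the file name, column names, and model formula below are hypothetical placeholders for the study's actual predictors.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical single-trial data: one row per word per participant, with
    # the ERP amplitude in a time window and per-word feature-type counts.
    trials = pd.read_csv("single_trial_erps.csv")  # hypothetical file

    # Fixed effects for each knowledge type; random intercepts per participant.
    model = smf.mixedlm(
        "erp_amplitude ~ total_features + imageability + visual_motion"
        " + colour + visual_form + smell + taste + sound + function_features",
        data=trials,
        groups=trials["participant"],
    )
    print(model.fit().summary())
    ```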

  7. Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

    PubMed Central

    Wu, Lin

    2017-01-01

    With the popular use of geotagged images, more and more research effort has been placed on geographical scene classification. In geographical scene classification, valid spatial feature selection can significantly boost the final performance. Bag of visual words (BoVW) can do well in selecting features for geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets which contain a variety of scene categories. PMID:28706534
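
    A minimal bag-of-visual-words pipeline, assuming local descriptors have already been extracted (in the paper's case, from a CNN), might look like this; the vocabulary size and function names are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descriptor_sets, k=256):
        """Cluster local descriptors (here: CNN activations pooled from
        image patches) into k visual words."""
        return KMeans(n_clusters=k, n_init=4, random_state=0).fit(
            np.vstack(descriptor_sets))

    def encode(descriptors, vocabulary):
        """Represent one image as an L1-normalized histogram of its
        nearest visual words."""
        words = vocabulary.predict(descriptors)
        hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)
    ```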

  8. Method for the reduction of image content redundancy in large image databases

    DOEpatents

    Tobin, Kenneth William; Karnowski, Thomas P.

    2010-03-02

    A method of increasing information content for content-based image retrieval (CBIR) systems includes the steps of providing a CBIR database, the database having an index for a plurality of stored digital images using a plurality of feature vectors, the feature vectors corresponding to distinct descriptive characteristics of the images. A visual similarity parameter value is calculated based on a degree of visual similarity between feature vectors of an incoming image being considered for entry into the database and feature vectors associated with the most similar of the stored images. Based on said visual similarity parameter value, it is determined whether to store, or how long to store, the feature vectors associated with the incoming image in the database.
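
    A toy sketch of the storage decision, with a distance-based similarity standing in for the patent's visual similarity parameter; the names and threshold are illustrative.

    ```python
    import numpy as np

    def should_store(incoming, stored, redundancy_threshold=0.95):
        """Admit an incoming image's feature vector only if it is not
        redundant with the most similar stored vector."""
        if not stored:
            return True
        nearest = min(np.linalg.norm(incoming - v) for v in stored)
        similarity = 1.0 / (1.0 + nearest)  # maps distance into (0, 1]
        return similarity < redundancy_threshold  # too similar -> skip
    ```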

  9. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

    PubMed Central

    Wang, Xin; Deng, Zhongliang

    2017-01-01

    In order to recognize indoor scenarios, we extract image features for detecting objects; however, computers can make some unexpected mistakes. After visualizing the histogram of oriented gradients (HOG) features, we find that the world through the eyes of a computer is indeed different from that through human eyes, which helps researchers see the reasons that cause a computer to make errors. Additionally, according to the visualization, we notice that the HOG features can capture rich texture information. However, a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing the background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, named HOG–PCA (HOGP), is constructed by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualized images of the HOGP feature are close to the observations made by human eyes, which is better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
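
    Under the assumption that the fusion amounts to concatenating HOG texture features with PCA-compressed colour information (the paper's "deep fusion" may differ), a sketch using scikit-image and scikit-learn:

    ```python
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog
    from sklearn.decomposition import PCA

    def hogp_descriptors(images, n_colour_components=16):
        """Fuse HOG texture features with PCA-compressed colour
        information. `images` is a list of same-sized RGB arrays, and
        there must be more images than colour components."""
        hogs = np.array([hog(rgb2gray(img), pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for img in images])
        colours = np.array([img.reshape(-1) for img in images], dtype=float)
        colour_pcs = PCA(n_components=n_colour_components).fit_transform(colours)
        return np.hstack([hogs, colour_pcs])  # simple concatenation fusion
    ```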

  10. The effects of auditory and visual cues on timing synchronicity for robotic rehabilitation.

    PubMed

    English, Brittney A; Howard, Ayanna M

    2017-07-01

    In this paper, we explore how the integration of auditory and visual cues can help teach the timing of motor skills for the purpose of motor function rehabilitation. We conducted a study using Amazon's Mechanical Turk in which 106 participants played a virtual therapy game requiring wrist movements. To validate that our results would translate to trends that could also be observed during robotic rehabilitation sessions, we recreated this experiment with 11 participants using a robotic wrist rehabilitation system as means to control the therapy game. During interaction with the therapy game, users were asked to learn and reconstruct a tapping sequence as defined by musical notes flashing on the screen. Participants were divided into 2 test groups: (1) control: participants only received visual cues to prompt them on the timing sequence, and (2) experimental: participants received both visual and auditory cues to prompt them on the timing sequence. To evaluate performance, the timing and length of the sequence were measured. Performance was determined by calculating the number of trials needed before the participant was able to master the specific aspect of the timing task. In the virtual experiment, the group that received visual and auditory cues was able to master all aspects of the timing task faster than the visual cue only group with p-values < 0.05. This trend was also verified for participants using the robotic arm exoskeleton in the physical experiment.

  11. Spatial Sequences, but Not Verbal Sequences, Are Vulnerable to General Interference during Retention in Working Memory

    ERIC Educational Resources Information Center

    Morey, Candice C.; Miron, Monica D.

    2016-01-01

    Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative…

  12. Isolation of a hyperthermophilic archaeum predicted by in situ RNA analysis.

    PubMed

    Huber, R; Burggraf, S; Mayer, T; Barns, S M; Rossnagel, P; Stetter, K O

    1995-07-06

    A variety of hyperthermophilic bacteria and archaea have been isolated from high-temperature environments by plating and serial dilutions. However, these techniques allow only the small percentage of organisms able to form colonies, or those that are predominant within environmental samples, to be obtained in pure culture. Recently, in situ 16S ribosomal RNA analyses of samples from the Obsidian hot pool at Yellowstone National Park, Wyoming, revealed a variety of archaeal sequences, which were all different from those of previously isolated species. This suggests substantial diversity of archaea with so far unknown morphological, physiological and biochemical features, which may play an important part within high-temperature ecosystems. Here we describe a procedure to obtain pure cultures of unknown organisms harbouring specific 16S rRNA sequences identified previously within the environment. It combines visual recognition of single cells by phylogenetic staining and cloning by 'optical tweezers'. Our result validates polymerase chain reaction data on the existence of large archaeal communities.

  13. Android application for handwriting segmentation using PerTOHS theory

    NASA Astrophysics Data System (ADS)

    Akouaydi, Hanen; Njah, Sourour; Alimi, Adel M.

    2017-03-01

    The paper handles the problem of segmentation of handwriting on mobile devices. Many applications have been developed to facilitate handwriting recognition, replacing the limited number of keys on a keyboard with a drawing space for writing. In this paper, we present a mobile application for handwriting segmentation that uses PerTOHS theory (Perceptual Theory of On-line Handwriting Segmentation), in which handwriting is defined as a sequence of elementary and perceptual codes. In fact, the theory analyzes the written script and tries to learn the visual code features of the handwriting in order to generate new ones via the generated perceptual sequences. To obtain this classification, we apply the Beta-elliptic model, a fuzzy detector, and genetic algorithms to extract the EPCs (Elementary Perceptual Codes) and GPCs (Global Perceptual Codes) that compose the script. Finally, we present our Android application M-PerTOHS for segmentation of handwriting.

  14. Visual Features Involving Motion Seen from Airport Control Towers

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Liston, Dorion

    2010-01-01

    Visual motion cues are used by tower controllers to support both visual and anticipated separation. Some of these cues are tabulated as part of the overall set of visual features used in towers to separate aircraft. An initial analysis of one motion cue, landing deceleration, is provided as a basis for evaluating how controllers detect and use it for spacing aircraft on or near the surface. Understanding cues like this one will help determine whether they can be safely used in a remote/virtual tower in which their presentation may be visually degraded.

  15. Freiburg RNA tools: a central online resource for RNA-focused research and teaching.

    PubMed

    Raden, Martin; Ali, Syed M; Alkhnbashi, Omer S; Busch, Anke; Costa, Fabrizio; Davis, Jason A; Eggenhofer, Florian; Gelhausen, Rick; Georg, Jens; Heyne, Steffen; Hiller, Michael; Kundu, Kousik; Kleinkauf, Robert; Lott, Steffen C; Mohamed, Mostafa M; Mattheis, Alexander; Miladi, Milad; Richter, Andreas S; Will, Sebastian; Wolff, Joachim; Wright, Patrick R; Backofen, Rolf

    2018-05-21

    The Freiburg RNA tools webserver is a well-established online resource for RNA-focused research. It provides a unified user interface and comprehensive result visualization for efficient command line tools. The webserver includes RNA-RNA interaction prediction (IntaRNA, CopraRNA, metaMIR), sRNA homology search (GLASSgo), sequence-structure alignments (LocARNA, MARNA, CARNA, ExpaRNA), CRISPR repeat classification (CRISPRmap), sequence design (antaRNA, INFO-RNA, SECISDesign), structure aberration evaluation of point mutations (RaSE), RNA/protein-family model visualization (CMV), and other methods. Open education resources offer interactive visualizations of RNA structure and RNA-RNA interaction prediction as well as basic and advanced sequence alignment algorithms. The services are freely available at http://rna.informatik.uni-freiburg.de.

  16. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  17. Cortical Neural Synchronization Underlies Primary Visual Consciousness of Qualia: Evidence from Event-Related Potentials

    PubMed Central

    Babiloni, Claudio; Marzano, Nicola; Soricelli, Andrea; Cordone, Susanna; Millán-Calenti, José Carlos; Del Percio, Claudio; Buján, Ana

    2016-01-01

    This article reviews three experiments on event-related potentials (ERPs) testing the hypothesis that primary visual consciousness (stimulus self-report) is related to enhanced cortical neural synchronization as a function of stimulus features. ERP peak latency and sources were compared between “seen” trials and “not seen” trials, respectively related and unrelated to the primary visual consciousness. Three salient features of visual stimuli were considered (visuospatial, emotional face expression, and written words). Results showed the typical visual ERP components in both “seen” and “not seen” trials. There was no statistical difference in the ERP peak latencies between the “seen” and “not seen” trials, suggesting a similar timing of the cortical neural synchronization regardless the primary visual consciousness. In contrast, ERP sources showed differences between “seen” and “not seen” trials. For the visuospatial stimuli, the primary consciousness was related to higher activity in dorsal occipital and parietal sources at about 400 ms post-stimulus. For the emotional face expressions, there was greater activity in parietal and frontal sources at about 180 ms post-stimulus. For the written letters, there was higher activity in occipital, parietal and temporal sources at about 230 ms post-stimulus. These results hint that primary visual consciousness is associated with an enhanced cortical neural synchronization having entirely different spatiotemporal characteristics as a function of the features of the visual stimuli and possibly, the relative qualia (i.e., visuospatial, face expression, and words). In this framework, the dorsal visual stream may be synchronized in association with the primary consciousness of visuospatial and emotional face contents. Analogously, both dorsal and ventral visual streams may be synchronized in association with the primary consciousness of linguistic contents. In this line of reasoning, the ensemble of the cortical neural networks underpinning the single visual features would constitute a sort of multi-dimensional palette of colors, shapes, regions of the visual field, movements, emotional face expressions, and words. The synchronization of one or more of these cortical neural networks, each with its peculiar timing, would produce the primary consciousness of one or more of the visual features of the scene. PMID:27445750

  18. Discriminative prediction of mammalian enhancers from DNA sequence

    PubMed Central

    Lee, Dongwon; Karchin, Rachel; Beer, Michael A.

    2011-01-01

    Accurately predicting regulatory sequences and enhancers in entire genomes is an important but difficult problem, especially in large vertebrate genomes. With the advent of ChIP-seq technology, experimental detection of genome-wide EP300/CREBBP bound regions provides a powerful platform to develop predictive tools for regulatory sequences and to study their sequence properties. Here, we develop a support vector machine (SVM) framework which can accurately identify EP300-bound enhancers using only genomic sequence and an unbiased set of general sequence features. Moreover, we find that the predictive sequence features identified by the SVM classifier reveal biologically relevant sequence elements enriched in the enhancers, but we also identify other features that are significantly depleted in enhancers. The predictive sequence features are evolutionarily conserved and spatially clustered, providing further support of their functional significance. Although our SVM is trained on experimental data, we also predict novel enhancers and show that these putative enhancers are significantly enriched in both ChIP-seq signal and DNase I hypersensitivity signal in the mouse brain and are located near relevant genes. Finally, we present results of comparisons between other EP300/CREBBP data sets using our SVM and uncover sequence elements enriched and/or depleted in the different classes of enhancers. Many of these sequence features play a role in specifying tissue-specific or developmental-stage-specific enhancer activity, but our results indicate that some features operate in a general or tissue-independent manner. In addition to providing a high confidence list of enhancer targets for subsequent experimental investigation, these results contribute to our understanding of the general sequence structure of vertebrate enhancers. PMID:21875935
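
    In the spirit of this framework, a k-mer-count SVM can be sketched with scikit-learn; the toy sequences, labels, and k = 6 are placeholders, not the paper's data or exact feature set.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy placeholder data: sequences labelled 1 if they overlap an
    # EP300-bound region, 0 for matched background.
    seqs = ["ACGTACGTAGGCTAGG", "TTGACGTCAATCGGAT"]
    labels = [1, 0]

    # k-mer counts (k=6) as general, unbiased sequence features feeding
    # a linear SVM classifier.
    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(6, 6), lowercase=False),
        LinearSVC(),
    )
    model.fit(seqs, labels)
    ```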

  19. Feature Binding in Visual Working Memory Evaluated by Type Identification Paradigm

    ERIC Educational Resources Information Center

    Saiki, Jun; Miyatsuji, Hirofumi

    2007-01-01

    Memory for feature binding comprises a key ingredient in coherent object representations. Previous studies have been equivocal about human capacity for objects in the visual working memory. To evaluate memory for feature binding, a type identification paradigm was devised and used with a multiple-object permanence tracking task. Using objects…

  20. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    PubMed

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  1. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

    This paper presents a framework to analyse visual patterns in a collection of medical images in a two-stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using a cluster analysis. The experimental evaluation was conducted on a histopathology image collection, and the results showed clear relationships between visual patterns and semantic concepts that, in addition, are easy to interpret and understand.
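
    The correlation analysis in the second stage can be sketched as follows, assuming each image is already encoded as a visual-word histogram and each concept is a binary label; all names are illustrative.

    ```python
    import numpy as np

    def concept_correlations(histograms, labels):
        """Pearson correlation of each visual-word dimension with a binary
        semantic concept across the collection; returns word indices sorted
        by relevance along with the correlations themselves."""
        X = np.asarray(histograms, dtype=float)   # (n_images, n_words)
        y = np.asarray(labels, dtype=float)       # (n_images,)
        Xc, yc = X - X.mean(axis=0), y - y.mean()
        r = (Xc * yc[:, None]).sum(axis=0) / (
            np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum()) + 1e-12)
        return np.argsort(-np.abs(r)), r
    ```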

  2. Facial recognition using multisensor images based on localized kernel eigen spaces.

    PubMed

    Gundimada, Satyanadh; Asari, Vijayan K

    2009-06-01

    A feature selection technique along with an information fusion procedure for improving the recognition accuracy of a visual and thermal image-based facial recognition system is presented in this paper. A novel modular kernel eigenspaces approach is developed and implemented on the phase congruency feature maps extracted from the visual and thermal images individually. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are then projected into higher dimensional spaces using kernel methods. The proposed localized nonlinear feature selection procedure helps to overcome the bottlenecks of illumination variations, partial occlusions, expression variations and variations due to temperature changes that affect the visual and thermal face recognition techniques. AR and Equinox databases are used for experimentation and evaluation of the proposed technique. The proposed feature selection procedure has greatly improved the recognition accuracy for both the visual and thermal images when compared to conventional techniques. Also, a decision level fusion methodology is presented which along with the feature selection procedure has outperformed various other face recognition techniques in terms of recognition accuracy.

  3. Selection-for-action in visual search.

    PubMed

    Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold

    2005-01-01

    Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. Specifically, in Experiment 1 we were interested in finding out whether the effect is capacity demanding; therefore, we manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement, i.e., the magnitude of the effect decreased for a larger set size. Consequently, in Experiment 2, we investigated whether the enhancement effect occurs only at the level of the behaviorally relevant feature or at a level common to different features. Therefore we manipulated the discriminability of the behaviorally neutral feature (color). Again, results showed that this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

  4. Drawing the line between constituent structure and coherence relations in visual narratives

    PubMed Central

    Cohn, Neil; Bender, Patrick

    2016-01-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of Visual Narrative Grammar posits that hierarchic “grammatical” structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a “segmentation task” where participants drew lines between images in order to divide them into sub-episodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants’ divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. PMID:27709982

  5. Drawing the line between constituent structure and coherence relations in visual narratives.

    PubMed

    Cohn, Neil; Bender, Patrick

    2017-02-01

    Theories of visual narrative understanding have often focused on the changes in meaning across a sequence, like shifts in characters, spatial location, and causation, as cues for breaks in the structure of a discourse. In contrast, the theory of visual narrative grammar posits that hierarchic "grammatical" structures operate at the discourse level using categorical roles for images, which may or may not co-occur with shifts in coherence. We therefore examined the relationship between narrative structure and coherence shifts in the segmentation of visual narrative sequences using a "segmentation task" where participants drew lines between images in order to divide them into subepisodes. We used regressions to analyze the influence of the expected constituent structure boundary, narrative categories, and semantic coherence relationships on the segmentation of visual narrative sequences. Narrative categories were a stronger predictor of segmentation than linear coherence relationships between panels, though both influenced participants' divisions. Altogether, these results support the theory that meaningful sequential images use a narrative grammar that extends above and beyond linear semantic shifts between discourse units. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
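
    The coarse-to-fine idea reduces to running each controller until its own convergence test passes before handing off to the next, finer one. The sketch below assumes a hypothetical robot/controller interface and is only meant to show the control flow, not the paper's implementation.

    ```python
    def move_to_grasp(robot, controllers):
        """Run a refining sequence of controllers: each stage tightens the
        robot's configuration relative to the target before the next,
        more precise controller takes over."""
        for controller in controllers:  # e.g., [navigate, approach, grasp]
            while not controller.converged(robot.state()):
                robot.execute(controller.command(robot.state()))
        return robot.state()
    ```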

  7. Discovery of Extended Main-sequence Turnoffs in Four Young Massive Clusters in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Li, Chengyuan; de Grijs, Richard; Deng, Licai; Milone, Antonino P.

    2017-08-01

    An increasing number of young massive clusters (YMCs) in the Magellanic Clouds have been found to exhibit bimodal or extended main sequences (MSs) in their color-magnitude diagrams (CMDs). These features are usually interpreted in terms of a coeval stellar population with different stellar rotational rates, where the blue and red MS stars are populated by non- (or slowly) and rapidly rotating stellar populations, respectively. However, some studies have shown that an age spread of several million years is required to reproduce the observed wide turnoff regions in some YMCs. Here we present the ultraviolet-visual CMDs of four Large and Small Magellanic Cloud YMCs, NGC 330, NGC 1805, NGC 1818, and NGC 2164, based on high-precision Hubble Space Telescope photometry. We show that they all exhibit extended main-sequence turnoffs (MSTOs). The importance of age spreads and stellar rotation in reproducing the observations is investigated. The observed extended MSTOs cannot be explained by stellar rotation alone. Adopting an age spread of 35-50 Myr can alleviate this difficulty. We conclude that stars in these clusters are characterized by ranges in both their ages and rotation properties, but the origin of the age spread in these clusters remains unknown.

  8. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  9. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  10. Bolbase: a comprehensive genomics database for Brassica oleracea.

    PubMed

    Yu, Jingyin; Zhao, Meixia; Wang, Xiaowu; Tong, Chaobo; Huang, Shunmou; Tehrim, Sadia; Liu, Yumei; Hua, Wei; Liu, Shengyi

    2013-09-30

    Brassica oleracea is a morphologically diverse species in the family Brassicaceae and contains a group of nutrition-rich vegetable crops, including common heading cabbage, cauliflower, broccoli, kohlrabi, kale, and Brussels sprouts. This diversity, along with its phylogenetic membership in a group of three diploid and three tetraploid species, and the recent availability of genome sequences within Brassica, provides an unprecedented opportunity to study intra- and inter-species divergence and evolution in this species and its close relatives. We have developed a comprehensive database, Bolbase, which provides access to the B. oleracea genome data and comparative genomics information. The whole genome of B. oleracea is available, including nine fully assembled chromosomes and 1,848 scaffolds, with 45,758 predicted genes, 13,382 transposable elements, and 3,581 non-coding RNAs. Comparative genomics information is available, including syntenic regions among B. oleracea, Brassica rapa and Arabidopsis thaliana, synonymous (Ks) and non-synonymous (Ka) substitution rates between orthologous gene pairs, gene families or clusters, and differences in quantity, category, and distribution of transposable elements on chromosomes. Bolbase provides useful search and data mining tools, including a keyword search, a local BLAST server, and a customized GBrowse tool, which can be used to extract annotations of genome components, identify similar sequences and visualize syntenic regions among species. Users can download all genomic data and explore comparative genomics in a highly visual setting. Bolbase is the first resource platform for the B. oleracea genome and for genomic comparisons with its relatives, and thus it will help the research community to better study the function and evolution of Brassica genomes as well as enhance molecular breeding research. This database will be updated regularly with new features, improvements to genome annotation, and new genomic sequences as they become available. Bolbase is freely available at http://ocri-genomics.org/bolbase.

  11. Evolutionary changes of multiple visual pigment genes in the complete genome of Pacific bluefin tuna

    PubMed Central

    Nakamura, Yoji; Mori, Kazuki; Saitoh, Kenji; Oshima, Kenshiro; Mekuchi, Miyuki; Sugaya, Takuma; Shigenobu, Yuya; Ojima, Nobuhiko; Muta, Shigeru; Fujiwara, Atushi; Yasuike, Motoshige; Oohara, Ichiro; Hirakawa, Hideki; Chowdhury, Vishwajit Sur; Kobayashi, Takanori; Nakajima, Kazuhiro; Sano, Motohiko; Wada, Tokio; Tashiro, Kosuke; Ikeo, Kazuho; Hattori, Masahira; Kuhara, Satoru; Gojobori, Takashi; Inouye, Kiyoshi

    2013-01-01

    Tunas are migratory fishes in offshore habitats and top predators with unique features. Despite their ecological importance and high market values, the open-ocean lifestyle of tuna, in which effective sensing systems such as color vision are required for capture of prey, has been poorly understood. To elucidate the genetic and evolutionary basis of optic adaptation of tuna, we determined the genome sequence of the Pacific bluefin tuna (Thunnus orientalis), using next-generation sequencing technology. A total of 26,433 protein-coding genes were predicted from 16,802 assembled scaffolds. From these, we identified five common fish visual pigment genes: red-sensitive (middle/long-wavelength sensitive; M/LWS), UV-sensitive (short-wavelength sensitive 1; SWS1), blue-sensitive (SWS2), rhodopsin (RH1), and green-sensitive (RH2) opsin genes. Sequence comparison revealed that tuna's RH1 gene has an amino acid substitution that causes a short-wave shift in the absorption spectrum (i.e., blue shift). Pacific bluefin tuna has at least five RH2 paralogs, the most among studied fishes; four of the proteins encoded may be tuned to blue light at the amino acid level. Moreover, phylogenetic analysis suggested that gene conversions have occurred in each of the SWS2 and RH2 loci in a short period. Thus, Pacific bluefin tuna has undergone evolutionary changes in three genes (RH1, RH2, and SWS2), which may have contributed to detecting blue-green contrast and measuring the distance to prey in the blue-pelagic ocean. These findings provide basic information on behavioral traits of predatory fish and, thereby, could help to improve the technology to culture such fish in captivity for resource management. PMID:23781100

  12. Evolutionary changes of multiple visual pigment genes in the complete genome of Pacific bluefin tuna.

    PubMed

    Nakamura, Yoji; Mori, Kazuki; Saitoh, Kenji; Oshima, Kenshiro; Mekuchi, Miyuki; Sugaya, Takuma; Shigenobu, Yuya; Ojima, Nobuhiko; Muta, Shigeru; Fujiwara, Atushi; Yasuike, Motoshige; Oohara, Ichiro; Hirakawa, Hideki; Chowdhury, Vishwajit Sur; Kobayashi, Takanori; Nakajima, Kazuhiro; Sano, Motohiko; Wada, Tokio; Tashiro, Kosuke; Ikeo, Kazuho; Hattori, Masahira; Kuhara, Satoru; Gojobori, Takashi; Inouye, Kiyoshi

    2013-07-02

    Tunas are migratory fishes in offshore habitats and top predators with unique features. Despite their ecological importance and high market values, the open-ocean lifestyle of tuna, in which effective sensing systems such as color vision are required for capture of prey, has been poorly understood. To elucidate the genetic and evolutionary basis of optic adaptation of tuna, we determined the genome sequence of the Pacific bluefin tuna (Thunnus orientalis), using next-generation sequencing technology. A total of 26,433 protein-coding genes were predicted from 16,802 assembled scaffolds. From these, we identified five common fish visual pigment genes: red-sensitive (middle/long-wavelength sensitive; M/LWS), UV-sensitive (short-wavelength sensitive 1; SWS1), blue-sensitive (SWS2), rhodopsin (RH1), and green-sensitive (RH2) opsin genes. Sequence comparison revealed that tuna's RH1 gene has an amino acid substitution that causes a short-wave shift in the absorption spectrum (i.e., blue shift). Pacific bluefin tuna has at least five RH2 paralogs, the most among studied fishes; four of the proteins encoded may be tuned to blue light at the amino acid level. Moreover, phylogenetic analysis suggested that gene conversions have occurred in each of the SWS2 and RH2 loci in a short period. Thus, Pacific bluefin tuna has undergone evolutionary changes in three genes (RH1, RH2, and SWS2), which may have contributed to detecting blue-green contrast and measuring the distance to prey in the blue-pelagic ocean. These findings provide basic information on behavioral traits of predatory fish and, thereby, could help to improve the technology to culture such fish in captivity for resource management.

  13. Automatic internal crack detection from a sequence of infrared images with a triple-threshold Canny edge detector

    NASA Astrophysics Data System (ADS)

    Wang, Gaochao; Tse, Peter W.; Yuan, Maodan

    2018-02-01

    Visual inspection and assessment of the condition of metal structures are essential for safety. Pulse thermography produces visible infrared images, which have been widely applied to detect and characterize defects in structures and materials. When active thermography, a non-destructive testing tool, is applied, the need for considerable manual checking can be avoided. However, detecting an internal crack with active thermography remains difficult, since the crack is usually invisible in the collected sequence of infrared images, which makes automatic detection of internal cracks even harder. In addition, the detection of an internal crack can be hindered by a complicated inspection environment. To put forward a robust and automatic visual inspection method, a computer vision-based thresholding method is proposed. In this paper, the image signals are a sequence of infrared images collected from an experimental setup with a thermal camera and two flash lamps as stimulus. The contrast of the pixels in each frame is enhanced by the Canny operator and then reconstructed by a triple-threshold system. Two features, the mean value in the time domain and the maximal amplitude in the frequency domain, are extracted from the reconstructed signal to help distinguish crack pixels from others. Finally, a binary image indicating the location of the internal crack is generated by a K-means clustering method. The proposed procedure has been applied to an iron pipe containing two internal cracks and surface abrasion, improving on existing computer vision-based automatic crack detection methods. In the future, the proposed method can be applied to the automatic detection of internal cracks in many infrared images in industrial settings.
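
    A rough sketch of this pipeline with OpenCV and scikit-learn, using a plain two-threshold Canny in place of the paper's triple-threshold variant; the thresholds and cluster count are illustrative.

    ```python
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def crack_map(frames):
        """frames: sequence of uint8 grayscale infrared images.
        Per-frame edge enhancement, then per-pixel temporal features
        (mean over time, peak FFT amplitude), then K-means clustering
        into crack / non-crack pixels."""
        edges = np.stack([cv2.Canny(f, 50, 150) for f in frames]).astype(float)
        mean_t = edges.mean(axis=0)                                 # time domain
        amp_f = np.abs(np.fft.rfft(edges, axis=0))[1:].max(axis=0)  # freq domain
        feats = np.stack([mean_t.ravel(), amp_f.ravel()], axis=1)
        labels = KMeans(n_clusters=2, n_init=4, random_state=0).fit_predict(feats)
        return labels.reshape(mean_t.shape)  # binary crack map
    ```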

  14. Dynamics of feature categorization.

    PubMed

    Martí, Daniel; Rinzel, John

    2013-01-01

    In visual and auditory scenes, we are able to identify shared features among sensory objects and group them according to their similarity. This grouping is preattentive and fast and is thought of as an elementary form of categorization by which objects sharing similar features are clustered in some abstract perceptual space. It is unclear what neuronal mechanisms underlie this fast categorization. Here we propose a neuromechanistic model of fast feature categorization based on the framework of continuous attractor networks. The mechanism for category formation does not rely on learning and is based on biologically plausible assumptions, for example, the existence of populations of neurons tuned to feature values, feature-specific interactions, and subthreshold-evoked responses upon the presentation of single objects. When the network is presented with a sequence of stimuli characterized by some feature, the network sums the evoked responses and provides a running estimate of the distribution of features in the input stream. If the distribution of features is structured into different components or peaks (i.e., is multimodal), recurrent excitation amplifies the response of activated neurons, and categories are singled out as emerging localized patterns of elevated neuronal activity (bumps), centered at the centroid of each cluster. The emergence of bump states through sequential, subthreshold activation and the dependence on input statistics is a novel application of attractor networks. We show that the extraction and representation of multiple categories are facilitated by the rich attractor structure of the network, which can sustain multiple stable activity patterns for a robust range of connectivity parameters compatible with cortical physiology.
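
    A minimal ring-attractor sketch of this mechanism: subthreshold Gaussian inputs are summed over the stimulus sequence, and Mexican-hat recurrence amplifies clusters of similar features into bumps. All parameter values are illustrative choices, not the paper's.

    ```python
    import numpy as np

    def categorize(stimuli, n=180, tau=10.0, dt=1.0, steps=1500):
        """Threshold-linear ring network over a circular feature space;
        returns the steady-state rate profile, with bumps centred on
        clusters of the input features."""
        prefs = np.linspace(0.0, 2 * np.pi, n, endpoint=False)

        def ring_dist(a, b):
            d = np.abs(a - b)
            return np.minimum(d, 2 * np.pi - d)

        # Mexican-hat connectivity: local excitation, broad inhibition.
        W = 1.2 * np.exp(-ring_dist(prefs[:, None], prefs[None, :]) ** 2 / 0.3) - 0.5
        # Summed subthreshold responses to the presented stimuli.
        drive = sum(0.3 * np.exp(-ring_dist(prefs, s) ** 2 / 0.1) for s in stimuli)
        r = np.zeros(n)
        for _ in range(steps):
            # Saturating rectification keeps the dynamics bounded.
            r += (dt / tau) * (-r + np.tanh(np.maximum(W @ r / n + drive, 0.0)))
        return r
    ```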

  15. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) with that of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.
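
    Search efficiency in such tasks is conventionally summarized by the reaction-time slope over display set size, which is simple to compute; the numbers in the usage line below are invented for illustration.

    ```python
    import numpy as np

    def search_slope(set_sizes, median_rts):
        """Least-squares reaction-time slope (ms per item) across display
        set sizes; flat slopes indicate efficient (feature) search, steep
        slopes inefficient (conjunction) search."""
        slope, intercept = np.polyfit(set_sizes, median_rts, 1)
        return slope, intercept

    print(search_slope([4, 8, 16], [620.0, 710.0, 930.0]))  # toy values
    ```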

  16. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  17. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  18. Shape and color conjunction stimuli are represented as bound objects in visual working memory.

    PubMed

    Luria, Roy; Vogel, Edward K

    2011-05-01

    The integrated object view of visual working memory (WM) argues that objects (rather than features) are the building block of visual WM, so that adding an extra feature to an object does not result in any extra cost to WM capacity. Alternative views have shown that complex objects consume additional WM storage capacity so that it may not be represented as bound objects. Additionally, it was argued that two features from the same dimension (i.e., color-color) do not form an integrated object in visual WM. This led some to argue for a "weak" object view of visual WM. We used the contralateral delay activity (the CDA) as an electrophysiological marker of WM capacity, to test those alternative hypotheses to the integrated object account. In two experiments we presented complex stimuli and color-color conjunction stimuli, and compared performance in displays that had one object but varying degrees of feature complexity. The results supported the integrated object account by showing that the CDA amplitude corresponded to the number of objects regardless of the number of features within each object, even for complex objects or color-color conjunction stimuli. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Neural correlates of context-dependent feature conjunction learning in visual search tasks.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Greenlee, Mark W; Tse, Peter U

    2016-06-01

    Many perceptual learning experiments show that repeated exposure to a basic visual feature such as a specific orientation or spatial frequency can modify perception of that feature, and that those perceptual changes are associated with changes in neural tuning early in visual processing. Such perceptual learning effects thus exert a bottom-up influence on subsequent stimulus processing, independent of task-demands or endogenous influences (e.g., volitional attention). However, it is unclear whether such bottom-up changes in perception can occur as more complex stimuli such as conjunctions of visual features are learned. It is not known whether changes in the efficiency with which people learn to process feature conjunctions in a task (e.g., visual search) reflect true bottom-up perceptual learning versus top-down, task-related learning (e.g., learning better control of endogenous attention). Here we show that feature conjunction learning in visual search leads to bottom-up changes in stimulus processing. First, using fMRI, we demonstrate that conjunction learning in visual search has a distinct neural signature: an increase in target-evoked activity relative to distractor-evoked activity (i.e., a relative increase in target salience). Second, we demonstrate that after learning, this neural signature is still evident even when participants passively view learned stimuli while performing an unrelated, attention-demanding task. This suggests that conjunction learning results in altered bottom-up perceptual processing of the learned conjunction stimuli (i.e., a perceptual change independent of the task). We further show that the acquired change in target-evoked activity is contextually dependent on the presence of distractors, suggesting that search array Gestalts are learned. Hum Brain Mapp 37:2319-2330, 2016. © 2016 Wiley Periodicals, Inc.

  20. Feature-based attention elicits surround suppression in feature space.

    PubMed

    Störmer, Viola S; Alvarez, George A

    2014-09-08

    It is known that focusing attention on a particular feature (e.g., the color red) facilitates the processing of all objects in the visual field containing that feature [1-7]. Here, we show that such feature-based attention not only facilitates processing but also actively inhibits processing of similar, but not identical, features globally across the visual field. We combined behavior and electrophysiological recordings of frequency-tagged potentials in human observers to measure this inhibitory surround in feature space. We found that sensory signals of an attended color (e.g., red) were enhanced, whereas sensory signals of colors similar to the target color (e.g., orange) were suppressed relative to colors more distinct from the target color (e.g., yellow). Importantly, this inhibitory effect spreads globally across the visual field, thus operating independently of location. These findings suggest that feature-based attention comprises an excitatory peak surrounded by a narrow inhibitory zone in color space to attenuate the most distracting and potentially confusable stimuli during visual perception. This selection profile is akin to what has been reported for location-based attention [8-10] and thus suggests that such center-surround mechanisms are an overarching principle of attention across different domains in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on the regional covariance matrix. For S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g., 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., the regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models, including 2D extended and pure 3D saliency models. In addition, we experimentally verified that the proposed S3D saliency map can significantly improve the accuracy of predicting the visual discomfort experienced when viewing S3D images.
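
    A region covariance descriptor of the kind referenced above summarizes an image region by the covariance of its per-pixel feature vectors, so correlations between 2D and 3D cues are captured directly. A minimal sketch (the feature stack and region are illustrative assumptions, not the paper's exact features):

        import numpy as np

        def region_covariance(feature_maps, y0, y1, x0, x1):
            """Covariance descriptor of a rectangular region.

            feature_maps: array of shape (H, W, d), one d-dimensional
            feature vector (e.g., intensity, gradients, depth) per pixel.
            Returns the d x d covariance matrix of the region's features.
            """
            region = feature_maps[y0:y1, x0:x1].reshape(-1, feature_maps.shape[2])
            return np.cov(region, rowvar=False)

        # Toy example: stack luminance, its gradients, and a depth cue.
        H, W = 64, 64
        lum = np.random.rand(H, W)
        gy, gx = np.gradient(lum)
        depth = np.random.rand(H, W)          # stand-in for a disparity map
        feats = np.stack([lum, gx, gy, depth], axis=-1)

        C = region_covariance(feats, 10, 40, 10, 40)
        print(C.shape)   # (4, 4): 2D and 3D cues fused in one descriptor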

  2. Hemispheric asymmetries of a motor memory in a recognition test after learning a movement sequence.

    PubMed

    Leinen, Peter; Panzer, Stefan; Shea, Charles H

    2016-11-01

    Two experiments utilizing a spatial-temporal movement sequence were designed to determine if the memory of the sequence is lateralized in the left or right hemisphere. In Experiment 1, dominant right-handers were randomly assigned to one of two acquisition groups: a left-hand starter and a right-hand starter group. After an acquisition phase, reaction time (RT) was measured in a recognition test by providing the learned sequential pattern in the left or right visual half-field for 150 ms. In a retention test and two transfer tests, the dominant coordinate system for sequence production was evaluated. In Experiment 2, dominant left-handers and dominant right-handers had to acquire the sequence with their dominant limb. The results of Experiment 1 indicated that RT was significantly shorter when the acquired sequence was provided in the right visual field during the recognition test. The same results occurred in Experiment 2 for dominant right-handers and left-handers. These results indicate a right visual field/left hemisphere advantage in the recognition test for the practiced stimulus for dominant left- and right-handers when the task was practiced with the dominant limb. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. The Advanced Glaucoma Intervention Study (AGIS): 4. Comparison of treatment outcomes within race. Seven-year results.

    PubMed

    1998-07-01

    The purpose of this report is to present separately for black and white patients with advanced glaucoma 7-year results of two alternative surgical intervention sequences. A randomized controlled trial. A total of 332 black patients (451 eyes), 249 white patients (325 eyes), and 10 patients of other races (13 eyes) participated. Potential follow-up ranged from 4 to 7 years. Eyes were randomly assigned to either an argon laser trabeculoplasty (ALT)-trabeculectomy-trabeculectomy (ATT) sequence or a trabeculectomy-ALT-trabeculectomy (TAT) sequence. The second and third interventions were offered after failure of the first and second interventions, respectively. Average percent of eyes with decrease of visual field (APDVF), average percent of eyes with decrease of visual acuity (APDVA), and average percent of eyes with decrease of vision (APDV) are the outcome measures. Decrease of visual field (DVF) is an increase from baseline of at least 4 points on a glaucoma visual field defect scale ranging from 0 to 20, decrease of visual acuity (DVA) is a decrease from baseline of at least 15 letters (3 lines), and decrease of vision (DV) is the occurrence of either DVF or DVA. The averages are of percent decreases observed at 6-month intervals from the first 6-month visit to the end of the specified observation period. In both black and white patients throughout 7-year follow-up, the mean decrease in intraocular pressure was greater in eyes assigned to TAT, and the cumulative probability of failure of the first intervention was greater in eyes assigned to ATT. In black patients, APDVF, APDVA, and APDV are less for the ATT sequence than for the TAT sequence throughout the 7 years. In white patients, APDVF also favors the ATT sequence but only for the first year, after which it favors the TAT sequence through the seventh year; APDVA also favors the ATT sequence, but the ATT-TAT difference progressively diminishes over 7 years; and APDV favors ATT over TAT initially, but after 4 years, the advantage switches to and remains with TAT. These data support use of the ATT sequence for all black patients. For white patients without life-threatening health problems, the data support use of the TAT sequence.
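
    The averaged outcome measures defined above reduce to a mechanical computation: flag DVF at each 6-month visit (an increase of at least 4 defect-scale points over baseline), convert to the percent of eyes decreased per visit, and average across visits. A sketch with invented scores (not AGIS data):

        import numpy as np

        # Rows: eyes; columns: defect-scale scores (0-20) at baseline and
        # at successive 6-month visits. Values are invented for illustration.
        scores = np.array([
            [5,  6,  7, 10, 11],
            [8,  8,  9,  9, 10],
            [3,  3,  4,  8,  9],
        ])
        baseline, visits = scores[:, 0:1], scores[:, 1:]

        # DVF at a visit: increase of at least 4 points from baseline.
        dvf = (visits - baseline) >= 4

        # APDVF: average, over visits, of the percent of eyes with DVF.
        percent_per_visit = 100.0 * dvf.mean(axis=0)
        apdvf = percent_per_visit.mean()
        print(percent_per_visit, f"APDVF = {apdvf:.1f}%")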

  4. HRAS mutations in Costello syndrome: detection of constitutional activating mutations in codon 12 and 13 and loss of wild-type allele in malignancy.

    PubMed

    Estep, Anne L; Tidyman, William E; Teitell, Michael A; Cotter, Philip D; Rauen, Katherine A

    2006-01-01

    Costello syndrome (CS) is a complex developmental disorder involving characteristic craniofacial features, failure to thrive, developmental delay, cardiac and skeletal anomalies, and a predisposition to develop neoplasia. Based on similarities with other cancer syndromes, we previously hypothesized that CS is likely due to activation of signal transduction through the Ras/MAPK pathway [Tartaglia et al., 2003]. In this study, the HRAS coding region was sequenced for mutations in a large, well-characterized cohort of 36 CS patients. Heterozygous missense point mutations predicting an amino acid substitution were identified in 33/36 (92%) patients. The majority (91%) had a 34G --> A transition in codon 12. Less frequent mutations included 35G --> C (codon 12) and 37G --> T (codon 13). Parental samples did not have an HRAS mutation, supporting the hypothesis of de novo heterozygous mutations. There is phenotypic variability among patients with a 34G --> A transition. The most consistent features included characteristic facies and skin, failure to thrive, developmental delay, musculoskeletal abnormalities, visual impairment, cardiac abnormalities, and generalized hyperpigmentation. The two patients with 35G --> C had cardiac arrhythmias, whereas one patient with a 37G --> T transversion had an enlarged aortic root. Of the patients with a clinical diagnosis of CS, neoplasia was the most consistent phenotypic feature for predicting an HRAS mutation. To gain an understanding of the relationship between constitutional HRAS mutations and malignancy, HRAS was sequenced in an advanced biphasic rhabdomyosarcoma/fibrosarcoma from an individual with a 34G --> A mutation. Loss of the wild-type HRAS allele was observed, suggesting that tumorigenesis in CS patients is accompanied by additional somatic changes affecting HRAS. Finally, due to phenotypic overlap between CS and cardio-facio-cutaneous (CFC) syndromes, the HRAS coding region was sequenced in a well-characterized CFC cohort. No mutations were found, which supports a distinct genetic etiology for CS and CFC syndromes. (c) 2005 Wiley-Liss, Inc.

  5. Obligatory encoding of task-irrelevant features depletes working memory resources.

    PubMed

    Marshall, Louise; Bays, Paul M

    2013-02-18

    Selective attention is often considered the "gateway" to visual working memory (VWM). However, the extent to which we can voluntarily control which of an object's features enter memory remains subject to debate. Recent research has converged on the concept of VWM as a limited commodity distributed between elements of a visual scene. Consequently, as memory load increases, the fidelity with which each visual feature is stored decreases. Here we used changes in recall precision to probe whether task-irrelevant features were encoded into VWM when individuals were asked to store specific feature dimensions. Recall precision for both color and orientation was significantly enhanced when task-irrelevant features were removed, but knowledge of which features would be probed provided no advantage over having to memorize both features of all items. Next, we assessed the effect an interpolated orientation- or color-matching task had on the resolution with which orientations in a memory array were stored. We found that the presence of orientation information in the second array disrupted memory of the first array. The cost to recall precision was identical whether the interfering features had to be remembered, attended to, or could be ignored. Therefore, it appears that storing, or merely attending to, one feature of an object is sufficient to promote automatic encoding of all its features, depleting VWM resources. However, the precision cost was abolished when the match task preceded the memory array. So, while encoding is automatic, maintenance is voluntary, allowing resources to be reallocated to store new visual information.

  6. The effect of gamma-enhancing binaural beats on the control of feature bindings.

    PubMed

    Colzato, Lorenza S; Steenbergen, Laura; Sellaro, Roberta

    2017-07-01

    Binaural beats represent the auditory experience of an oscillating sound that occurs when two sounds with neighboring frequencies are presented to one's left and right ear separately. Binaural beats have been shown to impact information processing via their putative role in increasing neural synchronization. Recent studies of feature-repetition effects demonstrated interactions between perceptual features and action-related features: repeating only some, but not all features of a perception-action episode hinders performance. These partial-repetition (or binding) costs point to the existence of temporary episodic bindings (event files) that are automatically retrieved by repeating at least one of their features. Given that neural synchronization in the gamma band has been associated with visual feature bindings, we investigated whether the impact of binaural beats extends to the top-down control of feature bindings. Healthy adults listened to gamma-frequency (40 Hz) binaural beats or to a constant tone of 340 Hz (control condition) for ten minutes before and during a feature-repetition task. While the size of visuomotor binding costs (indicating the binding of visual and action features) was unaffected by the binaural beats, the size of visual feature binding costs (which refer to the binding between the two visual features) was considerably smaller during gamma-frequency binaural beats exposure than during the control condition. Our results suggest that binaural beats enhance selectivity in updating episodic memory traces and further strengthen the hypothesis that neural activity in the gamma band is critically associated with the control of feature binding.
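
    A 40 Hz binaural beat simply requires pure tones 40 Hz apart in the two ears. A minimal stimulus-generation sketch (the 340/380 Hz carrier pair is an assumption chosen to match the study's 340 Hz control tone; the actual carriers are not stated here):

        import numpy as np

        fs = 44100                     # audio sampling rate (Hz)
        t = np.arange(0, 5.0, 1 / fs)  # 5 s excerpt

        f_left, f_right = 340.0, 380.0          # 40 Hz difference -> gamma beat
        left = np.sin(2 * np.pi * f_left * t)
        right = np.sin(2 * np.pi * f_right * t)

        stereo = np.stack([left, right], axis=1)  # shape (n_samples, 2)
        # Writing to a WAV file would need e.g. scipy.io.wavfile.write:
        #   from scipy.io import wavfile
        #   wavfile.write("beat40hz.wav", fs, (stereo * 32767).astype(np.int16))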

  7. Form cues and content difficulty as determinants of children's cognitive processing of televised educational messages.

    PubMed

    Campbell, T A; Wright, J C; Huston, A C

    1987-06-01

    An experiment was designed to assess the effects of formal production features and content difficulty on children's processing of televised messages about nutrition. Messages with identical content (the same script and visual shot sequence) were made in two forms: child program forms (animated film, second-person address, and character voice narration with sprightly music) and adult program forms (live photography, third-person address, and adult male narration with sedate background music). For each form, messages were made at three levels of content difficulty. Easier versions were longer, more redundant, and used simpler language; difficult versions presented information more quickly with less redundancy and more abstract language. Regardless of form or difficulty level, each set of bits presented the same basic information. Kindergarten children (N = 120) were assigned to view three different bits of the same form type and difficulty embedded in a miniprogram. Visual attention to child forms was significantly greater than to adult forms; free and cued recall scores were also higher for child than for adult forms. Although all recall and recognition scores were best for easy versions and worst for difficult versions, attention showed only minor variation as a function of content difficulty. Results are interpreted to indicate that formal production features, independently of content, influence the effort and level of processing that children use to understand televised educational messages.

  8. Visualization of genome signatures of eukaryote genomes by batch-learning self-organizing map with a special emphasis on Drosophila genomes.

    PubMed

    Abe, Takashi; Hamano, Yuta; Ikemura, Toshimichi

    2014-01-01

    A strategy of evolutionary studies that can compare vast numbers of genome sequences is becoming increasingly important with the remarkable progress of high-throughput DNA sequencing methods. We previously established a sequence alignment-free clustering method, "BLSOM", for di-, tri-, and tetranucleotide compositions in genome sequences, which can characterize sequence characteristics (genome signatures) of a wide range of species. In the present study, we generated BLSOMs for tetra- and pentanucleotide compositions in approximately one million sequence fragments derived from 101 eukaryotes for which almost complete genome sequences were available. BLSOM recognized phylotype-specific characteristics (e.g., key combinations of oligonucleotide frequencies) in the genome sequences, permitting phylotype-specific clustering of the sequences without any information regarding the species. In a detailed examination of 12 Drosophila species, the BLSOM classification correlated with their phylogenetic classification, and oligonucleotides diagnostic for species-specific clustering could be visualized.
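
    The genome signature input to BLSOM is just a per-fragment vector of oligonucleotide frequencies. A minimal sketch of the tetranucleotide vector (the self-organizing map step is omitted, since BLSOM is the authors' batch-learning variant rather than a standard library routine):

        from itertools import product

        def kmer_frequencies(seq, k=4):
            """Frequency vector over all 4**k DNA k-mers in a fixed order."""
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            counts = dict.fromkeys(kmers, 0)
            seq = seq.upper()
            total = 0
            for i in range(len(seq) - k + 1):
                kmer = seq[i:i + k]
                if kmer in counts:          # skips windows containing N etc.
                    counts[kmer] += 1
                    total += 1
            return [counts[km] / total for km in kmers] if total else None

        vec = kmer_frequencies("ACGTACGTAGCTAGGATCCGATCG" * 10)
        print(len(vec))   # 256 tetranucleotide frequencies per fragment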

  9. RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis

    PubMed Central

    Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab

    2012-01-01

    RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate DNA sequences, or users can upload sequences of their own interest in raw format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biologists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation, and provides a user-friendly environment for sequence analysis. It is freely available. Availability: http://www.cemb.edu.pk/sw.html. Abbreviations: RDNAnalyzer, Random DNA Analyser; GUI, graphical user interface; XAML, Extensible Application Markup Language. PMID:23055611
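
    The Nussinov dynamic program that RDNAnalyzer builds on fills a table N[i][j] with the maximum number of complementary pairs in subsequence i..j, either leaving base j unpaired or pairing it with some earlier base k. A compact sketch for DNA (A-T/G-C pairing; the minimum loop length of 3 is a common assumption, not necessarily RDNAnalyzer's setting):

        def nussinov_pairs(seq, min_loop=3):
            """Maximum number of A-T / G-C base pairs (Nussinov DP)."""
            pair = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(min_loop + 1, n):          # j - i distance
                for i in range(n - span):
                    j = i + span
                    best = N[i][j - 1]                   # j unpaired
                    for k in range(i, j - min_loop):     # j pairs with k
                        if (seq[k], seq[j]) in pair:
                            left = N[i][k - 1] if k > i else 0
                            best = max(best, left + 1 + N[k + 1][j - 1])
                    N[i][j] = best
            return N[0][n - 1] if n else 0

        print(nussinov_pairs("GGGAAATCCC"))   # hairpin-like toy sequence -> 3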

  10. Analysis and Prediction of Exon Skipping Events from RNA-Seq with Sequence Information Using Rotation Forest.

    PubMed

    Du, Xiuquan; Hu, Changlin; Yao, Yu; Sun, Shiwei; Zhang, Yanping

    2017-12-12

    In bioinformatics, exon skipping (ES) event prediction is an essential part of alternative splicing (AS) event analysis. Although many methods have been developed to predict ES events, accurate prediction remains an open problem. In this study, given the limitations of machine learning algorithms that use RNA-Seq data or genome sequences alone, a new feature set, called RS (RNA-Seq and sequence) features, was constructed. These features include RNA-Seq features derived from the RNA-Seq data and sequence features derived from genome sequences. We propose a novel Rotation Forest classifier to predict ES events with the RS features (RotaF-RSES). To validate the efficacy of RotaF-RSES, a dataset from two human tissues was used, and RotaF-RSES achieved an accuracy of 98.4%, a specificity of 99.2%, a sensitivity of 94.1%, and an area under the curve (AUC) of 98.6%. When compared to the other available methods, the results indicate that RotaF-RSES is efficient and can predict ES events with RS features.
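
    Rotation Forest trains each tree on a rotated feature space: features are split into random subsets, PCA is fitted per subset on a data sample, and the per-subset loadings are assembled into a block rotation matrix applied before growing the tree. A compact scikit-learn sketch (synthetic data and hyperparameters are illustrative, not the RotaF-RSES configuration; PCA centering is ignored for brevity):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier

        def fit_rotation_forest(X, y, n_trees=10, n_subsets=3, rng=None):
            rng = np.random.default_rng(rng)
            n, d = X.shape
            forest = []
            for _ in range(n_trees):
                perm = rng.permutation(d)
                subsets = np.array_split(perm, n_subsets)
                R = np.zeros((d, d))
                for idx in subsets:
                    sample = rng.choice(n, size=int(0.75 * n), replace=False)
                    pca = PCA().fit(X[np.ix_(sample, idx)])
                    R[np.ix_(idx, idx)] = pca.components_.T   # subset loadings
                tree = DecisionTreeClassifier().fit(X @ R, y)
                forest.append((R, tree))
            return forest

        def predict(forest, X):
            votes = sum(tree.predict_proba(X @ R) for R, tree in forest)
            return votes.argmax(axis=1)

        X = np.random.randn(200, 12)
        y = (X[:, 0] + X[:, 3] > 0).astype(int)
        forest = fit_rotation_forest(X, y, rng=0)
        print((predict(forest, X) == y).mean())   # training accuracy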

  11. Dorsal hippocampus is necessary for visual categorization in rats.

    PubMed

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.

  12. ABACAS: algorithm-based automatic contiguation of assembled sequences

    PubMed Central

    Assefa, Samuel; Keane, Thomas M.; Otto, Thomas D.; Newbold, Chris; Berriman, Matthew

    2009-01-01

    Summary: Due to the availability of new sequencing technologies, we are now increasingly interested in sequencing closely related strains of existing finished genomes. Recently a number of de novo and mapping-based assemblers have been developed to produce high-quality draft genomes from new sequencing technology reads. New tools are necessary to take contigs from a draft assembly through to a fully contiguated genome sequence. ABACAS is intended as a tool to rapidly contiguate (align, order, orientate), visualize and design primers to close gaps on shotgun-assembled contigs based on a reference sequence. The input to ABACAS is a set of contigs, which are aligned to the reference genome, ordered and orientated, and visualized in the ACT comparative browser; optimal primer sequences for gap closure are generated automatically. Availability and Implementation: ABACAS is implemented in Perl and is freely available for download from http://abacas.sourceforge.net. Contact: sa4@sanger.ac.uk PMID:19497936
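
    The contiguation step itself (order, orientate, join) is easy to illustrate: given alignment hits that place each contig at a reference coordinate and strand, sort by position, reverse-complement minus-strand contigs, and pad gaps with Ns. A conceptual sketch with invented hits (not ABACAS's actual pipeline):

        COMP = str.maketrans("ACGTacgt", "TGCAtgca")

        def revcomp(seq):
            return seq.translate(COMP)[::-1]

        # (contig sequence, reference start, strand) -- invented alignment hits
        hits = [
            ("ATGCCGTA", 100, "+"),
            ("TTGACC",    40, "-"),
            ("GGATCCA",   70, "+"),
        ]

        pseudomolecule = []
        prev_end = None
        for seq, start, strand in sorted(hits, key=lambda h: h[1]):
            if prev_end is not None and start > prev_end:
                pseudomolecule.append("N" * (start - prev_end))   # gap to close
            pseudomolecule.append(revcomp(seq) if strand == "-" else seq)
            prev_end = start + len(seq)

        print("".join(pseudomolecule))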

  13. Grading of Gliomas by Using Radiomic Features on Multiple Magnetic Resonance Imaging (MRI) Sequences.

    PubMed

    Qin, Jiang-Bo; Liu, Zhenyu; Zhang, Hui; Shen, Chen; Wang, Xiao-Chun; Tan, Yan; Wang, Shuo; Wu, Xiao-Feng; Tian, Jie

    2017-05-07

    BACKGROUND: Gliomas are the most common primary brain neoplasms. Misdiagnosis occurs in glioma grading due to an overlap in conventional MRI manifestations. The aim of the present study was to evaluate the power of radiomic features based on multiple MRI sequences - T2-Weighted-Imaging-FLAIR (FLAIR), T1-Weighted-Imaging-Contrast-Enhanced (T1-CE), and Apparent Diffusion Coefficient (ADC) map - in glioma grading, and to improve the power of glioma grading by combining features. MATERIAL AND METHODS: Sixty-six patients with histopathologically proven gliomas underwent T2-FLAIR and T1WI-CE sequence scanning, with some patients (n=63) also undergoing DWI scanning. A total of 114 radiomic features were derived with radiomic methods by using in-house software. All radiomic features were compared between high-grade gliomas (HGGs) and low-grade gliomas (LGGs). Features with significant statistical differences were selected for receiver operating characteristic (ROC) curve analysis. The relationships between significantly different radiomic features and glial fibrillary acidic protein (GFAP) expression were evaluated. RESULTS: A total of 8 radiomic features from 3 MRI sequences displayed significant differences between LGGs and HGGs. FLAIR GLCM Cluster Shade, T1-CE GLCM Entropy, and ADC GLCM Homogeneity were the best features to use in differentiating LGGs and HGGs in each MRI sequence. The combined feature was best able to differentiate LGGs and HGGs, which improved the accuracy of glioma grading compared to the above features in each MRI sequence. A significant correlation was found between GFAP and T1-CE GLCM Entropy, as well as between GFAP and ADC GLCM Homogeneity. CONCLUSIONS: The combined radiomic feature had the highest efficacy in distinguishing LGGs from HGGs.
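
    The GLCM features named in the results are standard texture statistics. With scikit-image, homogeneity is built in and entropy is one line on the normalized co-occurrence matrix; cluster shade can be derived from the same matrix's moments. A sketch on a synthetic region of interest (not the authors' in-house software):

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in ROI

        glcm = graycomatrix(roi, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        p = glcm[:, :, 0, 0]               # normalized co-occurrence matrix

        homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
        entropy = -np.sum(p * np.log2(p, where=p > 0, out=np.zeros_like(p)))

        print(f"GLCM homogeneity={homogeneity:.3f}, entropy={entropy:.3f}")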

  14. The Role of Target-Distractor Relationships in Guiding Attention and the Eyes in Visual Search

    ERIC Educational Resources Information Center

    Becker, Stefanie I.

    2010-01-01

    Current models of visual search assume that visual attention can be guided by tuning attention toward specific feature values (e.g., particular size, color) or by inhibiting the features of the irrelevant nontargets. The present study demonstrates that attention and eye movements can also be guided by a relational specification of how the target…

  15. Reducing Problems in Fine Motor Development among Primary Children through the Use of Multi-Sensory Techniques.

    ERIC Educational Resources Information Center

    Wessel, Dorothy

    A 10-week classroom intervention program was implemented to facilitate the fine-motor development of eight first-grade children assessed as being deficient in motor skills. The program was divided according to five deficits to be remediated: visual motor, visual discrimination, visual sequencing, visual figure-ground, and visual memory. Each area…

  16. Characterization of electroencephalography signals for estimating saliency features in videos.

    PubMed

    Liang, Zhen; Hamada, Yasuyuki; Oba, Shigeyuki; Ishii, Shin

    2018-05-12

    Understanding the functions of the visual system has been one of the major targets in neuroscience for many years. However, the relation between spontaneous brain activities and visual saliency in natural stimuli has yet to be elucidated. In this study, we developed an optimized machine learning-based decoding model to explore the possible relationships between electroencephalography (EEG) characteristics and visual saliency. The optimal features were extracted from the EEG signals and the saliency map, which was computed according to an unsupervised saliency model (Tavakoli and Laaksonen, 2017). Subsequently, various unsupervised feature selection/extraction techniques were examined using different supervised regression models. The robustness of the presented model was fully verified by means of ten-fold or nested cross-validation procedures, and promising results were achieved in the reconstruction of saliency features based on the selected EEG characteristics. Through the successful demonstration of using EEG characteristics to predict the real-time saliency distribution in natural videos, we suggest the feasibility of quantifying visual content through measuring brain activities (EEG signals) in real environments, which would facilitate the understanding of cortical involvement in the processing of natural visual stimuli and application developments motivated by human visual processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
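
    The overall pipeline (feature reduction, then a supervised regressor validated with ten-fold cross-validation) maps directly onto scikit-learn. Everything below (data shapes, PCA, ridge regression) is an illustrative stand-in for the paper's optimized decoding model:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import KFold, cross_val_score

        # Synthetic stand-ins: 200 epochs, 64 EEG features -> 1 saliency feature.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 64))                      # EEG characteristics
        y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(200)  # saliency

        model = make_pipeline(StandardScaler(), PCA(n_components=20), Ridge())
        cv = KFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print(f"10-fold R^2: {scores.mean():.3f} +/- {scores.std():.3f}")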

  17. Sci-Fin: Visual Mining Spatial and Temporal Behavior Features from Social Media

    PubMed Central

    Pu, Jiansu; Teng, Zhiyao; Gong, Rui; Wen, Changjiang; Xu, Yang

    2016-01-01

    Check-in records are usually available in social services, which offer us the opportunity to capture and analyze users' spatial and temporal behaviors. Mining such behavior features is essential to social analysis and business intelligence. However, the complexity and incompleteness of check-in records make this task challenging. Unlike previous work on social behavior analysis, in this paper we present a visual analytics system, Social Check-in Fingerprinting (Sci-Fin), to facilitate the analysis and visualization of social check-in data. We focus on three major components of user check-in data: location, activity, and profile. Visual fingerprints for location, activity, and profile are designed to intuitively represent the high-dimensional attributes. To visually mine and demonstrate the behavior features, we integrate WorldMapper and Voronoi Treemap into our glyph-like designs. Such visual fingerprint designs offer us the opportunity to summarize the interesting features and patterns from different check-in locations, activities and users (groups). We demonstrate the effectiveness and usability of our system by conducting extensive case studies on real check-in data collected from a popular microblogging service. Interesting findings are reported and discussed. PMID:27999398
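
    Behind such fingerprints sits a plain aggregation: check-ins are binned per user by attributes such as venue category and hour of day, and the resulting count matrix is the raw material the glyphs render. A minimal aggregation sketch with pandas (the records are invented):

        import pandas as pd

        checkins = pd.DataFrame({
            "user": ["u1", "u1", "u1", "u2", "u2"],
            "hour": [9, 9, 20, 12, 13],
            "category": ["office", "office", "restaurant", "gym", "restaurant"],
        })

        # Hour-by-category check-in counts per user: the raw material for a
        # location/activity "fingerprint" glyph.
        fingerprint = (checkins
                       .groupby(["user", "category", "hour"])
                       .size()
                       .unstack(fill_value=0))
        print(fingerprint)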

  19. Decoding the future from past experience: learning shapes predictions in early visual cortex.

    PubMed

    Luft, Caroline D B; Meeson, Alan; Welchman, Andrew E; Kourtzi, Zoe

    2015-05-01

    Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex. Copyright © 2015 the American Physiological Society.
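
    The decoding logic is a cross-decoding scheme: train a classifier on activity patterns evoked by actually presented orientations, then test it on patterns from moments when an orientation is merely predicted. A schematic scikit-learn sketch on synthetic voxel patterns (shapes, signal strengths, and the classifier are assumptions):

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        n_voxels = 100

        # Training set: voxel patterns evoked by presented gratings (label =
        # orientation actually shown: 0 = leftward, 1 = rightward).
        X_stim = rng.standard_normal((120, n_voxels))
        y_stim = rng.integers(0, 2, size=120)
        X_stim[y_stim == 1, :10] += 0.8          # weak orientation signal

        clf = LinearSVC(dual=False).fit(X_stim, y_stim)

        # Test set: patterns from the prediction period, labeled by the
        # orientation the sequence structure predicts, not what is shown.
        X_pred = rng.standard_normal((40, n_voxels))
        y_pred_label = rng.integers(0, 2, size=40)
        X_pred[y_pred_label == 1, :10] += 0.4    # reactivated (weaker) signal

        print("cross-decoding accuracy:", clf.score(X_pred, y_pred_label))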

  20. Learning and Recognition of Clothing Genres From Full-Body Images.

    PubMed

    Hidayati, Shintami C; You, Chuang-Wen; Cheng, Wen-Huang; Hua, Kai-Lung

    2018-05-01

    According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we proposed a novel approach for automatically classifying clothing genres based on the visually differentiable style elements. A set of style elements that are crucial for recognizing specific visual styles of clothing genres was identified based on clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated with variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, a dataset containing 3250 full-body shots crawled from popular online stores was built. Recognition results show that our proposed algorithms achieved promising overall precision, recall, and F-score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features in recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition.
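
    The F-score here is the harmonic mean of precision and recall, and a quick check reproduces the reported values (a hypothetical helper, not code from the paper):

        def f_score(precision, recall):
            """Harmonic mean of precision and recall (F1)."""
            return 2 * precision * recall / (precision + recall)

        print(f_score(88.76, 88.53))   # ~88.64, matching the upperwear result
        print(f_score(88.21, 88.17))   # ~88.19, matching the lowerwear result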
