Science.gov

Sample records for bioinformatics system built

  1. ebTrack: an environmental bioinformatics system built upon ArrayTrack™

    PubMed Central

    Chen, Minjun; Martin, Jackson; Fang, Hong; Isukapalli, Sastry; Georgopoulos, Panos G; Welsh, William J; Tong, Weida

    2009-01-01

    ebTrack is being developed as an integrated bioinformatics system for environmental research and analysis by addressing the issues of integration, curation, management, first-level analysis and interpretation of environmental and toxicological data from diverse sources. It is based on enhancements to the US FDA-developed ArrayTrack™ system through additional analysis modules for gene expression data as well as through incorporation of, and linkages to, modules for analysis of proteomic and metabonomic datasets that include tandem mass spectra. ebTrack uses a client-server architecture with the free and open source PostgreSQL as its database engine, and Java tools for the user interface, analysis, visualization, and web-based deployment. Several predictive tools that are critical for environmental health research are currently supported in ebTrack, including Significance Analysis of Microarray (SAM). Furthermore, new tools are under continuous integration, and interfaces to environmental health risk analysis tools are being developed in order to make ebTrack widely usable. These health risk analysis tools include the Modeling ENvironment for TOtal Risk studies (MENTOR) for source-to-dose exposure modeling and the DOse Response Information ANalysis system (DORIAN) for health outcome modeling. The design of ebTrack is presented in detail and the steps involved in its application are summarized through an illustrative application. PMID:19278561
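
    The abstract names Significance Analysis of Microarray (SAM) as one of the predictive tools ebTrack exposes. The sketch below shows the core SAM-style relative-difference statistic for a single gene; it is not ebTrack's actual implementation, and the fudge constant s0 and the expression values are illustrative assumptions.

    ```python
    # Minimal sketch of a SAM-style relative-difference statistic for one gene:
    # d = (mean_treated - mean_control) / (s + s0), where s0 is a small "fudge"
    # constant that stabilises genes with low variance. Illustrative only; the
    # full SAM procedure also estimates s0 from the data and assesses
    # significance by permutation of the sample labels.
    import numpy as np

    def sam_statistic(treated, control, s0=0.1):
        treated = np.asarray(treated, dtype=float)
        control = np.asarray(control, dtype=float)
        n1, n2 = len(treated), len(control)
        pooled_var = ((n1 - 1) * treated.var(ddof=1) +
                      (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
        s = np.sqrt(pooled_var * (1.0 / n1 + 1.0 / n2))
        return (treated.mean() - control.mean()) / (s + s0)

    # Example: expression of one gene on 4 exposed vs 4 control arrays
    print(sam_statistic([8.1, 7.9, 8.4, 8.2], [6.9, 7.1, 7.0, 7.3]))
    ```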

  2. Taking Bioinformatics to Systems Medicine.

    PubMed

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
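
    The chapter's network-analysis theme (building gene networks from omics data and decomposing them into modules) can be pictured with a small sketch: a co-expression network built by thresholding pairwise correlations and split into modules with a standard community-detection routine. The random data and the correlation cut-off are assumptions for illustration, not the authors' method.

    ```python
    # Minimal sketch: build a co-expression network by thresholding gene-gene
    # correlations, then decompose it into modules with greedy modularity
    # optimisation. Toy data and an arbitrary cut-off; genes with no
    # sufficiently correlated partner simply stay out of the graph.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(20, 30))      # 20 genes x 30 samples (toy data)
    corr = np.corrcoef(expr)              # gene-gene correlation matrix

    G = nx.Graph()
    for i in range(corr.shape[0]):
        for j in range(i + 1, corr.shape[1]):
            if abs(corr[i, j]) > 0.3:     # arbitrary edge threshold
                G.add_edge(i, j, weight=abs(corr[i, j]))

    modules = greedy_modularity_communities(G)
    print([sorted(m) for m in modules])
    ```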

  3. Bio2RDF: towards a mashup to build bioinformatics knowledge systems.

    PubMed

    Belleau, François; Nolin, Marc-Alexandre; Tourigny, Nicole; Rigault, Philippe; Morissette, Jean

    2008-10-01

    Presently, there are numerous bioinformatics databases available on different websites. Although RDF was proposed as a standard format for the web, these databases are still available in various formats. With the increasing popularity of semantic web technologies and the ever-growing number of databases in bioinformatics, there is a pressing need to develop mashup systems to help the process of bioinformatics knowledge integration. Bio2RDF is such a system, built from rdfizer programs written in JSP, the Sesame open source triplestore technology and an OWL ontology. With Bio2RDF, documents from public bioinformatics databases such as KEGG, PDB, MGI, HGNC and several of NCBI's databases can now be made available in RDF format through a unique URL in the form of http://bio2rdf.org/namespace:id. The Bio2RDF project has successfully applied semantic web technology to publicly available databases by creating a knowledge space of RDF documents linked together with normalized URIs and sharing a common ontology. Bio2RDF is based on a three-step approach to building mashups of bioinformatics data. The present article details this new approach and illustrates the building of a mashup used to explore the implication of four transcription factor genes in Parkinson's disease. The Bio2RDF repository can be queried at http://bio2rdf.org.
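
    Since Bio2RDF records are exposed as RDF documents behind URIs of the form http://bio2rdf.org/namespace:id, a client can dereference such a URI and walk the returned triples. The sketch below uses rdflib; the hgnc identifier is a hypothetical example, and whether the live service still serves RDF at this exact URL is an assumption.

    ```python
    # Minimal sketch of dereferencing a Bio2RDF-style URI and listing a few of
    # the RDF triples it returns. The namespace and identifier are placeholders;
    # availability and response format of the live service may vary.
    import rdflib

    uri = "http://bio2rdf.org/hgnc:1100"   # hypothetical example identifier
    g = rdflib.Graph()
    g.parse(uri)                           # fetch and parse the RDF document

    for subj, pred, obj in list(g)[:10]:   # show the first few triples
        print(subj, pred, obj)
    ```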

  4. A services oriented system for bioinformatics applications on the grid.

    PubMed

    Aloisio, Giovanni; Cafaro, Massimo; Epicoco, Italo; Fiore, Sandro; Mirto, Maria

    2007-01-01

    This paper describes the evolution of the main services of the ProGenGrid (Proteomics & Genomics Grid) system, a distributed and ubiquitous grid environment ("virtual laboratory"), based on Workflow and supporting the design, execution and monitoring of "in silico" experiments in bioinformatics. ProGenGrid is a Grid-based Problem Solving Environment that allows the composition of data sources and bioinformatics programs wrapped as Web Services (WS). The use of WS provides ease of use and fosters re-use. The resulting workflow of WS is then scheduled on the Grid, leveraging Grid-middleware services. In particular, ProGenGrid offers a modular bag of services and currently is focused on the biological simulation of two important bioinformatics problems: prediction of the secondary structure of proteins, and sequence alignment of proteins. Both services are based on an enhanced data access service.

  5. Systems biology and bioinformatics in aging research: a workshop report.

    PubMed

    Fuellen, Georg; Dengjel, Jörn; Hoeflich, Andreas; Hoeijemakers, Jan; Kestler, Hans A; Kowald, Axel; Priebe, Steffen; Rebholz-Schuhmann, Dietrich; Schmeck, Bernd; Schmitz, Ulf; Stolzing, Alexandra; Sühnel, Jürgen; Wuttke, Daniel; Vera, Julio

    2012-12-01

    In an "aging society," health span extension is most important. As in 2010, the talks in this series of meetings in Rostock-Warnemünde demonstrated that aging is an apparently very complex process, where computational work is most useful for gaining insights and for finding interventions that counter aging and prevent or counteract aging-related diseases. The specific topics of this year's meeting, entitled "RoSyBA: Rostock Symposium on Systems Biology and Bioinformatics in Ageing Research," were primarily related to "Cancer and Aging" and also had a focus on work funded by the German Federal Ministry of Education and Research (BMBF). The next meeting in the series, scheduled for September 20-21, 2013, will focus on the use of ontologies for computational research into aging, stem cells, and cancer. Promoting knowledge formalization is also at the core of the set of proposed action items concluding this report.

  6. Using Attributes of Natural Systems to Plan the Built Environment

    EPA Science Inventory

    The concept of 'protection' is possible only before something is lost; however, development of the built environment to meet human needs also compromises the environmental systems that sustain human life. Because maintaining an environment that is able to sustain human life requi...

  7. Stroke of GENEous: A Tool for Teaching Bioinformatics to Information Systems Majors

    ERIC Educational Resources Information Center

    Tikekar, Rahul

    2006-01-01

    A tool for teaching bioinformatics concepts to information systems majors is described. Biological data are available from numerous sources and a good knowledge of biology is needed to understand much of these data. As the subject of bioinformatics gains popularity among computer and information science course offerings, it will become essential…

  8. Transformers: Shape-Changing Space Systems Built with Robotic Textiles

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2013-01-01

    Prior approaches to transformer-like robots had only very limited success. They suffered from a lack of reliability, an inability to integrate large surfaces, and only very modest changes in overall shape. Robots can now be built from two-dimensional (2D) layers of robotic fabric. These transformers, a new kind of robotic space system, are dramatically different from current systems in at least two ways. First, the entire transformer is built from a single, thin sheet: a flexible layer of a robotic fabric (ro-fabric), or robotic textile (ro-textile). Second, the ro-textile layer is foldable to small volume and self-unfolding to adapt shape and function to mission phases.

  9. [Bioinformatics studies on photosynthetic system genes in cyanobacteria and chloroplasts].

    PubMed

    Shi, Ding-Ji; Zhang, Chao; Li, Shi-Ming; Li, Ci-Shan; Zhang, Peng-Peng; Yang, Ming-Li

    2004-06-01

    This study compared the homology of base sequences in genes encoding photosynthetic system proteins of cyanobacteria (Synechocystis sp. PCC6803, Nostoc sp. PCC7120) with those of chloroplasts (from Marchantia polymorpha, Nicotiana tabacum, Oryza sativa, Euglena gracilis, Pinus thunbergii, Zea mays, Odontella sinensis, Cyanophora paradoxa, Porphyra purpurea and Arabidopsis thaliana) by the BLAST method. With the gene sequence of Synechocystis sp. PCC6803 taken as the reference (100%), the homology of the others was compared against it. Among the genes for photosystem I, psaC homology was the highest (90.14%) and psaJ the lowest (52.24%). The highest were psbD (83.71%) for photosystem II, atpB (79.58%) for ATP synthase and petB (81.66%) for the cytochrome b6/f complex. The lowest were psbN (49.70%) for photosystem II, atpF (26.69%) for ATP synthase and petA (55.27%) for the cytochrome b6/f complex. The paper also discusses why the homology of particular gene sequences was the highest or the lowest. No such report has previously been published, and this bioinformatics research may provide evidence bearing on the origin and evolution of chloroplasts.
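
    The homology percentages reported above are essentially per cent identity between aligned gene sequences. As a rough illustration of the quantity being compared (not the BLAST procedure the study used), the sketch below computes per cent identity over an already-aligned pair of sequences; the toy fragment is an assumption.

    ```python
    # Minimal sketch of per cent identity between two aligned sequences
    # (gaps written as '-'). Not BLAST; it simply counts identical columns
    # over positions that are not gapped in both sequences.
    def percent_identity(aln1, aln2):
        assert len(aln1) == len(aln2), "sequences must be aligned to equal length"
        pairs = [(a, b) for a, b in zip(aln1, aln2) if not (a == '-' and b == '-')]
        matches = sum(a == b for a, b in pairs)
        return 100.0 * matches / len(pairs)

    # Toy aligned fragment
    print(percent_identity("ATG-CCGTA", "ATGACCGAA"))  # ~77.8
    ```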

  10. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    PubMed

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.
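
    Because the ZBIT tools are hosted on a customized Galaxy instance, they could in principle also be driven programmatically through Galaxy's API. The sketch below uses the BioBlend client; whether the ZBIT instance actually exposes this API, and the URL and API key shown, are assumptions for illustration only.

    ```python
    # Hedged sketch of talking to a Galaxy server with the BioBlend client.
    # The URL and API key are placeholders; the ZBIT instance may only be
    # intended for interactive browser use.
    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="https://webservices.cs.uni-tuebingen.de/",
                        key="YOUR_API_KEY")

    # List a few available tools and create a history for uploads and results
    for tool in gi.tools.get_tools()[:5]:
        print(tool["id"], tool["name"])

    history = gi.histories.create_history(name="example-analysis")
    print(history["id"])
    ```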

  11. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis

    PubMed Central

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475

  12. Bioinformatics for transporter pharmacogenomics and systems biology: data integration and modeling with UML.

    PubMed

    Yan, Qing

    2010-01-01

    Bioinformatics is the rational study at an abstract level that can influence the way we understand biomedical facts and the way we apply the biomedical knowledge. Bioinformatics is facing challenges in helping with finding the relationships between genetic structures and functions, analyzing genotype-phenotype associations, and understanding gene-environment interactions at the systems level. One of the most important issues in bioinformatics is data integration. The data integration methods introduced here can be used to organize and integrate both public and in-house data. With the volume of data and the high complexity, computational decision support is essential for integrative transporter studies in pharmacogenomics, nutrigenomics, epigenetics, and systems biology. For the development of such a decision support system, object-oriented (OO) models can be constructed using the Unified Modeling Language (UML). A methodology is developed to build biomedical models at different system levels and construct corresponding UML diagrams, including use case diagrams, class diagrams, and sequence diagrams. By OO modeling using UML, the problems of transporter pharmacogenomics and systems biology can be approached from different angles with a more complete view, which may greatly enhance the efforts in effective drug discovery and development. Bioinformatics resources of membrane transporters and general bioinformatics databases and tools that are frequently used in transporter studies are also collected here. An informatics decision support system based on the models presented here is available at http://www.pharmtao.com/transporter . The methodology developed here can also be used for other biomedical fields.

  13. Systems Biology as an Integrated Platform for Bioinformatics, Systems Synthetic Biology, and Systems Metabolic Engineering

    PubMed Central

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-01-01

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering. PMID:24709875

  14. NETTAB 2014: From high-throughput structural bioinformatics to integrative systems biology.

    PubMed

    Romano, Paolo; Cordero, Francesca

    2016-03-02

    The fourteenth NETTAB workshop, NETTAB 2014, was devoted to a range of disciplines going from structural bioinformatics, to proteomics and to integrative systems biology. The topics of the workshop were centred around bioinformatics methods, tools, applications, and perspectives for models, standards and management of high-throughput biological data, structural bioinformatics, functional proteomics, mass spectrometry, drug discovery, and systems biology. 43 scientific contributions were presented at NETTAB 2014, including keynote, special guest and tutorial talks, oral communications, and posters. Full papers from some of the best contributions presented at the workshop were later submitted to a special Call for this Supplement. Here, we provide an overview of the workshop and introduce manuscripts that have been accepted for publication in this Supplement.

  15. Systems genetics, bioinformatics and eQTL mapping.

    PubMed

    Li, Hong; Deng, Hongwen

    2010-10-01

    Jansen and Nap (Trends Genet 17(7):388-391, 2001) and Jansen (Nat Rev Genet 4:145-151, 2003) first proposed the concept of genetical genomics, or genome-wide genetic analysis of gene expression data, which is also called transcriptome mapping. In this approach, microarrays are used for measuring gene expression levels across genetic mapping populations. These gene expression patterns have been used for genome-wide association analysis, an analysis referred to as expression QTL (eQTL) mapping. Recent progress in genomics and experimental biology has brought exponential growth of the biological information available for computational analysis in public genomics databases. Bioinformatics is essential to genome-wide analysis of gene expression data and is used as an effective tool for eQTL mapping. The use of the Plabsoft database, EcoTILLING, GNARE and FastMap has allowed a dramatic reduction of the time needed for genome analysis. Some web-based tools (e.g., Lirnet, eQTL Viewer) provide efficient and intuitive ways for biologists to explore transcriptional regulation patterns, and to generate hypotheses on the genetic basis of transcriptional regulation. Expression quantitative trait loci (eQTL) mapping concerns finding genomic variation to elucidate variation of expression traits. This problem poses significant challenges due to the high dimensionality of both the gene expression and the genomic marker data. The core challenges in understanding and explaining eQTL associations are the fine mapping and the lack of mechanistic explanation. With the development of genetical genomics and computer technology, many new approaches for eQTL mapping will emerge. The statistical methods used for the analysis of expression QTL will become mature in the future.
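
    At its core, a single-marker eQTL scan regresses the expression of a gene on genotype dosage at each marker and keeps the association statistics. The sketch below shows that basic scan on simulated data; it stands in for the idea only, not for the specific tools (FastMap, GNARE, etc.) mentioned above, and real analyses add covariates and multiple-testing correction.

    ```python
    # Minimal sketch of a single-gene, single-marker eQTL scan: regress
    # expression on genotype dosage (0/1/2) at each marker and record the
    # p-value. Simulated data; marker 10 is made causal on purpose.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_samples, n_markers = 100, 50
    genotypes = rng.integers(0, 3, size=(n_samples, n_markers))
    expression = 0.8 * genotypes[:, 10] + rng.normal(size=n_samples)

    pvalues = []
    for m in range(n_markers):
        slope, intercept, r, p, se = stats.linregress(genotypes[:, m], expression)
        pvalues.append(p)

    print("best marker:", int(np.argmin(pvalues)), "p =", min(pvalues))
    ```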

  16. Advances in omics and bioinformatics tools for systems analyses of plant functions.

    PubMed

    Mochida, Keiichi; Shinozaki, Kazuo

    2011-12-01

    Omics and bioinformatics are essential to understanding the molecular systems that underlie various plant functions. Recent game-changing sequencing technologies have revitalized sequencing approaches in genomics and have produced opportunities for various emerging analytical applications. Driven by technological advances, several new omics layers such as the interactome, epigenome and hormonome have emerged. Furthermore, in several plant species, the development of omics resources has progressed to address particular biological properties of individual species. Integration of knowledge from omics-based research is an emerging issue as researchers seek to identify significance, gain biological insights and promote translational research. From these perspectives, we provide this review of the emerging aspects of plant systems research based on omics and bioinformatics analyses together with their associated resources and technological advances.

  17. STRUCTURELAB: a heterogeneous bioinformatics system for RNA structure analysis.

    PubMed

    Shapiro, B A; Kasprzak, W

    1996-08-01

    STRUCTURELAB is a computational system that has been developed to permit the use of a broad array of approaches for the analysis of the structure of RNA. The goal of the development is to provide a large set of tools that can be well integrated with experimental biology to aid in the process of the determination of the underlying structure of RNA sequences. The approach taken views the structure determination problem as one of dealing with a database of many computationally generated structures and provides the capability to analyze this data set from different perspectives. Many algorithms are integrated into one system that also utilizes a heterogeneous computing approach permitting the use of several computer architectures to help solve the posed problems. These different computational platforms make it relatively easy to incorporate currently existing programs as well as newly developed algorithms and to best match these algorithms to the appropriate hardware. The system has been written in Common Lisp running on SUN or SGI Unix workstations, and it utilizes a network of participating machines defined in reconfigurable tables. A window-based interface makes this heterogeneous environment as transparent to the user as possible. PMID:9076633

  18. Prediction of food protein allergenicity: a bioinformatic learning systems approach.

    PubMed

    Zorzet, Anna; Gustafsson, Mats; Hammerling, Ulf

    2002-01-01

    Food hypersensitivity is constantly increasing in Western societies, with a prevalence of about 1-2% in Europe and in the USA. Among children, the incidence is even higher. Because of the introduction of foods derived from genetically modified crops onto the marketplace, the scientific community, regulatory bodies and international associations have intensified discussions on risk assessment procedures to identify potential food allergenicity of the newly introduced proteins. In this work, we present a novel biocomputational methodology for the classification of amino acid sequences with regard to food allergenicity and non-allergenicity. This method relies on a computerised learning system trained using selected excerpts of amino acid sequences. One example of such a successful learning system is presented, which consists of feature extraction from sequence alignments performed with the FASTA3 algorithm (employing the BLOSUM50 substitution matrix) combined with the k-Nearest-Neighbour (kNN) classification algorithm. Briefly, the two features extracted are the alignment score and the alignment length, and the kNN algorithm assigns the pair of features extracted from an unknown sequence to the prevalent class among its k nearest neighbours in the available training (prototype) set. Ninety-one food allergens from several specialised public repositories of food allergy and the SWALL database were identified, pre-processed, and stored, yielding one of the most extensively characterised repositories of allergenic sequences known today. All allergenic sequences were classified using a standard leave-one-out cross-validation procedure, yielding about 81% correctly classified allergens, and the classification of 367 non-allergens in an independent test set resulted in about 98% correct classifications. The biocomputational approach presented should be regarded as a significant extension and refinement of earlier attempts suggested for in silico food safety assessment. Our results show
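
    The classification step described above reduces each query protein to two numbers, the alignment score and the alignment length, and assigns the class of the majority of its k nearest training prototypes. The sketch below reproduces only that kNN step with scikit-learn, assuming the FASTA3/BLOSUM50 features have already been computed; all feature values and the choice of k are illustrative assumptions.

    ```python
    # Minimal sketch of the two-feature k-nearest-neighbour classification step.
    # Rows of X_train are (alignment score, alignment length) against known
    # allergens; labels and values are made up for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    X_train = np.array([[310, 160], [290, 150], [120, 80],
                        [95, 60], [260, 140], [70, 50]])
    y_train = np.array([1, 1, 0, 0, 1, 0])      # 1 = allergen, 0 = non-allergen

    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X_train, y_train)

    query = np.array([[275, 145]])               # features of an unknown protein
    print("predicted class:", int(clf.predict(query)[0]))
    ```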

  19. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology.

    PubMed

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-11-01

    Visceral leishmaniasis or kala-azar is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction is also to be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and at the field level. Computational approaches such as remote sensing, geographical information systems (GIS) and bioinformatics are key resources for the detection and distribution of vectors, patterns, ecological and environmental factors, and genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been more appropriately utilized in determining the cause of visceral leishmaniasis and in designing strategies for preventing the disease from spreading from one region to another.

  20. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology

    PubMed Central

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-01-01

    Visceral leishmaniasis or kala-azar is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction is also to be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and at the field level. Computational approaches such as remote sensing, geographical information systems (GIS) and bioinformatics are key resources for the detection and distribution of vectors, patterns, ecological and environmental factors, and genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been more appropriately utilized in determining the cause of visceral leishmaniasis and in designing strategies for preventing the disease from spreading from one region to another. PMID:23554714

  1. Quantitative Analysis of the Trends Exhibited by the Three Interdisciplinary Biological Sciences: Biophysics, Bioinformatics, and Systems Biology.

    PubMed

    Kang, Jonghoon; Park, Seyeon; Venkat, Aarya; Gopinath, Adarsh

    2015-12-01

    New interdisciplinary biological sciences like bioinformatics, biophysics, and systems biology have become increasingly relevant in modern science. Many papers have suggested the importance of adding these subjects, particularly bioinformatics, to an undergraduate curriculum; however, most of their assertions have relied on qualitative arguments. In this paper, we will show our metadata analysis of a scientific literature database (PubMed) that quantitatively describes the importance of the subjects of bioinformatics, systems biology, and biophysics as compared with a well-established interdisciplinary subject, biochemistry. Specifically, we found that the development of each subject assessed by its publication volume was well described by a set of simple nonlinear equations, allowing us to characterize them quantitatively. Bioinformatics, which had the highest ratio of publications produced, was predicted to grow between 77% and 93% by 2025 according to the model. Due to the large number of publications produced in bioinformatics, which nearly matches the number published in biochemistry, it can be inferred that bioinformatics is almost equal in significance to biochemistry. Based on our analysis, we suggest that bioinformatics be added to the standard biology undergraduate curriculum. Adding this course to an undergraduate curriculum will better prepare students for future research in biology.
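
    The abstract does not give the paper's "simple nonlinear equations", but growth of yearly publication counts is commonly modelled with a logistic curve, which is one plausible choice. The sketch below fits such a curve and extrapolates to 2025; the yearly counts are invented for illustration, not the paper's data.

    ```python
    # Hedged sketch: fit a logistic growth curve N(t) = K / (1 + exp(-r*(t - t0)))
    # to yearly publication counts and project forward. The counts below are
    # made up; the cited paper's actual model and data may differ.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, K, r, t0):
        return K / (1.0 + np.exp(-r * (t - t0)))

    years = np.arange(1995, 2016)
    counts = np.array([5, 9, 15, 26, 44, 70, 110, 170, 250, 350, 470, 600, 730,
                       850, 950, 1030, 1090, 1140, 1170, 1190, 1205], dtype=float)

    (K, r, t0), _ = curve_fit(logistic, years, counts, p0=(1300, 0.5, 2005))
    print(f"carrying capacity ~{K:.0f}, growth rate ~{r:.2f}, midpoint ~{t0:.1f}")
    print("projected 2025 volume:", round(logistic(2025, K, r, t0)))
    ```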

  2. Early Warning System: a juridical notion to be built

    NASA Astrophysics Data System (ADS)

    Lucarelli, A.

    2007-12-01

    Early warning systems (EWS) are becoming effective tools for real-time mitigation of the harmful effects arising from widely different hazards, ranging from famine to financial crises, malicious attacks, industrial accidents, natural catastrophes, etc. Early warning of natural catastrophic events allows the implementation of both alert systems and real-time prevention actions for the safety of people and goods exposed to the risk. However, the effective implementation of early warning methods is hindered by the lack of a specific juridical frame. From a juridical point of view, in fact, EWS and, in general, all prevention activities need careful regulation, mainly with regard to responsibility and possible compensation for damage caused by the implemented actions. A preventive alarm, in fact, has an active influence on infrastructures in control of public services, which in turn will suffer suspensions or interruptions because of the early warning actions. Hence it is necessary to have accurate normative references covering the typology of structures or infrastructures upon which the readiness activity acts; the progressive order of suspension of public services; the duration of these suspensions; the corporate bodies or administrations competent to take such decisions; the actors responsible for the consequences of false, missed or delayed alarms; the mechanisms of compensation for damage; the insurance systems; etc. In the European Union, EWS are often cited as preventive methods of risk mitigation. Nevertheless, a juridical notion of EWS of general use is not available. In fact, EW is a concept that finds application in many different circles, each of which requires specific adaptations, and may concern subjects for which the European Union does not have exclusive competence, as it may be the responsibility of the member states to provide the necessary regulations. In so far as the juridical arrangement of the EWS, this must be

  3. Edge Bioinformatics

    SciTech Connect

    Lo, Chien-Chi

    2015-08-03

    Edge Bioinformatics is a developmental bioinformatics and data management platform which seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads. 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with a. a novel emerging disease outbreak or b. environmental surveillance

  4. Edge Bioinformatics

    2015-08-03

    Edge Bioinformatics is a developmental bioinformatics and data management platform which seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads. 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with a. a novel emerging disease outbreak or b. environmental surveillance

  5. Ergatis: a web interface and scalable software system for bioinformatics workflows

    PubMed Central

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
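
    The essential idea behind a workflow manager like Ergatis is that pipeline components run in dependency order, each only after its inputs exist. The sketch below shows that idea with Python's standard topological sorter; the component and command names are hypothetical, and this is not Ergatis code, which builds pipelines from preconfigured templates and dispatches them to a compute cluster.

    ```python
    # Minimal sketch of dependency-ordered pipeline execution. A real workflow
    # manager would launch each component (e.g. via subprocess or a cluster
    # scheduler) and monitor its status; here we only print the ordering.
    from graphlib import TopologicalSorter   # Python 3.9+

    # component -> set of components it depends on (hypothetical names)
    pipeline = {
        "quality_filter": set(),
        "assembly": {"quality_filter"},
        "gene_prediction": {"assembly"},
        "functional_annotation": {"gene_prediction"},
    }

    commands = {
        "quality_filter": "filter_reads --in raw.fastq --out clean.fastq",
        "assembly": "assemble --reads clean.fastq --out contigs.fa",
        "gene_prediction": "predict_genes --contigs contigs.fa --out genes.gff",
        "functional_annotation": "annotate --genes genes.gff --out annot.tsv",
    }

    for component in TopologicalSorter(pipeline).static_order():
        print(f"[pipeline] would run {component}: {commands[component]}")
    ```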

  6. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration

    PubMed Central

    Lobo, Daniel; Feldman, Erica B.; Shah, Michelle; Malone, Taylor J.

    2014-01-01

    Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental effort, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large‐scale morphogenesis that match published data in the limb regeneration field. Major barriers to preventing an algorithmic approach are the lack of formal descriptions for experimental regenerative information and a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user‐friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration. PMID:25729585

  7. Towards a career in bioinformatics

    PubMed Central

    2009-01-01

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, founded in 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 9-11, 2009 at Biopolis, Singapore. InCoB has actively engaged researchers from the area of life sciences, systems biology and clinicians, to facilitate greater synergy between these groups. To encourage bioinformatics students and new researchers, tutorials and a student symposium, the Singapore Symposium on Computational Biology (SYMBIO), were organized, along with the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and the Clinical Bioinformatics (CBAS) Symposium. However, to many students and young researchers, pursuing a career in a multi-disciplinary area such as bioinformatics poses a Himalayan challenge. A collection of tips is presented here to provide signposts on the road to a career in bioinformatics. An overview of the application of bioinformatics to traditional and emerging areas, published in this supplement, is also presented to provide possible future avenues of bioinformatics investigation. A case study on the application of e-learning tools in an undergraduate bioinformatics curriculum provides information on how to impart targeted education to sustain bioinformatics in the Asia-Pacific region. The next InCoB is scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. PMID:19958508

  8. Green genes: bioinformatics and systems-biology innovations drive algal biotechnology.

    PubMed

    Reijnders, Maarten J M F; van Heck, Ruben G A; Lam, Carolyn M C; Scaife, Mark A; dos Santos, Vitor A P Martins; Smith, Alison G; Schaap, Peter J

    2014-12-01

    Many species of microalgae produce hydrocarbons, polysaccharides, and other valuable products in significant amounts. However, large-scale production of algal products is not yet competitive against non-renewable alternatives from fossil fuel. Metabolic engineering approaches will help to improve productivity, but the exact metabolic pathways and the identities of the majority of the genes involved remain unknown. Recent advances in bioinformatics and systems-biology modeling coupled with increasing numbers of algal genome-sequencing projects are providing the means to address this. A multidisciplinary integration of methods will provide synergy for a systems-level understanding of microalgae, and thereby accelerate the improvement of industrially valuable strains. In this review we highlight recent advances and challenges to microalgal research and discuss future potential. PMID:25457388

  9. Green genes: bioinformatics and systems-biology innovations drive algal biotechnology.

    PubMed

    Reijnders, Maarten J M F; van Heck, Ruben G A; Lam, Carolyn M C; Scaife, Mark A; dos Santos, Vitor A P Martins; Smith, Alison G; Schaap, Peter J

    2014-12-01

    Many species of microalgae produce hydrocarbons, polysaccharides, and other valuable products in significant amounts. However, large-scale production of algal products is not yet competitive against non-renewable alternatives from fossil fuel. Metabolic engineering approaches will help to improve productivity, but the exact metabolic pathways and the identities of the majority of the genes involved remain unknown. Recent advances in bioinformatics and systems-biology modeling coupled with increasing numbers of algal genome-sequencing projects are providing the means to address this. A multidisciplinary integration of methods will provide synergy for a systems-level understanding of microalgae, and thereby accelerate the improvement of industrially valuable strains. In this review we highlight recent advances and challenges to microalgal research and discuss future potential.

  10. Specifying, Installing and Maintaining Built-Up and Modified Bitumen Roofing Systems.

    ERIC Educational Resources Information Center

    Hobson, Joseph W.

    2000-01-01

    Examines built-up, modified bitumen, and hybrid combinations of the two roofing systems and offers advice on how to assure high-quality performance and durability when using them. Included is a glossary of commercial roofing terms and asphalt roofing resources to aid in making decisions on roofing system and product selection. (GR)

  11. Built But Not Used, Needed But Not Built: Ground System Guidance Based On Cassini-Huygens Experience

    NASA Technical Reports Server (NTRS)

    Larsen, Barbara S.

    2006-01-01

    These reflections share insight gleaned from Cassini-Huygens experience in supporting uplink operations tasks with software. Of particular interest are developed applications that were not widely adopted and tasks for which the appropriate application was not planned. After several years of operations, tasks are better understood providing a clearer picture of the mapping of requirements to applications. The impact on system design of the changing user profile due to distributed operations and greater participation of scientists in operations is also explored. Suggestions are made for improving the architecture, requirements, and design of future systems for uplink operations.

  12. Analyses of Brucella Pathogenesis, Host Immunity, and Vaccine Targets using Systems Biology and Bioinformatics

    PubMed Central

    He, Yongqun

    2011-01-01

    Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using the cutting edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild type Brucella strains and mutants under different laboratory conditions. High throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have been focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high throughput serum antibody screening technology. The Vaxign reverse vaccinology has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning. PMID:22919594

  13. A systems approach to resilience in the built environment: the case of Cuba.

    PubMed

    Lizarralde, Gonzalo; Valladares, Arturo; Olivera, Andres; Bornstein, Lisa; Gould, Kevin; Barenstein, Jennifer Duyne

    2015-01-01

    Through its capacity to evoke systemic adaptation before and after disasters, resilience has become a seductive theory in disaster management. Several studies have linked the concept with systems theory; however, they have been mostly based on theoretical models with limited empirical support. The study of the Cuban model of resilience sheds light on the variables that create systemic resilience in the built environment and its relations with the social and natural environments. Cuba is vulnerable to many types of hazard, yet the country's disaster management benefits from institutional, health and education systems that develop social capital, knowledge and other assets that support construction industry and housing development, systematic urban and regional planning, effective alerts, and evacuation plans. The Cuban political context is specific, but the study can nonetheless contribute to systemic improvements to the resilience of built environments in other contexts. PMID:25494958

  14. A systems approach to resilience in the built environment: the case of Cuba.

    PubMed

    Lizarralde, Gonzalo; Valladares, Arturo; Olivera, Andres; Bornstein, Lisa; Gould, Kevin; Barenstein, Jennifer Duyne

    2015-01-01

    Through its capacity to evoke systemic adaptation before and after disasters, resilience has become a seductive theory in disaster management. Several studies have linked the concept with systems theory; however, they have been mostly based on theoretical models with limited empirical support. The study of the Cuban model of resilience sheds light on the variables that create systemic resilience in the built environment and its relations with the social and natural environments. Cuba is vulnerable to many types of hazard, yet the country's disaster management benefits from institutional, health and education systems that develop social capital, knowledge and other assets that support construction industry and housing development, systematic urban and regional planning, effective alerts, and evacuation plans. The Cuban political context is specific, but the study can nonetheless contribute to systemic improvements to the resilience of built environments in other contexts.

  15. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity.

    PubMed

    van den Berg, Magdalena M H E; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N M; van Mechelen, Willem; van den Berg, Agnes E

    2015-12-01

    This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following, and preceding, acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications for greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, and there were also no indications of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in the restorative effects of viewing green space. PMID:26694426

  16. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity.

    PubMed

    van den Berg, Magdalena M H E; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N M; van Mechelen, Willem; van den Berg, Agnes E

    2015-12-01

    This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following, and preceding, acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications for greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, and there were also no indications of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in the restorative effects of viewing green space.

  17. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity

    PubMed Central

    van den Berg, Magdalena M.H.E.; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N.M.; van Mechelen, Willem; van den Berg, Agnes E.

    2015-01-01

    This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following, and preceding, acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications for greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, and there were also no indications of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in the restorative effects of viewing green space. PMID:26694426

  18. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability

  19. A fast job scheduling system for a wide range of bioinformatic applications.

    PubMed

    Boccia, Angelo; Busiello, Gianluca; Milanesi, Luciano; Paolella, Giovanni

    2007-06-01

    Bioinformatic tools are often used by researchers through interactive Web interfaces, resulting in a strong demand for computational resources. The tools are of different kinds and range from simple, quick tasks to complex analyses requiring minutes to hours of processing time, and often longer than that. Batteries of computational nodes, such as those found in parallel clusters, provide a platform of choice for this application, especially when a relatively large number of concurrent requests is expected. Here, we describe a scheduling architecture operating at the application level, able to distribute jobs over a large number of hierarchically organized nodes. While not conflicting with low-level scheduling software, with which it coexists peacefully, the system takes advantage of tools commonly used in Web applications, such as SQL servers, to produce low latency and performance which compares well with, and often surpasses, that of more traditional, dedicated schedulers. The system provides the basic functionality necessary for node selection, task execution and service management and monitoring, and may combine loosely linked computational resources, such as those located in geographically distinct sites. PMID:17695750
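
    The abstract describes an application-level scheduler built on tools such as SQL servers. The sketch below illustrates the smallest version of that idea: a job table with a status column, jobs submitted by the web front end, and workers that claim the oldest pending job in a single transaction. The schema and the use of SQLite are assumptions; the published system adds priorities, hierarchical node selection and monitoring.

    ```python
    # Minimal sketch of an SQL-backed job queue: jobs are rows with a status
    # column and a worker atomically claims the oldest pending one.
    import sqlite3

    db = sqlite3.connect("jobs.db", isolation_level=None)   # autocommit mode
    db.execute("""CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        command TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'pending')""")

    def submit(command):
        db.execute("INSERT INTO jobs (command) VALUES (?)", (command,))

    def claim_next():
        """Atomically mark the oldest pending job as running and return it."""
        db.execute("BEGIN IMMEDIATE")
        row = db.execute("SELECT id, command FROM jobs WHERE status = 'pending' "
                         "ORDER BY id LIMIT 1").fetchone()
        if row:
            db.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
        db.execute("COMMIT")
        return row

    submit("blastp -query q.fa -db nr")
    print(claim_next())
    ```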

  20. Design and implementation of a custom built optical projection tomography system.

    PubMed

    Wong, Michael D; Dazai, Jun; Walls, Johnathon R; Gale, Nicholas W; Henkelman, R Mark

    2013-01-01

    Optical projection tomography (OPT) is an imaging modality that has, in the last decade, answered numerous biological questions owing to its ability to view gene expression in 3 dimensions (3D) at high resolution for samples up to several cm³. This has increased demand for a cabinet OPT system, especially for mouse embryo phenotyping, for which OPT was primarily designed. The Medical Research Council (MRC) Technology group (UK) released a commercial OPT system, constructed by Skyscan, called the Bioptonics OPT 3001 scanner, which was installed in a limited number of locations. The Bioptonics system has been discontinued and currently there is no commercial OPT system available. Therefore, a few research institutions have built their own OPT systems, choosing parts and a design specific to their biological applications. Some of these custom built OPT systems are preferred over the commercial Bioptonics system, as they provide improved performance based on stable translation and rotation stages and up-to-date CCD cameras coupled with objective lenses of high numerical aperture, increasing the resolution of the images. Here, we present a detailed description of a custom built OPT system that is robust and easy to build and install. Included are a hardware parts list, instructions for assembly, a description of the acquisition software and a free download site, and methods for calibration. The described OPT system can acquire a full 3D data set in 10 minutes at 6.7 micron isotropic resolution. The presented guide will hopefully increase adoption of OPT throughout the research community, for the OPT system described can be implemented by personnel with minimal expertise in optics or engineering who have access to a machine shop.
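
    The computational heart of an OPT scanner is tomographic reconstruction: projection images acquired over a full rotation are turned back into cross-sectional slices, typically by filtered back-projection. The sketch below simulates and reconstructs a single slice with scikit-image on a test phantom; it is illustrative only and is not the acquisition or reconstruction software described in the paper.

    ```python
    # Minimal sketch of slice reconstruction by filtered back-projection:
    # simulate projections of a 2D slice over many angles (Radon transform),
    # then invert them. Uses a standard test phantom, not OPT data.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    slice_img = rescale(shepp_logan_phantom(), 0.25)        # small test slice
    angles = np.linspace(0.0, 180.0, 200, endpoint=False)   # projection angles

    sinogram = radon(slice_img, theta=angles)               # simulated projections
    reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

    rms_error = np.sqrt(np.mean((reconstruction - slice_img) ** 2))
    print("RMS reconstruction error:", round(float(rms_error), 4))
    ```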

  1. Design and implementation of a custom built optical projection tomography system.

    PubMed

    Wong, Michael D; Dazai, Jun; Walls, Johnathon R; Gale, Nicholas W; Henkelman, R Mark

    2013-01-01

    Optical projection tomography (OPT) is an imaging modality that has, in the last decade, answered numerous biological questions owing to its ability to view gene expression in 3 dimensions (3D) at high resolution for samples up to several cm³. This has increased demand for a cabinet OPT system, especially for mouse embryo phenotyping, for which OPT was primarily designed. The Medical Research Council (MRC) Technology group (UK) released a commercial OPT system, constructed by Skyscan, called the Bioptonics OPT 3001 scanner, which was installed in a limited number of locations. The Bioptonics system has been discontinued and currently there is no commercial OPT system available. Therefore, a few research institutions have built their own OPT systems, choosing parts and a design specific to their biological applications. Some of these custom built OPT systems are preferred over the commercial Bioptonics system, as they provide improved performance based on stable translation and rotation stages and up-to-date CCD cameras coupled with objective lenses of high numerical aperture, increasing the resolution of the images. Here, we present a detailed description of a custom built OPT system that is robust and easy to build and install. Included are a hardware parts list, instructions for assembly, a description of the acquisition software and a free download site, and methods for calibration. The described OPT system can acquire a full 3D data set in 10 minutes at 6.7 micron isotropic resolution. The presented guide will hopefully increase adoption of OPT throughout the research community, for the OPT system described can be implemented by personnel with minimal expertise in optics or engineering who have access to a machine shop. PMID:24023880

  2. Tank Monitoring and Document control System (TMACS) As Built Software Design Document

    SciTech Connect

    GLASSCOCK, J.A.

    2000-01-27

    This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document to the system maintainers who will be maintaining and modifying the TMACS functions as necessary. The heart of the TMACS system is the ''point-processing'' functionality where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.
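
    The point-processing step described above can be illustrated with a small sketch: a sample value arrives from a field sensor, is checked against configured limits, and is then logged or alarmed. The point name and limit values below are hypothetical and are not taken from the TMACS design.

        from dataclasses import dataclass

        @dataclass
        class PointConfig:
            name: str
            low_alarm: float    # hypothetical engineering limits
            high_alarm: float

        def process_point(cfg: PointConfig, value: float) -> str:
            """Analyze one sample and decide whether to log or alarm it."""
            if value < cfg.low_alarm or value > cfg.high_alarm:
                return f"ALARM {cfg.name}: {value} outside [{cfg.low_alarm}, {cfg.high_alarm}]"
            return f"LOG {cfg.name}: {value} within limits"

        tank_level = PointConfig("TANK-101-LEVEL", low_alarm=10.0, high_alarm=90.0)
        print(process_point(tank_level, 95.2))   # -> ALARM ...
        print(process_point(tank_level, 55.0))   # -> LOG ...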

  3. IMGT, a system and an ontology that bridge biological and computational spheres in bioinformatics.

    PubMed

    Lefranc, Marie-Paule; Giudicelli, Véronique; Regnier, Laetitia; Duroux, Patrice

    2008-07-01

    IMGT, the international ImMunoGeneTics information system (http://imgt.cines.fr), is the reference in immunogenetics and immunoinformatics. IMGT standardizes and manages the complex immunogenetic data that include the immunoglobulins (IG) or antibodies, the T cell receptors (TR), the major histocompatibility complex (MHC) and the related proteins of the immune system (RPI), which belong to the immunoglobulin superfamily (IgSF) and the MHC superfamily (MhcSF). The accuracy and consistency of IMGT data and the coherence between the different IMGT components (databases, tools and Web resources) are based on IMGT-ONTOLOGY, the first ontology for immunogenetics and immunoinformatics. IMGT-ONTOLOGY manages the immunogenetics knowledge through diverse facets relying on seven axioms, 'IDENTIFICATION', 'DESCRIPTION', 'CLASSIFICATION', 'NUMEROTATION', 'LOCALIZATION', 'ORIENTATION' and 'OBTENTION', that postulate that objects, processes and relations have to be identified, described, classified, numerotated, localized, orientated, and that the way they are obtained has to be determined. These axioms constitute the Formal IMGT-ONTOLOGY, also designated as IMGT-Kaleidoscope. These axioms have been essential for the conceptualization of the molecular immunogenetics knowledge and for the creation of IMGT. Indeed all the components of the IMGT integrated system have been developed, based on standardized concepts and relations, thus allowing IMGT to bridge biological and computational spheres in bioinformatics. The same axioms can be used to generate concepts for multi-scale level approaches at the molecule, cell, tissue, organ, organism or population level, emphasizing the generalization of the application domain. In that way the Formal IMGT-ONTOLOGY represents a paradigm for the elaboration of ontologies in system biology.

  4. Company strategies for using bioinformatics.

    PubMed

    Bains, W

    1996-08-01

    Bioinformatics enables biotechnology companies to access and analyse their growing databases of experimental results, and to exploit public data from genome programmes and other sources. Traditionally occupying the domain of a 'guru' supplying answers to infrequent research questions, corporate bioinformatics is breaking down under the flood of data. New, more robust, professional and expandable systems will give scientists effective access to new tools. This review outlines how companies have evolved beyond the 'guru', and have organized their bioinformatics by acquiring or developing bioinformatics resources. It also describes why the biologist must be central to this process, and why this is a problem for computer professionals to solve, not for 'gurus'.

  5. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

    A fish-eye lens is a short-focal-length (f = 6~16 mm) camera lens whose field of view (FOV) approaches or even exceeds 180×180 degrees. Many studies show that a multiple-view geometry system built with fish-eye lenses yields a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image-processing approaches based on the pinhole camera model for conventional stereo vision are not suitable for this category of stereo vision built from fish-eye lenses. This paper discusses the calibration and epipolar rectification method for a novel machine vision system composed of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information over the whole observation space and simultaneously acquire a 360°×360° panoramic image with no blind area, using a single vision device and a single static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
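
    A common starting point for fish-eye calibration is the equidistant projection model, in which the image radius is proportional to the angle from the optical axis (r = f·θ) rather than to tan θ as in the pinhole model. The sketch below inverts that model to turn a pixel into a viewing ray; the specific model and parameters used for SSVS are not stated here, so this is only an illustrative assumption.

        import numpy as np

        def fisheye_pixel_to_ray(u, v, cx, cy, f):
            """Map a pixel (u, v) to a unit viewing ray under the equidistant model.

            cx, cy: principal point in pixels; f: focal length in pixels, assumed
            known from calibration. Returns a 3D unit vector in the camera frame.
            """
            du, dv = u - cx, v - cy
            r = np.hypot(du, dv)          # radial distance from the principal point
            theta = r / f                 # equidistant model: r = f * theta
            phi = np.arctan2(dv, du)      # azimuth around the optical axis
            return np.array([np.sin(theta) * np.cos(phi),
                             np.sin(theta) * np.sin(phi),
                             np.cos(theta)])

        # Example: a pixel near the image border maps to a ray almost 90 degrees off-axis.
        print(fisheye_pixel_to_ray(1200.0, 512.0, 640.0, 512.0, 360.0))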

  6. Expert systems built by the Expert: An evaluation of OPS5

    NASA Technical Reports Server (NTRS)

    Jackson, Robert

    1987-01-01

    Two expert systems were written in OPS5 by the expert, a Ph.D. astronomer with no prior experience in artificial intelligence or expert systems, without the use of a knowledge engineer. The first system was built from scratch and uses 146 rules to check for duplication of scientific information within a pool of prospective observations. The second system was grafted onto another expert system and uses 149 additional rules to estimate the spacecraft and ground resources consumed by a set of prospective observations. The small vocabulary, the IF this occurs THEN do that logical structure of OPS5, and the ability to follow program execution allowed the expert to design and implement these systems with only the data structures and rules of another OPS5 system as an example. The modularity of the rules in OPS5 allowed the second system to modify the rulebase of the system onto which it was grafted without changing the code or the operation of that system. These experiences show that experts are able to develop their own expert systems due to the ease of programming and code reusability in OPS5.
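
    The IF-condition-THEN-action style of OPS5 production rules can be mimicked in a few lines of ordinary code. The sketch below is a generic forward-chaining loop over working-memory facts, not OPS5 itself and not the astronomer's rulebase; the facts and the single rule are invented for illustration.

        # Working memory: a set of facts; each rule derives new facts from it.
        facts = {("observation", "A"), ("observation", "B"), ("duplicate", "A", "B")}

        def rule_flag_duplicates(wm):
            """IF two observations are marked duplicate THEN flag one for removal."""
            return {("flag_for_removal", fact[2]) for fact in wm if fact[0] == "duplicate"}

        rules = [rule_flag_duplicates]

        # Forward chaining: keep firing rules until no new facts are produced.
        changed = True
        while changed:
            changed = False
            for rule in rules:
                derived = rule(facts) - facts
                if derived:
                    facts |= derived
                    changed = True

        print(facts)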

  7. Finding the next-best scanner position for as-built modeling of piping systems

    NASA Astrophysics Data System (ADS)

    Kawashima, K.; Yamanishi, S.; Kanai, S.; Date, H.

    2014-06-01

    Renovation of plant equipment in petroleum refineries and chemical factories has recently become frequent, and the demand for 3D as-built modelling of piping systems is increasing rapidly. Terrestrial laser scanners are often used in the measurements for as-built modelling. However, the tangled structure of piping systems results in complex occluded areas, and these areas must be captured from different scanner positions. For efficient and exhaustive measurement of the piping system, the scanner should be placed at optimum positions where the occluded parts of the piping system are captured as completely as possible in fewer scans. However, these "next-best" scanner positions are usually determined by experienced operators, and there is no guarantee that they fulfil the optimum condition. Therefore, this paper proposes a computer-aided method of optimal sequential view planning for object recognition in plant piping systems using a terrestrial laser scanner. With this method, a sequence of next-best positions of a terrestrial laser scanner specialized for as-built modelling of piping systems can be found without any a priori information about the piping objects. Unlike conventional approaches to the next-best-view (NBV) problem, in the proposed method piping objects in the measured point clouds are recognized right after every scan, the local occluded spaces occupied by the unseen piping are then estimated, and the best scanner position is found so as to minimize these local occluded spaces. The simulation results show that the proposed method outperforms a conventional approach in recognition accuracy, efficiency and computational time.
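
    The core of this view planning is a greedy choice: after each scan, estimate the occluded space still occupied by unseen piping and pick the candidate scanner position expected to uncover the most of it. A minimal sketch of that selection step follows; the range-based visibility test is a placeholder, since the paper's actual recognition and occlusion model is not reproduced here.

        from dataclasses import dataclass
        import math

        @dataclass
        class Candidate:
            x: float
            y: float
            reach: float = 5.0   # assumed maximum useful scan radius (placeholder)

            def can_see(self, voxel):
                # Placeholder visibility test: the real method would ray-cast against
                # recognized piping objects; here, a simple range check suffices.
                return math.hypot(voxel[0] - self.x, voxel[1] - self.y) <= self.reach

        def next_best_position(candidates, occluded_voxels):
            """Greedy next-best-view: pick the position uncovering the most occluded voxels."""
            return max(candidates,
                       key=lambda c: sum(c.can_see(v) for v in occluded_voxels))

        occluded = [(1, 1), (2, 2), (3, 3), (9, 9)]
        positions = [Candidate(0, 0), Candidate(9, 9)]
        print(next_best_position(positions, occluded))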

  8. Initial clinical testing of a multi-spectral imaging system built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David

    2016-03-01

    Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix, consisting of images at the various wavelengths, were acquired; image acquisition took 1-2 sec. Areas suspected of dysplasia under white-light imaging were biopsied, according to the standard of care, and the biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data from the biopsied sites was processed. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest that MSMC holds promise for cervical imaging.

  9. MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics

    PubMed Central

    2014-01-01

    Background: Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. Results: We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version) that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer nonlinear programming (MINLP) problems, and variable neighborhood search (VNS) for Integer Programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. Conclusions: MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods. PMID:24885957
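
    To illustrate the kind of metaheuristic MEIGO bundles, here is a toy variable neighborhood search for a small integer problem. It shows only the VNS idea (shake in progressively larger neighborhoods, then local search) and does not use MEIGO's actual R/Matlab/Python interface; the objective function is invented.

        import random

        def objective(x):
            # Toy integer objective (assumption): minimize squared deviations from 3.
            return sum((xi - 3) ** 2 for xi in x)

        def shake(x, k, lo=0, hi=10):
            """Perturb k randomly chosen coordinates within the box [lo, hi]."""
            y = list(x)
            for i in random.sample(range(len(y)), k):
                y[i] = random.randint(lo, hi)
            return y

        def local_search(x, lo=0, hi=10):
            """Coordinate-wise +/-1 descent until no further improvement."""
            improved = True
            while improved:
                improved = False
                for i in range(len(x)):
                    for step in (-1, 1):
                        y = list(x)
                        y[i] = min(hi, max(lo, y[i] + step))
                        if objective(y) < objective(x):
                            x, improved = y, True
            return x

        def vns(n=6, k_max=3, iters=100):
            x = [random.randint(0, 10) for _ in range(n)]
            for _ in range(iters):
                k = 1
                while k <= k_max:
                    candidate = local_search(shake(x, k))
                    if objective(candidate) < objective(x):
                        x, k = candidate, 1   # improvement: restart neighborhoods
                    else:
                        k += 1                # no improvement: enlarge neighborhood
            return x, objective(x)

        print(vns())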

  10. Bioinformatic indications that COPI- and clathrin-based transport systems are not present in chloroplasts: an Arabidopsis model.

    PubMed

    Lindquist, Emelie; Alezzawi, Mohamed; Aronsson, Henrik

    2014-01-01

    Coated vesicle transport occurs in the cytosol of yeast, mammals and plants. It consists of three different transport systems, the COPI, COPII and clathrin coated vesicles (CCV), all of which participate in the transfer of proteins and lipids between different cytosolic compartments. There are also indications that chloroplasts have a vesicle transport system. Several putative chloroplast-localized proteins, including CPSAR1 and CPRabA5e with similarities to cytosolic COPII transport-related proteins, were detected in previous experimental and bioinformatics studies. These indications raised the hypothesis that a COPI- and/or CCV-related system may be present in chloroplasts, in addition to a COPII-related system. To test this hypothesis we bioinformatically searched for chloroplast proteins that may have similar functions to known cytosolic COPI and CCV components in the model plants Arabidopsis thaliana and Oryza sativa (subsp. japonica) (rice). We found 29 such proteins, based on domain similarity, in Arabidopsis, and 14 in rice. However, many components could not be identified, and most of those identified have assigned roles that are not related to either COPI or CCV transport. We conclude that COPII is probably the only active vesicle system in chloroplasts, at least in the model plants. The evolutionary implications of the findings are discussed.
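
    The search strategy described (flagging chloroplast candidates by domain similarity to known cytosolic COPI/CCV components) reduces to a cross-reference of two domain-annotation tables. The sketch below uses invented protein and domain identifiers as placeholders; a real analysis would use Pfam/InterPro annotations of the Arabidopsis and rice proteomes.

        # Hypothetical domain annotations (protein -> set of domain IDs).
        # Both the locus names and the PF-style IDs below are placeholders.
        known_components = {
            "COPI_beta_like":   {"PF_A", "PF_B"},
            "Clathrin_HC_like": {"PF_C"},
        }
        chloroplast_candidates = {
            "AT_locus_1": {"PF_A"},
            "AT_locus_2": {"PF_X"},
        }

        reference_domains = set().union(*known_components.values())

        # Keep candidates sharing at least one domain with a known COPI/CCV component.
        hits = {p: d & reference_domains
                for p, d in chloroplast_candidates.items() if d & reference_domains}
        for protein, shared in sorted(hits.items()):
            print(protein, ",".join(sorted(shared)))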

  11. Bioinformatic Indications That COPI- and Clathrin-Based Transport Systems Are Not Present in Chloroplasts: An Arabidopsis Model

    PubMed Central

    Aronsson, Henrik

    2014-01-01

    Coated vesicle transport occurs in the cytosol of yeast, mammals and plants. It consists of three different transport systems, the COPI, COPII and clathrin coated vesicles (CCV), all of which participate in the transfer of proteins and lipids between different cytosolic compartments. There are also indications that chloroplasts have a vesicle transport system. Several putative chloroplast-localized proteins, including CPSAR1 and CPRabA5e with similarities to cytosolic COPII transport-related proteins, were detected in previous experimental and bioinformatics studies. These indications raised the hypothesis that a COPI- and/or CCV-related system may be present in chloroplasts, in addition to a COPII-related system. To test this hypothesis we bioinformatically searched for chloroplast proteins that may have similar functions to known cytosolic COPI and CCV components in the model plants Arabidopsis thaliana and Oryza sativa (subsp. japonica) (rice). We found 29 such proteins, based on domain similarity, in Arabidopsis, and 14 in rice. However, many components could not be identified, and most of those identified have assigned roles that are not related to either COPI or CCV transport. We conclude that COPII is probably the only active vesicle system in chloroplasts, at least in the model plants. The evolutionary implications of the findings are discussed. PMID:25137124

  12. An Undergraduate-Built Prototype Attitude Determination System (PADS) for High Altitude Research Balloons.

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Abot, J.; Casarotto, V.; Dichoso, J.; Doody, E.; Esteves, F.; Morsch Filho, E.; Gonteski, D.; Lamos, M.; Leo, A.; Mulder, N.; Matubara, F.; Schramm, P.; Silva, R.; Quisberth, J.; Uritsky, G.; Kogut, A.; Lowe, L.; Mirel, P.; Lazear, J.

    2014-12-01

    In this project a multi-disciplinary undergraduate team from CUA, comprising majors in Physics, Mechanical Engineering, Electrical Engineering, and Biology, designs, builds, tests, flies, and analyzes the data from a prototype attitude determination system (PADS). The goal of the experiment is to determine if an inexpensive attitude determination system could be built for high altitude research balloons using MEMS gyros. PADS is a NASA funded project, built by students with the cooperation of CUA faculty, Verner, Bruhweiler, and Abot, along with the contributed expertise of researchers and engineers at NASA/GSFC, Kogut, Lowe, Mirel, and Lazear. The project was initiated through a course taught in CUA's School of Engineering, which was followed by a devoted effort by students during the summer of 2014. The project is an experiment to use 18 MEMS gyros, similar to those used in many smartphones, to produce an averaged positional error signal that could be compared with the motion of the fixed optical system as recorded through a string of optical images of stellar fields to be stored on a hard drive flown with the experiment. The optical system, camera microprocessor, and hard drive are enclosed in a pressure vessel, which maintains approximately atmospheric pressure throughout the balloon flight. The experiment uses multiple microprocessors to control the camera exposures, record gyro data, and provide thermal control. CUA students also participated in NASA-led design reviews. Four students traveled to NASA's Columbia Scientific Balloon Facility in Palestine, Texas to integrate PADS into a large balloon gondola containing other experiments, before being shipped, then launched in mid-August at Ft. Sumner, New Mexico. The payload is to fly at a float altitude of 40-45,000 m, and the flight is to last approximately 15 hours. The payload is to return to earth by parachute and the retrieved data are to be analyzed by CUA undergraduates. A description of the instrument is presented.
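
    The rationale for flying 18 MEMS gyros is that averaging N sensors with uncorrelated noise reduces the noise by roughly a factor of the square root of N. A small numerical sketch of that effect follows, with invented noise levels rather than flight data.

        import numpy as np

        rng = np.random.default_rng(0)
        n_gyros, n_samples = 18, 10_000
        true_rate = 0.5        # deg/s, assumed constant rotation rate for the demo
        noise_sigma = 0.05     # deg/s, assumed per-gyro white noise

        # Each row is one gyro's readings; the averaged signal is the column mean.
        readings = true_rate + noise_sigma * rng.standard_normal((n_gyros, n_samples))
        averaged = readings.mean(axis=0)

        print("single gyro noise:", readings[0].std())   # ~0.05
        print("averaged noise:   ", averaged.std())      # ~0.05 / sqrt(18) ~ 0.012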

  13. Enabling high-throughput data management for systems biology: The Bioinformatics Resource Manager

    SciTech Connect

    Shah, Anuj R.; Singhal, Mudita; Klicker, Kyle R.; Stephan, Eric G.; Wiley, H. S.; Waters, Katrina M.

    2007-02-25

    The Bioinformatics Resource Manager (BRM) is a problem-solving environment that provides the user with data retrieval, management, analysis and visualization capabilities through all aspects of an experimental study. Designed in collaboration with biologists, BRM simplifies the integration of experimental data across platforms and with other publicly available information from external data sources. An analysis pipeline is facilitated within BRM by the seamless connectivity of user data with visual analytics tools, through reformatting of the data for easy import. BRM is developed using Java™ and other open-source technologies so that it can be freely distributed.

  14. Measurement of airflow and pressure characteristics of a fan built in a car ventilation system

    NASA Astrophysics Data System (ADS)

    Pokorný, Jan; Poláček, Filip; Fojtlín, Miloš; Fišer, Jan; Jícha, Miroslav

    2016-03-01

    The aim of this study was to identify a set of operating points of a fan built into the ventilation system of our test car. These operating points are given by the fan pressure characteristics and are defined by the pressure drop of the HVAC system (air ducts and vents) and the volumetric flow rate of ventilation air. To cover a wide range of pressure-drop situations, four vent-flap setups were examined: (1) all vents opened, (2) only central vents closed, (3) only central vents opened and (4) all vents closed. To cover different volumetric flow rates, each case was measured for at least four different fan speeds defined by the fan voltage. It was observed that the pressure difference of the fan is proportional to the fan voltage and strongly depends on the throttling of the air distribution system by the vent-flap settings. In the case of our test car, we identified correlations between the volumetric flow rate of ventilation air, the fan pressure difference and the fan voltage. These correlations will facilitate and reduce the time costs of subsequent experiments with this test car.
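
    Fan pressure-flow characteristics of this kind are commonly summarized by fitting a quadratic curve, dp = a*Q**2 + b*Q + c, to the measured points at each fan voltage. The sketch below shows such a fit with invented measurement values; the actual correlations reported for the test car are not reproduced here.

        import numpy as np

        # Hypothetical measurements at one fan voltage:
        # volumetric flow rate Q [m^3/h] and fan pressure difference dp [Pa].
        Q = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
        dp = np.array([310.0, 290.0, 255.0, 205.0, 140.0])

        # Least-squares quadratic fan curve: dp = a*Q**2 + b*Q + c
        a, b, c = np.polyfit(Q, dp, deg=2)
        print(f"dp(Q) = {a:.4e}*Q^2 + {b:.4e}*Q + {c:.2f}")

        # Interpolate the expected pressure difference at an intermediate flow rate.
        print("dp at 175 m^3/h:", np.polyval([a, b, c], 175.0))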

  15. Crowdsourcing for bioinformatics

    PubMed Central

    Good, Benjamin M.; Su, Andrew I.

    2013-01-01

    Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume ‘microtasks’ and systems for solving high-difficulty ‘megatasks’. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches. Contact: bgood@scripps.edu PMID:23782614

  16. Performance of the prototype gas recirculation system with built-in RGA for INO RPC system

    NASA Astrophysics Data System (ADS)

    Bhuyan, M.; Datar, V. M.; Joshi, A.; Kalmani, S. D.; Mondal, N. K.; Rahman, M. A.; Satyanarayana, B.; Verma, P.

    2012-01-01

    An open-loop gas recovery and recirculation system has been developed for the INO RPC system. The gas mixture coming from the RPC exhaust is first desiccated by passing through a molecular sieve (3 Å + 4 Å). Subsequent scrubbing over basic active alumina removes toxic and acidic contaminants. The isobutane and Freon are then separated by diffusion and liquefied by fractional condensation, cooling down to -26 °C. A Residual Gas Analyser (RGA) is being used in the loop to study the performance of the recirculation system. The results of the RGA analysis will be discussed.

  17. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS)

    PubMed Central

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-01-01

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case of MEMS (Micro Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present the experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subject to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of the statistical features of the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that detection accuracy of higher than 90% is achievable. PMID:27271630
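
    The identification pipeline (statistical features extracted per scenario, then an SVM classifier) can be sketched with scikit-learn. The feature set and the synthetic sensor traces below are illustrative assumptions; the paper's exact features and IMU channels are not reproduced.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)

        def features(trace):
            """Per-trace statistics (assumed feature set): mean, std, third moment, max."""
            return [trace.mean(), trace.std(), ((trace - trace.mean()) ** 3).mean(), trace.max()]

        # Synthetic stand-in for sensor traces from 3 phones (class labels 0, 1, 2):
        # each phone gets a slightly different offset/noise, mimicking manufacturing spread.
        X, y = [], []
        for phone in range(3):
            for _ in range(60):
                trace = phone * 0.02 + (0.1 + 0.01 * phone) * rng.standard_normal(500)
                X.append(features(trace))
                y.append(phone)

        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
        clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
        print("accuracy:", clf.score(X_test, y_test))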

  18. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS).

    PubMed

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-01-01

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case of MEMS (Micro Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present the experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subject to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of the statistical features of the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that detection accuracy of higher than 90% is achievable. PMID:27271630

  19. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS).

    PubMed

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-06-03

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case of MEMS (Micro Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present the experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subject to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of the statistical features of the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that detection accuracy of higher than 90% is achievable.

  20. Recommendation Systems for Geoscience Data Portals Built by Analyzing Usage Patterns

    NASA Astrophysics Data System (ADS)

    Crosby, C.; Nandigam, V.; Baru, C.

    2009-04-01

    Since its launch five years ago, the National Science Foundation-funded GEON Project (www.geongrid.org) has been providing access to a variety of geoscience data sets such as geologic maps and other geographic information system (GIS)-oriented data, paleontologic databases, gravity and magnetics data and LiDAR topography via its online portal interface. In addition to data, the GEON Portal also provides web-based tools and other resources that enable users to process and interact with data. Examples of these tools include functions to dynamically map and integrate GIS data, compute synthetic seismograms, and produce custom digital elevation models (DEMs) with user-defined parameters such as resolution. The GEON Portal, built on the GridSphere portal framework, allows us to capture user interaction with the system. In addition to the site access statistics captured by tools like Google Analytics, which record hits per unit time, search keywords, operating systems, browsers, and referring sites, we also record additional statistics such as which data sets are being downloaded and in what formats, processing parameters, and navigation pathways through the portal. With over four years of data now available from the GEON Portal, this record of usage is a rich resource for exploring how earth scientists discover and utilize online data sets. Furthermore, we propose that these data could ultimately be harnessed to optimize the way users interact with the data portal, to design intelligent processing and data management systems, and to make recommendations on algorithm settings and other available relevant data. The paradigm of integrating popular and commonly used patterns to make recommendations to a user is well established in the world of e-commerce, where users receive suggestions on books, music and other products that they may find interesting based on their website browsing and purchasing history, as well as the patterns of fellow users who have made similar
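
    A usage-pattern recommender of the kind proposed can start from simple item-item co-occurrence in the download logs: data sets frequently downloaded in the same session are suggested to the next user who picks one of them. A minimal sketch follows; the session log is invented for illustration.

        from collections import defaultdict
        from itertools import combinations

        # Hypothetical portal sessions: each entry lists data sets downloaded together.
        sessions = [
            ["lidar_dem", "geologic_map", "gravity_grid"],
            ["lidar_dem", "geologic_map"],
            ["gravity_grid", "magnetics_grid"],
            ["lidar_dem", "magnetics_grid", "geologic_map"],
        ]

        # Count how often each pair of data sets co-occurs in a session.
        cooccur = defaultdict(int)
        for s in sessions:
            for a, b in combinations(sorted(set(s)), 2):
                cooccur[(a, b)] += 1

        def recommend(item, top_n=3):
            scores = defaultdict(int)
            for (a, b), count in cooccur.items():
                if a == item:
                    scores[b] += count
                elif b == item:
                    scores[a] += count
            return sorted(scores, key=scores.get, reverse=True)[:top_n]

        print(recommend("lidar_dem"))   # e.g. ['geologic_map', ...]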

  1. Experiences with Testing the Largest Ground System NASA Has Ever Built

    NASA Technical Reports Server (NTRS)

    Lehtonen, Ken; Messerly, Robert

    2003-01-01

    In the 1980s, the National Aeronautics and Space Administration (NASA) embarked upon a major Earth-focused program called Mission to Planet Earth. The Goddard Space Flight Center (GSFC) was selected to manage and develop a key component - the Earth Observing System (EOS). The EOS consisted of four major missions designed to monitor the Earth. The missions included four spacecraft: Terra (launched December 1999), Aqua (launched May 2002), ICESat (Ice, Cloud, and Land Elevation Satellite, launched January 2003), and Aura (scheduled for launch January 2004). The purpose of these missions was to provide support for NASA's long-term research effort for determining how human-induced and natural changes affect our global environment. The EOS Data and Information System (EOSDIS), a globally distributed, large-scale scientific system, was built to support EOS. Its primary function is to capture, collect, process, and distribute the most voluminous set of remotely sensed scientific data to date, estimated to be 350 Gbytes per day. The EOSDIS is composed of a diverse set of elements with functional capabilities that require the implementation of a complex set of computers, high-speed networks, mission-unique equipment, and associated Information Technology (IT) software along with mission-specific software. All missions are constrained by schedule, budget, and staffing resources, and rigorous testing has been shown to be critical to the success of each mission. This paper addresses the challenges associated with the planning, test definition, resource scheduling, execution, and discrepancy reporting involved in the mission readiness testing of a ground system on the scale of EOSDIS. The size and complexity of the mission systems supporting the Aqua flight operations, for example, combined with the limited resources available, prompted the project to challenge the prevailing testing culture. The resulting success of the Aqua Mission Readiness Testing (MRT) program was due in no

  2. Influence of various alternative bedding materials on pododermatitis in broilers raised in a built-up litter system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Broilers in the United States are frequently raised on built-up litter systems, primarily bedded with pine wood chips (shavings) or sawdust. There is continuing interest in alternative bedding materials as pine products are often in short supply and prices rise accordingly. Alternative bedding mat...

  3. Using Geographic Information Systems (GIS) to assess the role of the built environment in influencing obesity: a glossary.

    PubMed

    Thornton, Lukar E; Pearce, Jamie R; Kavanagh, Anne M

    2011-07-01

    Features of the built environment are increasingly being recognised as potentially important determinants of obesity. This has come about, in part, because of advances in methodological tools such as Geographic Information Systems (GIS). GIS has made the procurement of data related to the built environment easier and given researchers the flexibility to create a new generation of environmental exposure measures such as the travel time to the nearest supermarket or calculations of the amount of neighbourhood greenspace. Given the rapid advances in the availability of GIS data and the relative ease of use of GIS software, a glossary on the use of GIS to assess the built environment is timely. As a case study, we draw on aspects of the food and physical activity environments as they might apply to obesity, to define key GIS terms related to data collection, concepts, and the measurement of environmental features.
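
    One of the GIS-derived exposure measures mentioned, distance to the nearest supermarket, reduces to a nearest-neighbour query over coordinates. The sketch below uses a plain haversine (straight-line) distance on invented coordinates; a real GIS workflow would more often use network travel time or distance.

        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two WGS84 points, in kilometres."""
            lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
            a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
            return 2 * 6371.0 * asin(sqrt(a))

        # Hypothetical residence and supermarket locations (lat, lon).
        home = (-37.80, 144.96)
        supermarkets = [(-37.81, 144.95), (-37.79, 145.00), (-37.85, 144.90)]

        nearest = min(supermarkets, key=lambda s: haversine_km(*home, *s))
        print(nearest, round(haversine_km(*home, *nearest), 2), "km")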

  4. Pattern recognition in bioinformatics.

    PubMed

    de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T

    2013-09-01

    Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.

  5. Experiences with workflows for automating data-intensive bioinformatics.

    PubMed

    Spjuth, Ola; Bongcam-Rudloff, Erik; Hernández, Guillermo Carrasco; Forer, Lukas; Giovacchini, Mario; Guimera, Roman Valls; Kallio, Aleksi; Korpelainen, Eija; Kańduła, Maciej M; Krachunov, Milko; Kreil, David P; Kulev, Ognyan; Łabaj, Paweł P; Lampa, Samuel; Pireddu, Luca; Schönherr, Sebastian; Siretskiy, Alexey; Vassilev, Dimitar

    2015-01-01

    High-throughput technologies, such as next-generation sequencing, have turned molecular biology into a data-intensive discipline, requiring bioinformaticians to use high-performance computing resources and carry out data management and analysis tasks on a large scale. Workflow systems can be useful to simplify the construction of analysis pipelines that automate tasks, support reproducibility and provide measures for fault-tolerance. However, workflow systems can incur significant development and administration overhead, so bioinformatics pipelines are often still built without them. We present our experiences with workflows and workflow systems within the bioinformatics community participating in a series of hackathons and workshops of the EU COST action SeqAhead. The organizations involved are working on similar problems, but we have addressed them with different strategies and solutions. This fragmentation of efforts is inefficient and leads to redundant and incompatible solutions. Based on our experiences we define a set of recommendations for future systems to enable efficient yet simple bioinformatics workflow construction and execution. PMID:26282399
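
    The kind of dependency-driven pipeline that workflow systems automate can be reduced to a topological execution of tasks. The sketch below is a generic illustration in Python, not any of the workflow systems discussed by the authors; the task names and actions are invented.

        from graphlib import TopologicalSorter  # Python 3.9+

        # Hypothetical pipeline: each task maps to the set of tasks it depends on.
        dependencies = {
            "fastqc":  set(),
            "align":   {"fastqc"},
            "sort":    {"align"},
            "callvar": {"sort"},
        }

        actions = {
            "fastqc":  lambda: print("running quality control"),
            "align":   lambda: print("aligning reads"),
            "sort":    lambda: print("sorting alignments"),
            "callvar": lambda: print("calling variants"),
        }

        # Execute tasks in an order that respects the dependency graph.
        for task in TopologicalSorter(dependencies).static_order():
            actions[task]()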

  6. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analyses on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via BLAST searches and GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects. PMID:26351170

  7. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules, including genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analyses on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via BLAST searches and GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.

  8. Bioinformatics: promises and progress.

    PubMed

    Gupta, Shipra; Misra, Gauri; Khurana, S M Paul

    2015-01-01

    Bioinformatics is a multidisciplinary science that solves and analyzes biological problems. With the explosion in biomedical data, the demand for bioinformatics has increased steadily. The present paper provides an overview of the various ways in which biologists and biological researchers in domains such as neurology, structural and functional biology, evolutionary biology and clinical science use bioinformatics applications for data analysis and to summarise their research. A new perspective is used to classify the knowledge available in the field, which will help a general audience to understand the applications of bioinformatics.

  9. APEX - a Petri net process modeling tool built on a discrete-event simulation system

    SciTech Connect

    Gish, J.W.

    1996-12-31

    APEX, the Animated Process Experimentation tool, provides a capability for defining, simulating and animating process models. Although APEX was primarily constructed for the modeling and analysis of software process models, we have found that it is much more broadly applicable and is suitable for process modeling tasks outside the domain of software processes. APEX has been constructed as a library of simulation blocks that implement timed hierarchical colored Petri Nets. These Petri Net blocks operate in conjunction with EXTEND, a general purpose continuous and discrete-event simulation tool. EXTEND provides a flexible, powerful and extensible environment with features particularly suitable for the modeling of complex processes. APEX's Petri Net block additions to EXTEND provide an inexpensive capability with well-defined and easily understood semantics that is a powerful, easy to use, flexible means to engage in process modeling and evaluation. The vast majority of software process research has focused on the enactment of software processes. Little has been said about the actual creation and evaluation of software process models necessary to support enactment. APEX has been built by the Software Engineering Process Technology Project at GTE Laboratories which has been focusing on this neglected area of process model definition and analysis. We have constructed high-level software lifecycle models, a set of models that demonstrate differences between four levels of the SEI Capability Maturity Model (CMM), customer care process models, as well as models involving more traditional synchronization and coordination problems such as producer-consumer and 2-phase commit. APEX offers a unique blend of technology from two different disciplines: discrete-event simulation and Petri Net modeling. Petri Nets provide a well-defined and rich semantics in a simple, easy to understand notation. The simulation framework allows for execution, animation, and measurement of the resultant models.
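
    The Petri-net semantics underlying APEX can be illustrated with a minimal marking/transition-firing sketch: a transition is enabled when each of its input places holds enough tokens, and firing moves tokens from inputs to outputs. This generic Python sketch is not APEX or EXTEND code; the small net below is invented.

        # Marking: tokens per place. Transitions: (inputs, outputs) with token counts.
        marking = {"requirements": 1, "design": 0, "code": 0}

        transitions = {
            "do_design": ({"requirements": 1}, {"design": 1}),
            "do_coding": ({"design": 1}, {"code": 1}),
        }

        def enabled(name):
            inputs, _ = transitions[name]
            return all(marking[p] >= n for p, n in inputs.items())

        def fire(name):
            if not enabled(name):
                raise ValueError(f"transition {name} is not enabled")
            inputs, outputs = transitions[name]
            for p, n in inputs.items():
                marking[p] -= n
            for p, n in outputs.items():
                marking[p] += n

        fire("do_design")
        fire("do_coding")
        print(marking)   # {'requirements': 0, 'design': 0, 'code': 1}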

  10. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background: We introduce a Knowledge-based Decision Support System (KDSS) in order to face the Protein Complex Extraction issue. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore it offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results: We briefly present the KDSS' architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. Our system's final results are then composed of a workflow of tasks that can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions: The proposed approach, using the KDSS' knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results. PMID:23368995

  11. A Built-In System of Evaluation for Reform Projects and Programmes in Education.

    ERIC Educational Resources Information Center

    Dave, Ravindra H.

    1980-01-01

    An EIPOL grid which combines five major dimensions of a broad-based evaluation system with different steps of a project cycle provides a basic operational framework for designing and adopting a more functional system of reform evaluation. (Editor)

  12. The House That TRACES Built: A Conceptual Model of Service Delivery Systems and Implications for Change.

    ERIC Educational Resources Information Center

    Schalock, Mark D.; And Others

    1994-01-01

    This article develops a framework for a common definition of system and system change drawing heavily from current understandings within chaos theory, quantum physics, and self-ordering systems. The framework is discussed in the context of TRACES, the national technical assistance agency for students with deaf blindness, and service delivery…

  13. Computational intelligence techniques in bioinformatics.

    PubMed

    Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I

    2013-12-01

    Computational intelligence (CI) is a well-established paradigm with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitate intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of the CI techniques in bioinformatics. We will show how CI techniques including neural networks, restricted Boltzmann machine, deep belief network, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines, could be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein function prediction and its structure. We discuss some representative methods to provide inspiring examples to illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented and an extensive bibliography is included. PMID:23891719

  14. Bioinformatics, genomics and evolution of non-flagellar type-III secretion systems: a Darwinian perspective.

    PubMed

    Pallen, Mark J; Beatson, Scott A; Bailey, Christopher M

    2005-04-01

    We review the biology of non-flagellar type-III secretion systems from a Darwinian perspective, highlighting the themes of evolution, conservation, variation and decay. The presence of these systems in environmental organisms such as Myxococcus, Desulfovibrio and Verrucomicrobium hints at roles beyond virulence. We review newly discovered sequence homologies (e.g., YopN/TyeA and SepL). We discuss synapomorphies that might be useful in formulating a taxonomy of type-III secretion. The problem of information overload is likely to be ameliorated by the launch of a web site devoted to the comparative biology of type-III secretion.

  15. BioSig: A bioinformatic system for studying the mechanism of intra-cell signaling

    SciTech Connect

    Parvin, B.; Cong, G.; Fontenay, G.; Taylor, J.; Henshall, R.; Barcellos-Hoff, M.H.

    2000-12-15

    Mapping inter-cell signaling pathways requires an integrated view of experimental and informatic protocols. BioSig provides the foundation for cataloging inter-cell responses as a function of particular conditioning, treatment, staining, etc. for either in vivo or in vitro experiments. This paper outlines the system architecture, a functional data model for representing experimental protocols, algorithms for image analysis, and the required statistical analysis. The architecture provides remote shared operation of an inverted optical microscope, and couples instrument operation with image acquisition and annotation. The information is stored in an object-oriented database. The algorithms extract structural information such as morphology and organization, and map it to functional information such as inter-cellular responses. An example of usage of this system is included.

  16. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    PubMed

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.

  17. Bioinformatics meets parasitology.

    PubMed

    Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B

    2012-05-01

    The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention.

  18. Biggest challenges in bioinformatics

    PubMed Central

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-01-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the ‘Biggest Challenges in Bioinformatics' in a ‘World Café' style event. PMID:23492829

  19. Geochemistry of rare earth elements in a passive treatment system built for acid mine drainage remediation.

    PubMed

    Prudêncio, Maria Isabel; Valente, Teresa; Marques, Rosa; Sequeira Braga, Maria Amália; Pamplona, Jorge

    2015-11-01

    Rare earth elements (REE) were used to assess attenuation processes in a passive system for acid mine drainage treatment (Jales, Portugal). Hydrochemical parameters and REE contents in water, soils and sediments were obtained along the treatment system, after summer and winter. A decrease of REE contents in the water resulting from the interaction with limestone after summer occurs; in the wetlands REE are significantly released by the soil particles to the water. After winter, a higher water dynamics favors the AMD treatment effectiveness and performance since REE contents decrease along the system; La and Ce are preferentially sequestered by ochre sludge but released to the water in the wetlands, influencing the REE pattern of the creek water. Thus, REE fractionation occurs in the passive treatment systems and can be used as tracer to follow up and understand the geochemical processes that promote the remediation of AMD.

  20. Geochemistry of rare earth elements in a passive treatment system built for acid mine drainage remediation.

    PubMed

    Prudêncio, Maria Isabel; Valente, Teresa; Marques, Rosa; Sequeira Braga, Maria Amália; Pamplona, Jorge

    2015-11-01

    Rare earth elements (REE) were used to assess attenuation processes in a passive system for acid mine drainage treatment (Jales, Portugal). Hydrochemical parameters and REE contents in water, soils and sediments were obtained along the treatment system, after summer and winter. A decrease of REE contents in the water resulting from the interaction with limestone after summer occurs; in the wetlands REE are significantly released by the soil particles to the water. After winter, a higher water dynamics favors the AMD treatment effectiveness and performance since REE contents decrease along the system; La and Ce are preferentially sequestered by ochre sludge but released to the water in the wetlands, influencing the REE pattern of the creek water. Thus, REE fractionation occurs in the passive treatment systems and can be used as tracer to follow up and understand the geochemical processes that promote the remediation of AMD. PMID:26247412

  1. Component-Based Approach for Educating Students in Bioinformatics

    ERIC Educational Resources Information Center

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  2. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone

    PubMed Central

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-01-01

    Continuously monitoring the ECG signals over hours combined with activity status is very important for preventing cardiovascular diseases. A traditional ECG holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of the smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. The whole sensor is very small, with a size of only 58 × 50 × 10 mm suited to wearable monitoring applications owing to the AFE design, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the built-in kinetic sensors of the smartphone, the proposed system can compute and recognize the user's physical activity, and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and identifying the most common abnormal ECG patterns in different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring without any extra cost for kinetic sensor design but with the help of the widespread smartphone. PMID:25996508

  3. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone.

    PubMed

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-05-19

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of the smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Thanks to the AFE design, the whole sensor is very small, with a size of only 58 × 50 × 10 mm, suitable for wearable monitoring, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the built-in kinetic sensors of the smartphone, the proposed system can compute and recognize the user's physical activity and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and in identifying the most common abnormal ECG patterns during different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring, without any extra cost for kinetic sensor design, with the help of the widespread smartphone.

  4. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone.

    PubMed

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-01-01

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of the smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Thanks to the AFE design, the whole sensor is very small, with a size of only 58 × 50 × 10 mm, suitable for wearable monitoring, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the built-in kinetic sensors of the smartphone, the proposed system can compute and recognize the user's physical activity and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and in identifying the most common abnormal ECG patterns during different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring, without any extra cost for kinetic sensor design, with the help of the widespread smartphone. PMID:25996508

  5. Evolutionary dynamics of RNA-like replicator systems: A bioinformatic approach to the origin of life.

    PubMed

    Takeuchi, Nobuto; Hogeweg, Paulien

    2012-09-01

    We review computational studies on prebiotic evolution, focusing on informatic processes in RNA-like replicator systems. In particular, we consider the following processes: the maintenance of information by replicators with and without interactions, the acquisition of information by replicators having a complex genotype-phenotype map, the generation of information by replicators having a complex genotype-phenotype-interaction map, and the storage of information by replicators serving as dedicated templates. Focusing on these informatic aspects, we review studies on quasi-species, error threshold, RNA-folding genotype-phenotype map, hypercycle, multilevel selection (including spatial self-organization, classical group selection, and compartmentalization), and the origin of DNA-like replicators. In conclusion, we pose a future question for theoretical studies on the origin of life.
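    One of the classical informatic results touched on here is the error threshold for maintaining information. A minimal single-peak quasi-species calculation (an illustration under textbook assumptions, not taken from the reviewed models; back mutation is neglected) shows how the equilibrium frequency of a master sequence collapses once the per-base copying fidelity q falls below roughly A^(-1/L):

```python
def master_frequency(q, L=100, A=10.0):
    """Equilibrium frequency of the master sequence in the single-peak
    quasi-species approximation (back mutation neglected).
    q: per-base copying fidelity, L: genome length, A: master fitness advantage."""
    Q = q ** L                          # probability of an error-free copy
    return max(0.0, (A * Q - 1.0) / (A - 1.0))

for q in (0.999, 0.99, 0.98, 0.97):
    print(f"fidelity {q}: master frequency = {master_frequency(q):.3f}")
```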

  6. INTEGRATION OF SYSTEMS GLYCOBIOLOGY WITH BIOINFORMATICS TOOLBOXES, GLYCOINFORMATICS RESOURCES AND GLYCOPROTEOMICS DATA

    PubMed Central

    Liu, Gang; Neelamegham, Sriram

    2015-01-01

    The glycome constitutes the entire complement of free carbohydrates and glycoconjugates expressed on whole cells or tissues. ‘Systems Glycobiology’ is an emerging discipline that aims to quantitatively describe and analyse the glycome. Here, instead of developing a detailed understanding of single biochemical processes, a combination of computational and experimental tools is used to seek an integrated or ‘systems-level’ view. This can explain how multiple biochemical reactions and transport processes interact with each other to control glycome biosynthesis and function. Computational methods in this field commonly build in silico reaction network models to describe experimental data derived from structural studies that measure cell-surface glycan distribution. While considerable progress has been made, several challenges remain due to the complex and heterogeneous nature of this post-translational modification. First, for the in silico models to be standardized and shared among laboratories, it is necessary to integrate glycan structure information and glycosylation-related enzyme definitions into the mathematical models. Second, as glycoinformatics resources grow, it would be attractive to utilize ‘Big Data’ stored in these repositories for model construction and validation. Third, while the technology for profiling the glycome at the whole-cell level has been standardized, there is a need to integrate mass spectrometry-derived site-specific glycosylation data into the models. The current review discusses progress that is being made to resolve the above bottlenecks. The focus is on how computational models can bridge the gap between ‘data’ generated in wet-laboratory studies and ‘knowledge’ that can enhance our understanding of the glycome. PMID:25871730
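    The in silico reaction network models referred to above are, at their simplest, systems of mass-action differential equations tracking glycan intermediates through successive enzymatic steps. The sketch below integrates a hypothetical three-step linear pathway with SciPy; the species names and rate constants are invented for illustration, not drawn from any published glycosylation model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical linear glycosylation pathway  G0 --k1--> G1 --k2--> G2
# with first-order (mass-action) kinetics; rate constants are illustrative.
k1, k2 = 0.8, 0.3   # 1/min

def rhs(t, y):
    g0, g1, g2 = y
    return [-k1 * g0,
            k1 * g0 - k2 * g1,
            k2 * g1]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0])
print("glycan distribution at t = 20 min:", np.round(sol.y[:, -1], 3))
```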

  7. Integration of systems glycobiology with bioinformatics toolboxes, glycoinformatics resources, and glycoproteomics data.

    PubMed

    Liu, Gang; Neelamegham, Sriram

    2015-01-01

    The glycome constitutes the entire complement of free carbohydrates and glycoconjugates expressed on whole cells or tissues. 'Systems Glycobiology' is an emerging discipline that aims to quantitatively describe and analyse the glycome. Here, instead of developing a detailed understanding of single biochemical processes, a combination of computational and experimental tools is used to seek an integrated or 'systems-level' view. This can explain how multiple biochemical reactions and transport processes interact with each other to control glycome biosynthesis and function. Computational methods in this field commonly build in silico reaction network models to describe experimental data derived from structural studies that measure cell-surface glycan distribution. While considerable progress has been made, several challenges remain due to the complex and heterogeneous nature of this post-translational modification. First, for the in silico models to be standardized and shared among laboratories, it is necessary to integrate glycan structure information and glycosylation-related enzyme definitions into the mathematical models. Second, as glycoinformatics resources grow, it would be attractive to utilize 'Big Data' stored in these repositories for model construction and validation. Third, while the technology for profiling the glycome at the whole-cell level has been standardized, there is a need to integrate mass spectrometry-derived site-specific glycosylation data into the models. The current review discusses progress that is being made to resolve the above bottlenecks. The focus is on how computational models can bridge the gap between 'data' generated in wet-laboratory studies and 'knowledge' that can enhance our understanding of the glycome.

  8. The Growing Footprint of Climate Change: Can Systems Built Today Cope with Tomorrow's Weather Extremes?

    SciTech Connect

    Kintner-Meyer, Michael CW; Kraucunas, Ian P.

    2013-07-11

    This article describes how current climate conditions--with increasingly extreme storms, droughts, and heat waves and their ensuing effects on water quality and levels--are adding stress to an already aging power grid. Moreover, it explains how evaluations of that grid, built upon past weather patterns, are inadequate for assessing whether the nation's energy systems can cope with future climate changes. The authors make the case for investing in the development of robust, integrated electricity planning tools that account for these climate change factors as a means of enhancing electricity infrastructure resilience.

  9. Computational Systems Bioinformatics and Bioimaging for Pathway Analysis and Drug Screening

    PubMed Central

    Zhou, Xiaobo; Wong, Stephen T. C.

    2009-01-01

    The premise of today’s drug development is that the mechanism of a disease is highly dependent upon underlying signaling and cellular pathways. Such pathways are often composed of complexes of physically interacting genes, proteins, or biochemical activities coordinated by metabolic intermediates, ions, and other small solutes and are investigated with molecular biology approaches in genomics, proteomics, and metabonomics. Nevertheless, the recent declines in the pharmaceutical industry’s revenues indicate such approaches alone may not be adequate in creating successful new drugs. Our observation is that combining methods of genomics, proteomics, and metabonomics with techniques of bioimaging will systematically provide powerful means to decode or better understand molecular interactions and pathways that lead to disease and potentially generate new insights and indications for drug targets. The former methods provide the profiles of genes, proteins, and metabolites, whereas the latter techniques generate objective, quantitative phenotypes correlating to the molecular profiles and interactions. In this paper, we describe pathway reconstruction and target validation based on the proposed systems biologic approach and show selected application examples for pathway analysis and drug screening. PMID:20011613

  10. Humidity compensation of bad-smell sensing system using a detector tube and a built-in camera

    NASA Astrophysics Data System (ADS)

    Hirano, Hiroyuki; Nakamoto, Takamichi

    2011-09-01

    We developed a low-cost sensing system, robust against humidity changes, for detecting and estimating the concentration of bad smells such as hydrogen sulfide and ammonia. In a previous study, we developed an automated measurement system for a gas detector tube using a built-in camera instead of the conventional manual inspection of the tube. The concentration detectable by the developed system ranges from a few tens of ppb to a few tens of ppm. However, we previously found that the estimated concentration depends not only on the actual concentration but also on humidity. Here, we established a method to correct for the influence of humidity by creating a regression function whose inputs are the discoloration rate and the humidity. We studied two methods (backpropagation and a radial basis function network) for obtaining the regression function and evaluated them. Consequently, the system successfully estimated the concentration at a practical level even when the humidity changed.
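    The humidity correction described here amounts to fitting a regression surface that maps the pair (discoloration rate, humidity) to concentration. The following least-squares sketch reproduces that idea on synthetic data; the generating function, coefficients and noise level are assumptions made for the example rather than the authors' calibration. An RBF network or a backpropagation-trained network, as in the study, plays the same role as the polynomial basis used here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: concentration (ppm) as a hypothetical function
# of discoloration rate d (0..1) and relative humidity h (0..1).
d = rng.uniform(0.1, 0.9, 200)
h = rng.uniform(0.2, 0.8, 200)
conc = 20.0 * d - 8.0 * d * h + rng.normal(0.0, 0.3, d.size)

# Regression on polynomial features of (d, h), fitted by ordinary least squares.
X = np.column_stack([np.ones_like(d), d, h, d * h, d**2, h**2])
coef, *_ = np.linalg.lstsq(X, conc, rcond=None)

def estimate(d_new, h_new):
    """Humidity-corrected concentration estimate from one tube reading."""
    x = np.array([1.0, d_new, h_new, d_new * h_new, d_new**2, h_new**2])
    return float(x @ coef)

# The same discoloration rate read at two humidities gives different,
# humidity-corrected concentration estimates.
print(estimate(0.5, 0.3), estimate(0.5, 0.7))
```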

  11. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers.

    PubMed

    Kühnlenz, Florian; Nardelli, Pedro H J

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems in which the constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping the resistors it has; the delivered power is in turn a non-linear function that depends on the other agents' behavior, its own internal state, its global state perception, the information received from its neighbors via the communication network and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimum gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation. PMID:26730590
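    A highly simplified reading of the physical layer can be sketched directly: each agent holds some number of unit resistors in parallel across a source with internal resistance, and greedily adds a resistor whenever doing so raises its own delivered power by more than a minimum gain. All parameter values below are assumptions, and the communication and decision layers of the full model are omitted.

```python
import random

V, R_INT, R_UNIT = 10.0, 1.0, 5.0   # assumed source voltage, internal and unit resistance
MIN_GAIN = 0.01                      # minimum power gain that triggers a change

def agent_powers(counts):
    """Power delivered to each agent given its number of parallel unit resistors."""
    g_total = sum(n / R_UNIT for n in counts)
    r_bank = 1.0 / g_total if g_total else float("inf")
    u_bank = V * r_bank / (R_INT + r_bank)      # voltage across the parallel bank
    return [u_bank ** 2 * n / R_UNIT for n in counts]

random.seed(0)
counts = [1] * 10                                # ten agents, one resistor each
for step in range(50):
    i = random.randrange(len(counts))            # one agent reconsiders per step
    base = agent_powers(counts)[i]
    trial = counts.copy()
    trial[i] += 1                                # defect: grab a larger share
    if agent_powers(trial)[i] - base > MIN_GAIN:
        counts = trial

print("resistors per agent after 50 steps:", counts)
print("total delivered power:", round(sum(agent_powers(counts)), 3))
```

    Because every added resistor lowers the voltage across the shared bank, individually rational additions erode the total delivered power, which is the tension the agent-based study explores.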

  12. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers.

    PubMed

    Kühnlenz, Florian; Nardelli, Pedro H J

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems in which the constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping the resistors it has; the delivered power is in turn a non-linear function that depends on the other agents' behavior, its own internal state, its global state perception, the information received from its neighbors via the communication network and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimum gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation.

  13. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers

    PubMed Central

    Kühnlenz, Florian; Nardelli, Pedro H. J.

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems in which the constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping the resistors it has; the delivered power is in turn a non-linear function that depends on the other agents' behavior, its own internal state, its global state perception, the information received from its neighbors via the communication network and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimum gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation. PMID:26730590

  14. Teaching Folder Management System for the Enhancement of Engineering and Built Environment Faculty Program

    ERIC Educational Resources Information Center

    Ab-Rahman, Mohammad Syuhaimi; Mustaffa, Muhamad Azrin Mohd; Abdul, Nasrul Amir; Yusoff, Abdul Rahman Mohd; Hipni, Afiq

    2015-01-01

    A strong, systematic and well-executed management system will be able to minimize and coordinate workload. A number of committees, joined by departmental staff, need to be developed to achieve the objectives that have been set. Another important aspect is the monitoring department in order to ensure that the work done is correct and in…

  15. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-09-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.
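    The learning scheme summarised above pairs analog, bounded synaptic conductances with a perceptron-style update. The sketch below is a generic software analogue of that idea (not the authors' circuit): each synapse is a pair of clipped conductances whose difference acts as the weight, and the conductance bounds and programming step are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
G_MIN, G_MAX, DG = 0.0, 1.0, 0.05     # assumed conductance bounds and programming step

# Each synapse is a pair of bounded conductances; the effective weight is their difference.
g_pos = rng.uniform(0.4, 0.6, 4)       # 3 inputs + bias
g_neg = rng.uniform(0.4, 0.6, 4)

def predict(x):
    return 1 if (g_pos - g_neg) @ x > 0 else 0

# Learn a 3-input majority function with a perceptron-style update rule.
X = np.array([[a, b, c, 1] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = (X[:, :3].sum(axis=1) >= 2).astype(int)

for _ in range(30):
    for x, target in zip(X, y):
        err = target - predict(x)
        if err:
            # "Program" both conductances, always staying within device bounds.
            g_pos = np.clip(g_pos + DG * err * x, G_MIN, G_MAX)
            g_neg = np.clip(g_neg - DG * err * x, G_MIN, G_MAX)

print("predictions:", [predict(x) for x in X], "targets:", y.tolist())
```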

  16. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    PubMed Central

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-01-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations. PMID:27601088

  17. A SPECT system simulator built on the SolidWorks™ 3D-Design package

    PubMed Central

    Li, Xin; Furenlid, Lars R.

    2015-01-01

    We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorks™-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorks™ and save them as STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system. PMID:26190885
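    At its core, tracing gamma rays through an STL-defined aperture means intersecting rays with the triangles of the mesh. The snippet below is a standard Möller-Trumbore ray/triangle test in that spirit; it is illustrative only, whereas the actual simulator is GPU-accelerated and adds full physics cross sections and scintillation transport.

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore test: distance along the ray to the triangle, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                      # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

# One triangle of a hypothetical aperture mesh and one gamma-ray direction.
tri = [np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(ray_triangle(np.zeros(3), np.array([0.2, 0.2, 1.0]), *tri))   # -> 1.0
```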

  18. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses.

    PubMed

    Lin, Yu-Pu; Bennett, Christopher H; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-01-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations. PMID:27601088

  19. A SPECT system simulator built on the SolidWorks™ 3D design package

    NASA Astrophysics Data System (ADS)

    Li, Xin; Furenlid, Lars R.

    2014-09-01

    We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorks™-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorks™ and save them as STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system.

  20. Bioinformatics education in India.

    PubMed

    Kulkarni-Kale, Urmila; Sawant, Sangeeta; Chavan, Vishwas

    2010-11-01

    An account of bioinformatics education in India is presented along with future prospects. The establishment of the BTIS network by the Department of Biotechnology (DBT), Government of India, in the 1980s was a systematic effort to develop bioinformatics infrastructure in India and provide services to the scientific community. Advances in the field of bioinformatics underpinned the need for well-trained professionals with skills in information technology and biotechnology. As a result, programmes for capacity building in terms of human resource development were initiated. Educational programmes gradually evolved from the organisation of short-term workshops to the institution of formal diploma/degree programmes. A case study of the Master's degree course offered at the Bioinformatics Centre, University of Pune is discussed. Currently, many universities and institutes are offering bioinformatics courses at different levels, with variations in course content and degree of detail. The BioInformatics National Certification (BINC) examination, initiated in 2005 by DBT, provides a common yardstick to assess the knowledge and skills of students graduating from various institutions. The potential for broadening the scope of bioinformatics to transform it into a data-intensive discovery discipline is discussed. This necessitates amendments to existing curricula to accommodate upcoming developments.

  1. Dynamical energy analysis for built-up acoustic systems at high frequencies.

    PubMed

    Chappell, D J; Giani, S; Tanner, G

    2011-09-01

    Standard methods for describing the intensity distribution of mechanical and acoustic wave fields in the high frequency asymptotic limit are often based on flow transport equations. Common techniques are statistical energy analysis, employed mostly in the context of vibro-acoustics, and ray tracing, a popular tool in architectural acoustics. Dynamical energy analysis makes it possible to interpolate between standard statistical energy analysis and full ray tracing, containing both of these methods as limiting cases. In this work a version of dynamical energy analysis based on a Chebyshev basis expansion of the Perron-Frobenius operator governing the ray dynamics is introduced. It is shown that the technique can efficiently deal with multi-component systems overcoming typical geometrical limitations present in statistical energy analysis. Results are compared with state-of-the-art hp-adaptive discontinuous Galerkin finite element simulations.
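    In outline, the method propagates a phase-space density of ray trajectories with a transfer operator of Perron-Frobenius type and expands that density in Chebyshev polynomials; the schematic form below uses assumed notation purely for illustration and is not copied from the paper.

```latex
% Schematic transfer (Perron-Frobenius type) operator acting on a boundary
% phase-space density \rho; w carries damping/reflection losses and \varphi
% is the boundary ray map. Notation is illustrative.
\[
  (\mathcal{L}\rho)(X) \;=\; \int w(X')\,\delta\!\bigl(X - \varphi(X')\bigr)\,\rho(X')\,\mathrm{d}X' ,
  \qquad
  \rho(X) \;\approx\; \sum_{n=0}^{N} c_n\, T_n(X).
\]
% Projecting \mathcal{L} onto the Chebyshev polynomials T_n yields a finite
% matrix whose stationary density gives the energy distribution across the
% coupled subsystems.
```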

  2. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    PubMed

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2016-03-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.

  3. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    PubMed

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.

  4. An arch-shaped intraoral tongue drive system with built-in tongue-computer interfacing SoC.

    PubMed

    Park, Hangue; Ghovanloo, Maysam

    2014-01-01

    We present a new arch-shaped intraoral Tongue Drive System (iTDS) designed to occupy the buccal shelf in the user's mouth. The new arch-shaped iTDS, which will be referred to as the iTDS-2, incorporates a system-on-a-chip (SoC) that amplifies and digitizes the raw magnetic sensor data and sends it wirelessly to an external TDS universal interface (TDS-UI) via an inductive coil or a planar inverted-F antenna. A built-in transmitter (Tx) employs a dual-band radio that operates at either 27 MHz or 432 MHz band, according to the wireless link quality. A built-in super-regenerative receiver (SR-Rx) monitors the wireless link quality and switches the band if the link quality is below a predetermined threshold. An accompanying ultra-low power FPGA generates data packets for the Tx and handles digital control functions. The custom-designed TDS-UI receives raw magnetic sensor data from the iTDS-2, recognizes the intended user commands by the sensor signal processing (SSP) algorithm running in a smartphone, and delivers the classified commands to the target devices, such as a personal computer or a powered wheelchair. We evaluated the iTDS-2 prototype using center-out and maze navigation tasks on two human subjects, which proved its functionality. The subjects' performance with the iTDS-2 was improved by 22% over its predecessor, reported in our earlier publication. PMID:25405513

  5. An Arch-Shaped Intraoral Tongue Drive System with Built-in Tongue-Computer Interfacing SoC

    PubMed Central

    Park, Hangue; Ghovanloo, Maysam

    2014-01-01

    We present a new arch-shaped intraoral Tongue Drive System (iTDS) designed to occupy the buccal shelf in the user's mouth. The new arch-shaped iTDS, which will be referred to as the iTDS-2, incorporates a system-on-a-chip (SoC) that amplifies and digitizes the raw magnetic sensor data and sends it wirelessly to an external TDS universal interface (TDS-UI) via an inductive coil or a planar inverted-F antenna. A built-in transmitter (Tx) employs a dual-band radio that operates at either 27 MHz or 432 MHz band, according to the wireless link quality. A built-in super-regenerative receiver (SR-Rx) monitors the wireless link quality and switches the band if the link quality is below a predetermined threshold. An accompanying ultra-low power FPGA generates data packets for the Tx and handles digital control functions. The custom-designed TDS-UI receives raw magnetic sensor data from the iTDS-2, recognizes the intended user commands by the sensor signal processing (SSP) algorithm running in a smartphone, and delivers the classified commands to the target devices, such as a personal computer or a powered wheelchair. We evaluated the iTDS-2 prototype using center-out and maze navigation tasks on two human subjects, which proved its functionality. The subjects' performance with the iTDS-2 was improved by 22% over its predecessor, reported in our earlier publication. PMID:25405513

  6. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  7. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research.

    PubMed

    Yang, Jack Y; Niemierko, Andrzej; Bajcsy, Ruzena; Xu, Dong; Athey, Brian D; Zhang, Aidong; Ersoy, Okan K; Li, Guo-Zheng; Borodovsky, Mark; Zhang, Joe C; Arabnia, Hamid R; Deng, Youping; Dunker, A Keith; Liu, Yunlong; Ghafoor, Arif

    2010-12-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. Joydeep Ghosh (UT

  8. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research

    PubMed Central

    2010-01-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. Joydeep Ghosh (UT

  9. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    PubMed Central

    Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students’ attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. PMID:25452484

  10. A survey of scholarly literature describing the field of bioinformatics education and bioinformatics educational research.

    PubMed

    Magana, Alejandra J; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students' attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. PMID:25452484

  11. Microbial bioinformatics 2020.

    PubMed

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! PMID:27471065

  12. String Mining in Bioinformatics

    NASA Astrophysics Data System (ADS)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying the biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word “data-mining” is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
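    As a concrete, minimal instance of the intra- and inter-molecular similarity tasks described above, the sketch below indexes k-mers with plain hashing to find repeated segments within one sequence and segments shared between two sequences; production tools use suffix arrays or trees, but the questions asked are the same. The sequences and k are arbitrary example values.

```python
from collections import defaultdict

def kmer_positions(seq, k):
    """Map every k-mer to the list of positions where it occurs."""
    index = defaultdict(list)
    for i in range(len(seq) - k + 1):
        index[seq[i:i + k]].append(i)
    return index

def repeats(seq, k):
    """Intra-molecular similarity: k-mers occurring more than once in seq."""
    return {kmer: pos for kmer, pos in kmer_positions(seq, k).items() if len(pos) > 1}

def shared_kmers(seq_a, seq_b, k):
    """Inter-molecular similarity: k-mers common to both sequences."""
    return set(kmer_positions(seq_a, k)) & set(kmer_positions(seq_b, k))

s1, s2 = "ACGTACGTGGA", "TTACGTGCAGGA"
print(repeats(s1, 4))           # {'ACGT': [0, 4]}
print(shared_kmers(s1, s2, 4))  # contains 'ACGT', 'TACG', 'CGTG'
```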

  13. String Mining in Bioinformatics

    NASA Astrophysics Data System (ADS)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying the biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data-mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].

  14. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children

    PubMed Central

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Aslı; Sancar, Burcu

    2013-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children’s gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language. PMID:25663828

  15. An Online Bioinformatics Curriculum

    PubMed Central

    Searls, David B.

    2012-01-01

    Online learning initiatives over the past decade have become increasingly comprehensive in their selection of courses and sophisticated in their presentation, culminating in the recent announcement of a number of consortium and startup activities that promise to make a university education on the internet, free of charge, a real possibility. At this pivotal moment it is appropriate to explore the potential for obtaining comprehensive bioinformatics training with currently existing free video resources. This article presents such a bioinformatics curriculum in the form of a virtual course catalog, together with editorial commentary, and an assessment of strengths, weaknesses, and likely future directions for open online learning in this field. PMID:23028269

  16. Bioinformatics software resources.

    PubMed

    Gilbert, Don

    2004-09-01

    This review looks at internet archives, repositories and lists for obtaining popular and useful biology and bioinformatics software. Resources include collections of free software, services for the collaborative development of new programs, software news media and catalogues of links to bioinformatics software and web tools. Problems with such resources arise from needs for continued curator effort to collect and update these, combined with less than optimal community support, funding and collaboration. Despite some problems, the available software repositories provide needed public access to many tools that are a foundation for analyses in bioscience research efforts.

  17. Estimation of Vibrational Power in Built-Up Systems Involving Box-Like Structures, Part 1: Uniform Force Distribution

    NASA Astrophysics Data System (ADS)

    FULFORD, R. A.; PETERSSON, B. A. T.

    2000-05-01

    For the vibration analysis of built-up structures, traditional point-like connections cannot be applied where the interface is large and the wavelength is small. In these situations the spatially distributed wavefield has to be accounted for, whereby the field properties associated with the interface (i.e., velocity, force) have to be considered to be continuous over a surface or, for a one-dimensional contact, along a line. Due to the perceived complexity of these distributions, it is most common for analyses to employ a numerical technique which, whilst efficient as a methodology, is limited in that little is revealed about the physics of the system. The solutions can therefore be rather esoteric, and in conjunction with design this makes the techniques cumbersome to use. As a move towards overcoming the problem, the work presented considers a simplified analytical approach from which a model of a box-like structure is obtained. The basis of the approach is to consider the spatial properties of distributed forces in terms of their Fourier components and then hypothesize that the zero order, i.e., the uniform component, is dominant. In this way, the true spatial characteristics of the forces are retained but in a reduced and elementary form. This greatly simplifies the modelling. For the box-like structure, supported by an infinite plate-like recipient, a prediction of the vibratory power is considered and qualifying results established.
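    The simplification described above can be written compactly: expand the distributed interface force in spatial Fourier components and retain only the zero-order (uniform) term. The notation below is assumed for illustration rather than copied from the paper.

```latex
% Interface force distribution f(x) along a contact of length L, expanded in
% spatial Fourier components; the uniform (zero-order) component F_0 is the
% term retained in the simplified analysis. Notation is illustrative.
\[
  f(x) \;=\; \sum_{n=0}^{\infty} F_n \cos\!\left(\frac{n\pi x}{L}\right),
  \qquad
  F_0 \;=\; \frac{1}{L}\int_0^{L} f(x)\,\mathrm{d}x .
\]
% The transmitted power is then approximated from the uniform force and
% velocity components alone, P \approx (L/2)\,\mathrm{Re}\{F_0^{*} v_0\}.
```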

  18. Efficient azo dye decolorization in a continuous stirred tank reactor (CSTR) with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Gao, Lei; Cheng, Hao-Yi; Wang, Ai-Jie

    2016-10-01

    A continuous stirred tank reactor with built-in bioelectrochemical system (CSTR-BES) was developed for treating wastewater containing the azo dye Alizarin Yellow R (AYR). The decolorization efficiency (DE) of the CSTR-BES was 97.04±0.06% over 7 h with a sludge concentration of 3000 mg/L and an initial AYR concentration of 100 mg/L, which was superior to that of the sole CSTR mode (open circuit: 54.87±4.34%) and the sole BES mode (without sludge addition: 91.37±0.44%). The effects of sludge concentration and sodium acetate (NaAc) concentration on azo dye decolorization were investigated. The highest DE of the CSTR-BES over 4 h was 87.66±2.93%, with a sludge concentration of 12,000 mg/L, a NaAc concentration of 2000 mg/L and an initial AYR concentration of 100 mg/L. The results of this study indicated that the CSTR-BES could be a practical strategy for upgrading conventional anaerobic facilities for refractory wastewater treatment.

  19. Bioinformatics and School Biology

    ERIC Educational Resources Information Center

    Dalpech, Roger

    2006-01-01

    The rapidly changing field of bioinformatics is fuelling the need for suitably trained personnel with skills in relevant biological "sub-disciplines" such as proteomics, transcriptomics and metabolomics, etc. But because of the complexity--and sheer weight of data--associated with these new areas of biology, many school teachers feel…

  20. The influence of the built environment on outcomes from a "walking school bus study": a cross-sectional analysis using geographical information systems.

    PubMed

    Oreskovic, Nicolas M; Blossom, Jeff; Robinson, Alyssa I; Chen, Minghua L; Uscanga, Doris K; Mendoza, Jason A

    2014-11-01

    Active commuting to school increases children's daily physical activity. The built environment is associated with children's physical activity levels in cross-sectional studies. This study examined the role of the built environment in the outcomes of a "walking school bus" study. A geographical information system (GIS) was used to map out and compare the built environments around schools participating in a pilot walking school bus randomised controlled trial, as well as along school routes. Multi-level modelling was used to determine the built environment attributes associated with the outcomes of active commuting to school and accelerometer-determined moderate-to-vigorous physical activity (MVPA). There were no differences in the surrounding built environments of control (n = 4) and intervention (n = 4) schools participating in the walking school bus study. Among school walking routes, park space was inversely associated with active commuting to school (β = -0.008, SE = 0.004, P = 0.03), while mixed land use was positively associated with daily MVPA (β = 60.0, SE = 24.3, P = 0.02). There was effect modification such that high traffic volume and high street connectivity were associated with greater moderate-to-vigorous physical activity. The results of this study suggest that the built environment may play a role in active school commuting outcomes and daily physical activity.

  1. Feature selection in bioinformatics

    NASA Astrophysics Data System (ADS)

    Wang, Lipo

    2012-06-01

    In bioinformatics, there are often a large number of input features. For example, there are millions of single nucleotide polymorphisms (SNPs) that are genetic variations which determine the difference between any two unrelated individuals. In microarrays, thousands of genes can be profiled in each test. It is important to find out which input features (e.g., SNPs or genes) are useful in classification of a certain group of people or diagnosis of a given disease. In this paper, we investigate some powerful feature selection techniques and apply them to problems in bioinformatics. We are able to identify a very small number of input features sufficient for tasks at hand and we demonstrate this with some real-world data.
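    A minimal filter-style example of what such feature selection can look like in practice is sketched below: synthetic "expression" data with a handful of informative features, ranked by a two-sample t-statistic. The data, dimensions and scoring rule are assumptions for illustration; the paper's own techniques are more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_informative = 100, 1000, 5

# Synthetic "expression" matrix: only the first few features differ between classes.
labels = rng.integers(0, 2, n_samples)
X = rng.normal(0.0, 1.0, (n_samples, n_features))
X[:, :n_informative] += 1.5 * labels[:, None]      # class-dependent shift

def t_scores(X, y):
    """Absolute two-sample t-statistic per feature (a simple filter criterion)."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

ranked = np.argsort(t_scores(X, labels))[::-1]
print("top-ranked features:", ranked[:5])          # should recover indices 0..4
```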

  2. Forensic DNA and bioinformatics.

    PubMed

    Bianchi, Lucia; Liò, Pietro

    2007-03-01

    The field of forensic science is increasingly based on biomolecular data, and many European countries are establishing forensic databases to store DNA profiles from crime scenes and from known offenders and to apply DNA testing. The field is boosted by statistical and technological advances such as DNA microarray sequencing, TFT biosensors and machine learning algorithms, in particular Bayesian networks, which provide an effective way of organizing evidence and performing inference. The aim of this article is to discuss the state-of-the-art potential of bioinformatics in forensic DNA science. We also discuss how bioinformatics will address issues related to privacy rights, such as those raised by large-scale integration of crime, public health and population genetic susceptibility-to-disease databases.

  3. Neuroinformatics: from bioinformatics to databasing the brain.

    PubMed

    Morse, Thomas M

    2008-01-01

    Neuroinformatics seeks to create and maintain web-accessible databases of experimental and computational data, together with innovative software tools, essential for understanding the nervous system in its normal function and in neurological disorders. Neuroinformatics includes traditional bioinformatics of gene and protein sequences in the brain; atlases of brain anatomy and localization of genes and proteins; imaging of brain cells; brain imaging by positron emission tomography (PET), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), magnetoencephalography (MEG) and other methods; many electrophysiological recording methods; and clinical neurological data, among others. Building neuroinformatics databases and tools presents difficult challenges because they span a wide range of spatial scales and types of data stored and analyzed. Traditional bioinformatics, by comparison, focuses primarily on genomic and proteomic data (which of course also presents difficult challenges). Much of bioinformatics analysis focuses on sequences (DNA, RNA, and protein molecules) as the types of data that are stored, compared, and sometimes modeled. Bioinformatics is undergoing explosive growth with the addition, for example, of databases that catalog interactions between proteins, of databases that track the evolution of genes, and of systems biology databases which contain models of all aspects of organisms. This commentary briefly reviews neuroinformatics with clarification of its relationship to traditional and modern bioinformatics.

  4. The potential of translational bioinformatics approaches for pharmacology research.

    PubMed

    Li, Lang

    2015-10-01

    The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of 'omics' to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of systems pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines.

  5. Computer Simulation of Embryonic Systems: What can a virtual embryo teach us about developmental toxicity? (LA Conference on Computational Biology & Bioinformatics)

    EPA Science Inventory

    This presentation will cover work at EPA under the CSS program for: (1) Virtual Tissue Models built from the known biology of an embryological system and structured to recapitulate key cell signals and responses; (2) running the models with real (in vitro) or synthetic (in silico...

  6. Making sense of genomes of parasitic worms: Tackling bioinformatic challenges.

    PubMed

    Korhonen, Pasi K; Young, Neil D; Gasser, Robin B

    2016-01-01

    Billions of people and animals are infected with parasitic worms (helminths). Many of these worms cause diseases that have a major socioeconomic impact worldwide, and are challenging to control because existing treatment methods are often inadequate. There is, therefore, a need to work toward developing new intervention methods, built on a sound understanding of parasitic worms at the molecular level, the relationships that they have with their animal hosts and/or the diseases that they cause. Decoding the genomes and transcriptomes of these parasites brings us a step closer to this goal. The key focus of this article is to critically review and discuss bioinformatic tools used for the assembly and annotation of these genomes and transcriptomes, as well as various post-genomic analyses of transcription profiles, biological pathways, synteny, phylogeny, biogeography and the prediction and prioritisation of drug target candidates. Bioinformatic pipelines implemented and established recently provide practical and efficient tools for the assembly and annotation of genomes of parasitic worms, and will be applicable to a wide range of other parasites and eukaryotic organisms. Future research will need to assess the utility of long-read sequence data sets for enhanced genomic assemblies, and develop improved algorithms for gene prediction and post-genomic analyses, to enable comprehensive systems biology explorations of parasitic organisms.

  7. Phylogenetic trees in bioinformatics

    SciTech Connect

    Burr, Tom L

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
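
    To make the combinatorial point concrete, a standard result (not stated in the abstract) is that the number of distinct unrooted binary tree topologies for n OTUs is (2n-5)!!; a few lines of Python show how quickly it grows:

      def num_unrooted_trees(n):
          """Number of unrooted binary tree topologies for n >= 3 OTUs:
          (2n-5)!! = 1 * 3 * 5 * ... * (2n-5)."""
          count = 1
          for k in range(3, 2 * n - 4, 2):
              count *= k
          return count

      for n in (5, 10, 20):
          print(n, num_unrooted_trees(n))
      # 5 -> 15, 10 -> 2,027,025, 20 -> about 2.2e20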

  8. Adapting bioinformatics curricula for big data.

    PubMed

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.

  9. Bioinformatics tools and database resources for systems genetics analysis in mice--a short review and an evaluation of future needs.

    PubMed

    Durrant, Caroline; Swertz, Morris A; Alberts, Rudi; Arends, Danny; Möller, Steffen; Mott, Richard; Prins, Pjotr; van der Velde, K Joeri; Jansen, Ritsert C; Schughart, Klaus

    2012-03-01

    During a meeting of the SYSGENET working group 'Bioinformatics', currently available software tools and databases for systems genetics in mice were reviewed and the needs for future developments discussed. The group evaluated interoperability and performed initial feasibility studies. To aid future compatibility of software and exchange of already developed software modules, a strong recommendation was made by the group to integrate HAPPY and R/qtl analysis toolboxes, GeneNetwork and XGAP database platforms, and TIQS and xQTL processing platforms. R should be used as the principal computer language for QTL data analysis in all platforms and a 'cloud' should be used for software dissemination to the community. Furthermore, the working group recommended that all data models and software source code should be made visible in public repositories to allow a coordinated effort on the use of common data structures and file formats.

  10. Highlighting computations in bioscience and bioinformatics: review of the Symposium of Computations in Bioinformatics and Bioscience (SCBB07)

    PubMed Central

    Lu, Guoqing; Ni, Jun

    2008-01-01

    The Second Symposium on Computations in Bioinformatics and Bioscience (SCBB07) was held in Iowa City, Iowa, USA, on August 13–15, 2007. This annual event attracted dozens of bioinformatics professionals and students from China, Japan, Taiwan and the United States who are interested in solving emerging computational problems in bioscience. The Scientific Committee of the symposium selected 18 peer-reviewed papers for publication in this supplemental issue of BMC Bioinformatics. These papers cover a broad spectrum of topics in computational biology and bioinformatics, including DNA, protein and genome sequence analysis, gene expression and microarray analysis, computational proteomics and protein structure classification, systems biology and machine learning. PMID:18541044

  11. Systems architecture: a new model for sustainability and the built environment using nanotechnology, biotechnology, information technology, and cognitive science with living technology.

    PubMed

    Armstrong, Rachel

    2010-01-01

    This report details a workshop held at the Bartlett School of Architecture, University College London, to initiate interdisciplinary collaborations for the practice of systems architecture, which is a new model for the generation of sustainable architecture that combines the discipline of the study of the built environment with the scientific study of complexity, or systems science, and adopts the perspective of systems theory. Systems architecture offers new perspectives on the organization of the built environment that enable architects to consider architecture as a series of interconnected networks with embedded links into natural systems. The public workshop brought together architects and scientists working with the convergence of nanotechnology, biotechnology, information technology, and cognitive science and with living technology to investigate the possibility of a new generation of smart materials that are implied by this approach.

  12. Temporal Patterns in Sheep Fetal Heart Rate Variability Correlate to Systemic Cytokine Inflammatory Response: A Methodological Exploration of Monitoring Potential Using Complex Signals Bioinformatics.

    PubMed

    Herry, Christophe L; Cortes, Marina; Wu, Hau-Tieng; Durosier, Lucien D; Cao, Mingju; Burns, Patrick; Desrochers, André; Fecteau, Gilles; Seely, Andrew J E; Frasch, Martin G

    2016-01-01

    Fetal inflammation is associated with increased risk for postnatal organ injuries. No means of early detection exist. We hypothesized that systemic fetal inflammation leads to distinct alterations of fetal heart rate variability (fHRV). We tested this hypothesis deploying a novel series of approaches from complex signals bioinformatics. In chronically instrumented near-term fetal sheep, we induced an inflammatory response with lipopolysaccharide (LPS) injected intravenously (n = 10) observing it over 54 hours; seven additional fetuses served as controls. Fifty-one fHRV measures were determined continuously every 5 minutes using Continuous Individualized Multi-organ Variability Analysis (CIMVA). CIMVA creates an fHRV measures matrix across five signal-analytical domains, thus describing complementary properties of fHRV. We implemented, validated and tested methodology to obtain a subset of CIMVA fHRV measures that matched best the temporal profile of the inflammatory cytokine IL-6. In the LPS group, IL-6 peaked at 3 hours. For the LPS, but not control group, a sharp increase in standardized difference in variability with respect to baseline levels was observed between 3 h and 6 h abating to baseline levels, thus tracking closely the IL-6 inflammatory profile. We derived fHRV inflammatory index (FII) consisting of 15 fHRV measures reflecting the fetal inflammatory response with prediction accuracy of 90%. Hierarchical clustering validated the selection of 14 out of 15 fHRV measures comprising FII. We developed methodology to identify a distinctive subset of fHRV measures that tracks inflammation over time. The broader potential of this bioinformatics approach is discussed to detect physiological responses encoded in HRV measures. PMID:27100089

  13. Temporal Patterns in Sheep Fetal Heart Rate Variability Correlate to Systemic Cytokine Inflammatory Response: A Methodological Exploration of Monitoring Potential Using Complex Signals Bioinformatics

    PubMed Central

    Wu, Hau-Tieng; Durosier, Lucien D.; Desrochers, André; Fecteau, Gilles; Seely, Andrew J. E.; Frasch, Martin G.

    2016-01-01

    Fetal inflammation is associated with increased risk for postnatal organ injuries. No means of early detection exist. We hypothesized that systemic fetal inflammation leads to distinct alterations of fetal heart rate variability (fHRV). We tested this hypothesis deploying a novel series of approaches from complex signals bioinformatics. In chronically instrumented near-term fetal sheep, we induced an inflammatory response with lipopolysaccharide (LPS) injected intravenously (n = 10) observing it over 54 hours; seven additional fetuses served as controls. Fifty-one fHRV measures were determined continuously every 5 minutes using Continuous Individualized Multi-organ Variability Analysis (CIMVA). CIMVA creates an fHRV measures matrix across five signal-analytical domains, thus describing complementary properties of fHRV. We implemented, validated and tested methodology to obtain a subset of CIMVA fHRV measures that matched best the temporal profile of the inflammatory cytokine IL-6. In the LPS group, IL-6 peaked at 3 hours. For the LPS, but not control group, a sharp increase in standardized difference in variability with respect to baseline levels was observed between 3 h and 6 h abating to baseline levels, thus tracking closely the IL-6 inflammatory profile. We derived fHRV inflammatory index (FII) consisting of 15 fHRV measures reflecting the fetal inflammatory response with prediction accuracy of 90%. Hierarchical clustering validated the selection of 14 out of 15 fHRV measures comprising FII. We developed methodology to identify a distinctive subset of fHRV measures that tracks inflammation over time. The broader potential of this bioinformatics approach is discussed to detect physiological responses encoded in HRV measures. PMID:27100089
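
    The CIMVA pipeline itself is not reproduced here; as a much-simplified sketch of one conceptual step, the Python code below ranks variability measures by how closely their time course tracks a cytokine profile and then clusters the selected measures. The array shapes and the random stand-in data are hypothetical; numpy and scipy are assumed available.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.stats import pearsonr

      # Hypothetical inputs: hrv[i, t] = value of fHRV measure i at epoch t,
      # il6[t] = IL-6 concentration interpolated to the same time points.
      rng = np.random.default_rng(0)
      hrv = rng.normal(size=(51, 120))      # 51 measures, 120 five-minute epochs
      il6 = rng.normal(size=120)            # stand-in cytokine profile

      # Rank measures by how closely their temporal profile matches IL-6.
      corr = np.array([pearsonr(row, il6)[0] for row in hrv])
      top = np.argsort(-np.abs(corr))[:15]  # candidate inflammatory index

      # Hierarchical clustering of the selected measures (1 - |r| as distance).
      sel = hrv[top]
      dist = 1 - np.abs(np.corrcoef(sel))
      z = linkage(dist[np.triu_indices_from(dist, k=1)], method="average")
      print(top, fcluster(z, t=2, criterion="maxclust"))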

  14. Molecular characterization and bioinformatics analysis of Ncoa7B, a novel ovulation-associated and reproduction system-specific Ncoa7 isoform.

    PubMed

    Shkolnik, Ketty; Ben-Dor, Shifra; Galiani, Dalia; Hourvitz, Ariel; Dekel, Nava

    2008-03-01

    In the present work, we employed bioinformatics search tools to select ovulation-associated cDNA clones with a preference for those representing putative novel genes. Detailed characterization of one of these transcripts, 6C3, by real-time PCR and RACE analyses led to identification of a novel ovulation-associated gene, designated Ncoa7B. This gene was found to exhibit a significant homology to the Ncoa7 gene that encodes a conserved tissue-specific nuclear receptor coactivator. Unlike Ncoa7, Ncoa7B possesses a unique and highly conserved exon at the 5' end and encodes a protein with a unique N-terminal sequence. Extensive bioinformatics analysis has revealed that Ncoa7B has one identifiable domain, TLDc, which has recently been suggested to be involved in protection from oxidative DNA damage. An alignment of TLDc domain containing proteins was performed, and the closest relative identified was OXR1, which also has a corresponding, highly related short isoform, with just a TLDc domain. Moreover, Ncoa7B expression, as seen to date, seems to be restricted to mammals, while other TLDc family members have no such restriction. Multiple tissue analysis revealed that unlike Ncoa7, which was abundant in a variety of tissues with the highest expression in the brain, Ncoa7B mRNA expression is restricted to the reproductive system organs, particularly the uterus and the ovary. The ovarian expression of Ncoa7B was stimulated by human chorionic gonadotropin. Additionally, using real-time PCR, we demonstrated the involvement of multiple signaling pathways in Ncoa7B expression in preovulatory follicles. PMID:18299425

  15. Bioinformatics and cancer: an essential alliance.

    PubMed

    Dopazo, Joaquín

    2006-06-01

    Modern research in cancer has been revolutionized by the introduction of new high-throughput methodologies such as DNA microarrays. Keeping pace with these technologies, bioinformatics offers new solutions for data analysis and, more importantly, permits the formulation of a new class of hypotheses inspired by systems biology, more oriented to blocks of functionally related genes. Although software implementing these new methodologies is recent, some options are already available. Bioinformatic solutions for other high-throughput techniques, such as array-CGH or large-scale genotyping, are also reviewed.

  16. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  17. Bioinformatics of prokaryotic RNAs.

    PubMed

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes, comprising not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly high levels of anti-sense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, are heavily dependent on computational methods and workflows, many of which have been developed or at least adapted specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art of RNA bioinformatics, focusing on applications to prokaryotes.

  18. Bioinformatics of prokaryotic RNAs

    PubMed Central

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes, comprising not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly high levels of anti-sense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, are heavily dependent on computational methods and workflows, many of which have been developed or at least adapted specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art of RNA bioinformatics, focusing on applications to prokaryotes. PMID:24755880

  19. Bioinformatics of prokaryotic RNAs.

    PubMed

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes, comprising not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly high levels of anti-sense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, are heavily dependent on computational methods and workflows, many of which have been developed or at least adapted specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art of RNA bioinformatics, focusing on applications to prokaryotes. PMID:24755880

  20. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea

    PubMed Central

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-01-01

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system. PMID:26690174

  1. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea.

    PubMed

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-01-01

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system.
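
    The accuracy assessment is described only at a high level; a minimal sketch (hypothetical coordinates, metres) of comparing BIM-derived check points with total-station reference points via root-mean-square error:

      import numpy as np

      # Hypothetical 3D coordinates of the same check points: one set taken from
      # the as-built BIM, one surveyed with a total station as the reference.
      bim_points = np.array([[10.02, 5.01, -12.48],
                             [22.51, 7.96, -12.52],
                             [35.00, 5.03, -12.47]])
      reference  = np.array([[10.00, 5.00, -12.50],
                             [22.50, 8.00, -12.50],
                             [35.02, 5.00, -12.50]])

      # Per-point 3D error and overall RMSE quantify how well the model fits reality.
      errors = np.linalg.norm(bim_points - reference, axis=1)
      print(errors, np.sqrt(np.mean(errors ** 2)))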

  2. The GMOD Drupal Bioinformatic Server Framework

    PubMed Central

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  3. Bioinformatics-Aided Venomics

    PubMed Central

    Kaas, Quentin; Craik, David J.

    2015-01-01

    Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future. PMID:26110505

  4. Virtual Bioinformatics Distance Learning Suite

    ERIC Educational Resources Information Center

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  5. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  6. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  7. Channelrhodopsins: a bioinformatics perspective.

    PubMed

    Del Val, Coral; Royuela-Flor, José; Milenkovic, Stefan; Bondar, Ana-Nicoleta

    2014-05-01

    Channelrhodopsins are microbial-type rhodopsins that function as light-gated cation channels. Understanding how the detailed architecture of the protein governs its dynamics and specificity for ions is important, because it has the potential to assist in designing site-directed channelrhodopsin mutants for specific neurobiology applications. Here we use bioinformatics methods to derive accurate alignments of channelrhodopsin sequences, assess the sequence conservation patterns and find conserved motifs in channelrhodopsins, and use homology modeling to construct three-dimensional structural models of channelrhodopsins. The analyses reveal that helices C and D of channelrhodopsins contain Cys, Ser, and Thr groups that can engage in both intra- and inter-helical hydrogen bonds. We propose that these polar groups participate in inter-helical hydrogen-bonding clusters important for the protein conformational dynamics and for the local water interactions. This article is part of a Special Issue entitled: Retinal Proteins - You can teach an old dog new tricks. PMID:24252597
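
    The alignment and conservation analysis is not detailed in the abstract; a minimal, library-free sketch of scoring per-column conservation in an already-aligned set of sequences (the toy alignment below is invented for illustration):

      from collections import Counter

      # Toy multiple sequence alignment (equal-length, gapped sequences).
      alignment = [
          "MDYGG-ALSAV",
          "MDYGG-ALSTV",
          "MEYGGQALSAV",
      ]

      # Conservation per column: fraction of sequences sharing the most common
      # residue (gaps counted like any other character in this simple version).
      def column_conservation(aln):
          n = len(aln)
          return [Counter(col).most_common(1)[0][1] / n for col in zip(*aln)]

      print(["%.2f" % c for c in column_conservation(alignment)])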

  8. ncRDeathDB: A comprehensive bioinformatics resource for deciphering network organization of the ncRNA-mediated cell death system.

    PubMed

    Wu, Deng; Huang, Yan; Kang, Juanjuan; Li, Kongning; Bi, Xiaoman; Zhang, Ting; Jin, Nana; Hu, Yongfei; Tan, Puwen; Zhang, Lu; Yi, Ying; Shen, Wenjun; Huang, Jian; Li, Xiaobo; Li, Xia; Xu, Jianzhen; Wang, Dong

    2015-01-01

    Programmed cell death (PCD) is a critical biological process involved in many important processes, and defects in PCD have been linked with numerous human diseases. In recent years, the protein architecture in different PCD subroutines has been explored, but our understanding of the global network organization of the noncoding RNA (ncRNA)-mediated cell death system is limited and ambiguous. Hence, we developed the comprehensive bioinformatics resource (ncRDeathDB, www.rna-society.org/ncrdeathdb ) to archive ncRNA-associated cell death interactions. The current version of ncRDeathDB documents a total of more than 4600 ncRNA-mediated PCD entries in 12 species. ncRDeathDB provides a user-friendly interface to query, browse and manipulate these ncRNA-associated cell death interactions. Furthermore, this resource will help to visualize and navigate current knowledge of the noncoding RNA component of cell death and autophagy, to uncover the generic organizing principles of ncRNA-associated cell death systems, and to generate valuable biological hypotheses. PMID:26431463

  9. Fuzzy interval Finite Element/Statistical Energy Analysis for mid-frequency analysis of built-up systems with mixed fuzzy and interval parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2016-10-01

    This paper introduces mixed fuzzy and interval parametric uncertainties into the FE components of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model for mid-frequency analysis of built-up systems, so that an uncertain ensemble combining non-parametric uncertainty with mixed fuzzy and interval parametric uncertainties arises. A fuzzy interval Finite Element/Statistical Energy Analysis (FIFE/SEA) framework is proposed to obtain the uncertain responses of built-up systems, which are described as intervals with fuzzy bounds, termed fuzzy-bounded intervals (FBIs) in this paper. Based on the level-cut technique, a first-order fuzzy interval perturbation FE/SEA method (FFIPFE/SEA) and a second-order fuzzy interval perturbation FE/SEA method (SFIPFE/SEA) are developed to handle the mixed parametric uncertainties efficiently. FFIPFE/SEA approximates the response functions by a first-order Taylor series, while SFIPFE/SEA improves the accuracy by also considering second-order terms of the Taylor series, with all mixed second-order terms neglected. To further improve the accuracy, a Chebyshev fuzzy interval method (CFIM) is proposed, in which Chebyshev polynomials are used to approximate the response functions. The FBIs are eventually reconstructed by assembling the extrema solutions at all cut levels. Numerical results on two built-up systems verify the effectiveness of the proposed methods.
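
    As a greatly simplified illustration of the Chebyshev idea behind the CFIM (one interval parameter and a generic stand-in response function rather than a real FE/SEA model), numpy's Chebyshev routines can approximate the response over an interval, and the extrema of the surrogate then bound the response at that cut level:

      import numpy as np
      from numpy.polynomial import chebyshev as C

      # Generic stand-in for an expensive response as a function of one interval
      # parameter x in [lo, hi] (a real FE/SEA response would replace this).
      def response(x):
          return np.sin(3 * x) + 0.3 * x ** 2

      lo, hi = 0.8, 1.2                       # interval bounds at one cut level

      # Sample at Chebyshev points mapped into [lo, hi]; fit a low-order series.
      nodes = C.chebpts1(9)
      x = 0.5 * (hi - lo) * nodes + 0.5 * (hi + lo)
      coeffs = C.chebfit(x, response(x), deg=6)

      # Bound the response by the extrema of the surrogate on a dense grid.
      grid = np.linspace(lo, hi, 1001)
      surrogate = C.chebval(grid, coeffs)
      print(surrogate.min(), surrogate.max())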

  10. Bioinformatics and Moonlighting Proteins.

    PubMed

    Hernández, Sergio; Franco, Luís; Calvo, Alejandra; Ferragut, Gabriela; Hermoso, Antoni; Amela, Isaac; Gómez, Antonio; Querol, Enrique; Cedano, Juan

    2015-01-01

    Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large number of sequences from genome projects. In the present work, we analyze and describe several approaches that use sequences, structures, interactomics, and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (e.g., algorithms such as PISITE), and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail to detect the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations - it requires the existence of multialigned family protein sequences - but can suggest how the evolutionary process of second-function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses. PMID:26157797
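
    As one concrete step from approach (a), a remote homology search can be run with the psiblast program from NCBI BLAST+. The sketch below assumes BLAST+ is installed and a protein database has been formatted locally; the file and database names are illustrative only.

      import subprocess

      # Iterative PSI-BLAST search of a query protein against a local database;
      # extra iterations can recover remote homologs missed by a plain BLAST run.
      cmd = [
          "psiblast",
          "-query", "query_protein.fasta",   # illustrative file name
          "-db", "swissprot",                # illustrative local database name
          "-num_iterations", "3",
          "-evalue", "0.001",
          "-out", "psiblast_hits.txt",
      ]
      subprocess.run(cmd, check=True)
      print(open("psiblast_hits.txt").read()[:500])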

  11. Virtual bioinformatics distance learning suite*.

    PubMed

    Tolvanen, Martti; Vihinen, Mauno

    2004-05-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material over the Internet. Currently, we provide two fully computer-based courses, "Introduction to Bioinformatics" and "Bioinformatics in Functional Genomics." Here we will discuss the application of distance learning in bioinformatics training and our experiences gained during the 3 years that we have run the courses, with about 400 students from a number of universities. The courses are available at bioinf.uta.fi.

  12. Combining chemoinformatics with bioinformatics: in silico prediction of bacterial flavor-forming pathways by a chemical systems biology approach "reverse pathway engineering".

    PubMed

    Liu, Mengjin; Bienfait, Bruno; Sacher, Oliver; Gasteiger, Johann; Siezen, Roland J; Nauta, Arjen; Geurts, Jan M W

    2014-01-01

    The incompleteness of genome-scale metabolic models is a major bottleneck for systems biology approaches, which are based on large numbers of metabolites as identified and quantified by metabolomics. Many of the revealed secondary metabolites and/or their derivatives, such as flavor compounds, are non-essential in metabolism, and many of their synthesis pathways are unknown. In this study, we describe a novel approach, Reverse Pathway Engineering (RPE), which combines chemoinformatics and bioinformatics analyses, to predict the "missing links" between compounds of interest and their possible metabolic precursors by providing plausible chemical and/or enzymatic reactions. We demonstrate the added-value of the approach by using flavor-forming pathways in lactic acid bacteria (LAB) as an example. Established metabolic routes leading to the formation of flavor compounds from leucine were successfully replicated. Novel reactions involved in flavor formation, i.e. the conversion of alpha-hydroxy-isocaproate to 3-methylbutanoic acid and the synthesis of dimethyl sulfide, as well as the involved enzymes were successfully predicted. These new insights into the flavor-formation mechanisms in LAB can have a significant impact on improving the control of aroma formation in fermented food products. Since the input reaction databases and compounds are highly flexible, the RPE approach can be easily extended to a broad spectrum of applications, amongst others health/disease biomarker discovery as well as synthetic biology.

  13. Combining Chemoinformatics with Bioinformatics: In Silico Prediction of Bacterial Flavor-Forming Pathways by a Chemical Systems Biology Approach “Reverse Pathway Engineering”

    PubMed Central

    Liu, Mengjin; Bienfait, Bruno; Sacher, Oliver; Gasteiger, Johann; Siezen, Roland J.; Nauta, Arjen; Geurts, Jan M. W.

    2014-01-01

    The incompleteness of genome-scale metabolic models is a major bottleneck for systems biology approaches, which are based on large numbers of metabolites as identified and quantified by metabolomics. Many of the revealed secondary metabolites and/or their derivatives, such as flavor compounds, are non-essential in metabolism, and many of their synthesis pathways are unknown. In this study, we describe a novel approach, Reverse Pathway Engineering (RPE), which combines chemoinformatics and bioinformatics analyses, to predict the “missing links” between compounds of interest and their possible metabolic precursors by providing plausible chemical and/or enzymatic reactions. We demonstrate the added-value of the approach by using flavor-forming pathways in lactic acid bacteria (LAB) as an example. Established metabolic routes leading to the formation of flavor compounds from leucine were successfully replicated. Novel reactions involved in flavor formation, i.e. the conversion of alpha-hydroxy-isocaproate to 3-methylbutanoic acid and the synthesis of dimethyl sulfide, as well as the involved enzymes were successfully predicted. These new insights into the flavor-formation mechanisms in LAB can have a significant impact on improving the control of aroma formation in fermented food products. Since the input reaction databases and compounds are highly flexible, the RPE approach can be easily extended to a broad spectrum of applications, amongst others health/disease biomarker discovery as well as synthetic biology. PMID:24416282
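
    The RPE pipeline itself combines reaction-rule databases with genome annotations; as a much-simplified sketch of the "missing link" idea, a breadth-first search over a small hand-made reaction graph finds a plausible precursor route to a target compound. The graph below is illustrative only and loosely based on the leucine route mentioned above.

      from collections import deque

      # Toy directed reaction graph: substrate -> list of products (illustrative).
      reactions = {
          "leucine": ["alpha-ketoisocaproate"],
          "alpha-ketoisocaproate": ["alpha-hydroxy-isocaproate", "isovaleryl-CoA"],
          "alpha-hydroxy-isocaproate": ["3-methylbutanoic acid"],
          "isovaleryl-CoA": ["3-methylbutanoic acid"],
      }

      def find_path(start, target):
          """Breadth-first search for a substrate-to-product route."""
          queue, seen = deque([[start]]), {start}
          while queue:
              path = queue.popleft()
              if path[-1] == target:
                  return path
              for product in reactions.get(path[-1], []):
                  if product not in seen:
                      seen.add(product)
                      queue.append(path + [product])
          return None

      print(find_path("leucine", "3-methylbutanoic acid"))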

  14. Built environment and diabetes

    PubMed Central

    Pasala, Sudhir Kumar; Rao, Allam Appa; Sridhar, G. R.

    2010-01-01

    Development of type 2 diabetes mellitus is influenced by the built environment, defined as 'the environments that are modified by humans, including homes, schools, workplaces, highways, urban sprawls, accessibility to amenities, leisure, and pollution.' The built environment contributes to diabetes through access to physical activity and through stress, by affecting the sleep cycle. With globalization, there is a possibility that western environmental models may be replicated in developing countries such as India, where the underlying genetic predisposition makes them particularly susceptible to diabetes. Here we review published information on the relationship between the built environment and diabetes, so that appropriate modifications can be incorporated to reduce the risk of developing diabetes mellitus. PMID:20535308

  15. Translational bioinformatics in psychoneuroimmunology: methods and applications.

    PubMed

    Yan, Qing

    2012-01-01

    Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes, including behavior-based profiles, will contribute to the transition from disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors is needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talk among pathways in various brain regions involved in disorders such as Alzheimer's disease.

  16. Bioinformatics meets clinical informatics.

    PubMed

    Smith, Jeremy; Protti, Denis

    2005-01-01

    The field of bioinformatics has exploded over the past decade. Hopes have run high for the impact on preventive, diagnostic, and therapeutic capabilities of genomics and proteomics. As time has progressed, so has our understanding of this field. Although the mapping of the human genome will certainly have an impact on health care, it is a complex web to unweave. Addressing simpler "Single Nucleotide Polymorphisms" (SNPs) is not new; however, the complexity and importance of polygenic disorders and the greater role of the far more complex field of proteomics have become clearer. Proteomics operates much closer to the actual cellular level of human structure and proteins are very sensitive markers of health. Because the proteome, however, is so much more complex than the genome, and changes with time and environmental factors, mapping it and using the data in direct care delivery is even harder than for the genome. For these reasons of complexity, the expected utopia of a single gene chip or protein chip capable of analyzing an individual's genetic make-up and producing a cornucopia of useful diagnostic information still appears a distant hope. When, and if, this happens, perhaps a genetic profile of each individual will be stored with their medical record; however, in the meantime, this type of information is unlikely to prove highly useful on a broad scale. To address the more complex "polygenic" diseases and those related to protein variations, other tools will be developed in the shorter term. "Top-down" analysis of populations and diseases is likely to produce earlier wins in this area. Detailed computer-generated models will map a wide array of human and environmental factors that indicate the presence of a disease or the relative impact of a particular treatment. These models may point to an underlying genomic or proteomic cause, for which genomic or proteomic testing or therapies could then be applied for confirmation and/or treatment. These types of

  17. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    PubMed

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  18. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    PubMed Central

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  19. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, J.L.

    1995-04-11

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the systems status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition. 6 figures.

  20. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, James L.

    1995-01-01

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the systems status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition.

  1. No moving parts safe and arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    SciTech Connect

    Hendrix, J.L.

    1994-12-31

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe and arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the systems status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe and arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel.

  2. Survey of MapReduce frame operation in bioinformatics.

    PubMed

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics.

  3. Survey of MapReduce frame operation in bioinformatics.

    PubMed

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. PMID:23396756

  4. Integration of bioinformatics to biodegradation

    PubMed Central

    2014-01-01

    Bioinformatics and biodegradation are two primary scientific fields in applied microbiology and biotechnology. The present review describes the development of various bioinformatics tools that may be applied in the field of biodegradation. Several databases, including the University of Minnesota Biocatalysis/Biodegradation database (UM-BBD), a database of biodegradative oxygenases (OxDBase), the Biodegradation Network-Molecular Biology Database (Bionemo), MetaCyc, and BioCyc, have been developed to enable access to information related to the biochemistry and genetics of microbial degradation. In addition, several bioinformatics tools for predicting toxicity and biodegradation of chemicals have been developed. Furthermore, the whole genomes of several potential degrading bacteria have been sequenced and annotated using bioinformatics tools. PMID:24808763

  5. The Genome Sequencer FLX System--longer reads, more applications, straight forward bioinformatics and more complete data sets.

    PubMed

    Droege, Marcus; Hill, Brendon

    2008-08-31

    The Genome Sequencer FLX System (GS FLX), powered by 454 Sequencing, is a next-generation DNA sequencing technology featuring a unique mix of long reads, exceptional accuracy, and ultra-high throughput. It has been proven to be the most versatile of all currently available next-generation sequencing technologies, supporting many high-profile studies in over seven application categories. GS FLX users have pursued innovative research in de novo sequencing, re-sequencing of whole genomes and target DNA regions, metagenomics, and RNA analysis. 454 Sequencing is a powerful tool for human genetics research, having recently re-sequenced the genome of an individual human, currently re-sequencing the complete human exome and targeted genomic regions using the NimbleGen sequence capture process, and detected low-frequency somatic mutations linked to cancer. PMID:18616967

  6. Autophagy Regulatory Network - a systems-level bioinformatics resource for studying the mechanism and regulation of autophagy.

    PubMed

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known; however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to gain a high coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases.
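    Because the abstract lists CSV among the ARN export formats, a small downstream-use sketch is given below: an interaction table is parsed into a directed graph and queried for the upstream regulators of one protein. The column names, example rows and query protein are assumptions made for illustration and should be checked against the actual export schema.

```python
# Sketch only: parse an ARN-style interaction export (invented rows and column names)
# into a directed graph and list upstream regulators of one autophagy protein.
import csv
import io
import networkx as nx

csv_text = """source,target,type
ULK1,MAP1LC3B,post-translational
TP53,ULK1,transcriptional
hsa-miR-30a,BECN1,post-transcriptional
BECN1,MAP1LC3B,post-translational
"""

g = nx.DiGraph()
for row in csv.DictReader(io.StringIO(csv_text)):
    # one directed edge per interaction, keeping the interaction type as an edge attribute
    g.add_edge(row["source"], row["target"], kind=row["type"])

query = "MAP1LC3B"  # an autophagy protein chosen only as an example
print(f"upstream regulators of {query}:", sorted(g.predecessors(query)))
```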

  7. Autophagy Regulatory Network - a systems-level bioinformatics resource for studying the mechanism and regulation of autophagy.

    PubMed

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known; however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to gain a high coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases. PMID:25635527

  8. The Resilience of Structure Built around the Predicate: Homesign Gesture Systems in Turkish and American Deaf Children

    ERIC Educational Resources Information Center

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Asli; Sancar, Burcu

    2015-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called "homesigns", which have many of the properties of natural language--the so-called resilient properties of language. We explored the resilience of structure built…

  9. The Space Launch System -The Biggest, Most Capable Rocket Ever Built, for Entirely New Human Exploration Missions Beyond Earth's Orbit

    NASA Technical Reports Server (NTRS)

    Shivers, C. Herb

    2012-01-01

    NASA is developing the Space Launch System -- an advanced heavy-lift launch vehicle that will provide an entirely new capability for human exploration beyond Earth's orbit. The Space Launch System will provide a safe, affordable and sustainable means of reaching beyond our current limits and opening up new discoveries from the unique vantage point of space. The first developmental flight, or mission, is targeted for the end of 2017. The Space Launch System, or SLS, will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment and science experiments to Earth's orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle Program and the Constellation Program in order to take advantage of proven hardware and cutting-edge tooling and manufacturing technology that will significantly reduce development and operations costs. The rocket will use a liquid hydrogen and liquid oxygen propulsion system, which will include the RS-25D/E from the Space Shuttle Program for the core stage and the J-2X engine for the upper stage. SLS will also use solid rocket boosters for the initial development flights, while follow-on boosters will be competed based on performance requirements and affordability considerations.

  10. VLSI Microsystem for Rapid Bioinformatic Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Lue, Jaw-Chyng

    2009-01-01

    A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).

  11. Towards an International Planetary Community Built on Open Source Software: the Evolution of the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Ramirez, P.; Hardman, S.; Hughes, J. S.

    2012-12-01

    Access to the worldwide planetary science research results from robotic exploration of the solar system has become a key driver in internationalizing the data standards from the Planetary Data System. The Planetary Data System, through international agency collaborations with the International Planetary Data Alliance (IPDA), has been developing a next generation set of data standards and technical implementation known as PDS4. PDS4 modernizes the PDS towards a world-wide online data system providing data and technical standards for improving access and interoperability among planetary archives. Since 2006, the IPDA has been working with the PDS to ensure that the next generation PDS is capable of allowing agency autonomy in building compatible archives while providing mechanisms to link the archive together. At the 7th International Planetary Data Alliance (IPDA) Meeting in Bangalore, India, the IPDA discussed and passed a resolution paving the way to adopt the PDS4 data standards. While the PDS4 standards have matured, another effort has been underway to move the PDS, a set of distributed discipline oriented science nodes, into a fully, online, service-oriented architecture. In order to accomplish this goal, the PDS has been developing a core set of software components that form the basis for many of the functions needed by a data system. These include the ability to harvest, validate, register, search and distribute the data products defined by the PDS4 data standards. Rather than having each group build their own independent implementations, the intention is to ultimately govern the implementation of this software through an open source community. This will enable not only sharing of software among U.S. planetary science nodes, but also has the potential of improving collaboration not only on core data management software, but also the tools by the international community. This presentation will discuss the progress in developing an open source infrastructure

  12. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
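    A compact sketch of the quadratic tracking idea is given below: a second-order polynomial is fitted through the three most recent (duty cycle, power) samples and the next duty cycle is moved toward the vertex of that parabola. The sample values and the clamping range are invented for illustration and are not taken from the paper.

```python
# Quadratic MPPT step (sketch): fit P(d) = a*d^2 + b*d + c to three operating points
# and return the duty cycle at the parabola's vertex, clamped to a safe range.
import numpy as np

def next_duty_cycle(duties, powers, d_min=0.1, d_max=0.9):
    a, b, c = np.polyfit(duties[-3:], powers[-3:], deg=2)
    if a >= 0:  # parabola opens upward: no interior maximum, fall back to the best sample
        return duties[int(np.argmax(powers[-3:])) - 3]
    vertex = -b / (2.0 * a)          # duty cycle at the maximum of the fitted parabola
    return float(np.clip(vertex, d_min, d_max))

duties = [0.30, 0.40, 0.50]          # three previously used duty cycles
powers = [41.0, 47.5, 45.0]          # measured power outputs in watts (made-up numbers)
print(f"next duty cycle: {next_duty_cycle(duties, powers):.3f}")
```

In a real controller this step would be repeated with each new measurement, which is where the faster convergence reported in the abstract would come from.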

  13. Natural and built environmental exposures on children's active school travel: A Dutch global positioning system-based cross-sectional study.

    PubMed

    Helbich, Marco; Emmichoven, Maarten J Zeylmans van; Dijst, Martin J; Kwan, Mei-Po; Pierik, Frank H; Vries, Sanne I de

    2016-05-01

    Physical inactivity among children is on the rise. Active transport to school (ATS), namely walking and cycling there, adds to children's activity level. Little is known about how exposures along actual routes influence children's transport behavior. This study examined how natural and built environments influence mode choice among Dutch children aged 6-11 years. In total, 623 school trips were tracked with a global positioning system. Natural and built environmental exposures were determined by means of a geographic information system, and their associations with children's active/passive mode choice were analyzed using mixed models. The actual commuted distance is inversely associated with ATS when only personal, traffic safety, and weather features are considered. When the model is adjusted for urban environments, the results are reversed and distance is no longer significant, whereas well-connected streets and cycling lanes are positively associated with ATS. Neither green space nor weather is significant. As distance is not apparent as a constraining travel determinant when moving through urban landscapes, planning authorities should support children's ATS by providing well-designed cities.

  14. Natural and built environmental exposures on children's active school travel: A Dutch global positioning system-based cross-sectional study.

    PubMed

    Helbich, Marco; Emmichoven, Maarten J Zeylmans van; Dijst, Martin J; Kwan, Mei-Po; Pierik, Frank H; Vries, Sanne I de

    2016-05-01

    Physical inactivity among children is on the rise. Active transport to school (ATS), namely walking and cycling there, adds to children's activity level. Little is known about how exposures along actual routes influence children's transport behavior. This study examined how natural and built environments influence mode choice among Dutch children aged 6-11 years. In total, 623 school trips were tracked with a global positioning system. Natural and built environmental exposures were determined by means of a geographic information system, and their associations with children's active/passive mode choice were analyzed using mixed models. The actual commuted distance is inversely associated with ATS when only personal, traffic safety, and weather features are considered. When the model is adjusted for urban environments, the results are reversed and distance is no longer significant, whereas well-connected streets and cycling lanes are positively associated with ATS. Neither green space nor weather is significant. As distance is not apparent as a constraining travel determinant when moving through urban landscapes, planning authorities should support children's ATS by providing well-designed cities. PMID:27010106

  15. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM.

    PubMed

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment is its reduction for uncertainties of the adjusted lines, leading to successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy.
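    The reported accuracy check (the average Euclidean distance between mapped check points and total-station reference coordinates, compared against the 0.051 m tolerance) amounts to a few lines of arithmetic; the coordinates below are invented placeholders rather than the paper's actual check points.

```python
# Average Euclidean distance error of mapped points against surveyed reference points.
import numpy as np

mapped    = np.array([[1.02, 4.98, 1.51], [3.97, 5.03, 1.48], [8.01, 2.02, 1.55]])
reference = np.array([[1.00, 5.00, 1.50], [4.00, 5.00, 1.50], [8.00, 2.00, 1.52]])

errors = np.linalg.norm(mapped - reference, axis=1)   # per-point 3D distances in metres
mean_error = errors.mean()
verdict = "within" if mean_error <= 0.051 else "exceeds"
print(f"average distance error: {mean_error:.3f} m ({verdict} the 0.051 m tolerance)")
```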

  16. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment is its reduction for uncertainties of the adjusted lines, leading to successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  17. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM.

    PubMed

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment is its reduction for uncertainties of the adjusted lines, leading to successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  18. E-Learning as a new tool in bioinformatics teaching.

    PubMed

    Saravanan, Vijayakumar; Shanmughavel, Piramanayagam

    2007-11-01

    In recent years, virtual learning has been growing rapidly. Universities, colleges, and secondary schools are now delivering training and education over the internet. Besides this, the resources available over the WWW are huge, and understanding the various techniques employed in the field of Bioinformatics is increasingly complex for students during implementation. Here, we discuss its importance in developing and delivering an educational system in Bioinformatics based on an e-learning environment.

  19. E-Learning as a new tool in bioinformatics teaching

    PubMed Central

    Saravanan, Vijayakumar; Shanmughavel, Piramanayagam

    2007-01-01

    In recent years, virtual learning has been growing rapidly. Universities, colleges, and secondary schools are now delivering training and education over the internet. Besides this, the resources available over the WWW are huge, and understanding the various techniques employed in the field of Bioinformatics is increasingly complex for students during implementation. Here, we discuss its importance in developing and delivering an educational system in Bioinformatics based on an e-learning environment. PMID:18292800

  20. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  1. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis

  2. The potential of translational bioinformatics approaches for pharmacology research

    PubMed Central

    Li, Lang

    2015-01-01

    The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of ‘omics’ to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. PMID:25753093

  3. Bioinformatics of cardiovascular miRNA biology.

    PubMed

    Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas

    2015-12-01

    MicroRNAs (miRNAs) are small ~22 nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis including phylogenetic comparisons as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway as well as functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and systemic effects by miRNA, underlining the strong therapeutic potential of miRNA and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'.

  4. Chapter 16: text mining for translational bioinformatics.

    PubMed

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
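    To make the two basic approaches concrete, the toy sketch below contrasts a rule-based (pattern) extractor with a statistical (machine-learning) classifier on the same sentences. The sentences, labels, regular expression and use of scikit-learn are illustrative assumptions only, not the methods of any system discussed in the chapter.

```python
# Toy contrast of rule-based vs. statistical text mining for gene-disease mentions.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "BRCA1 mutations increase the risk of breast cancer.",
    "The patient was discharged after an uneventful stay.",
    "TP53 is frequently inactivated in colorectal cancer.",
    "Follow-up imaging is scheduled for next month.",
]
labels = [1, 0, 1, 0]  # 1 = mentions a gene-disease association, 0 = does not

# rule-based: an uppercase gene-like token followed later by a disease keyword
rule = re.compile(r"\b[A-Z][A-Z0-9]{2,}\b.*\b(cancer|carcinoma|tumou?r)\b")
print("rule-based  :", [bool(rule.search(s)) for s in sentences])

# statistical: bag-of-words features feeding a naive Bayes classifier
clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(sentences, labels)
print("statistical :", clf.predict(["EGFR amplification drives lung cancer progression."]))
```

Real systems are, as the chapter notes, usually hybrids, and both components would need far richer features and evaluation against annotated corpora.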

  5. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

    Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of a rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamical development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user’s Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315
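    Leaf's own pipeline language is defined by the project itself and is not reproduced here; the sketch below only illustrates the general idea the abstract describes, namely declaring analysis steps and their dependencies on top of ordinary Python code so that the pipeline structure is explicit and intermediate results are reused. All names in it are invented.

```python
# Illustrative (non-Leaf) pipeline formality: register steps, record their dependencies,
# and cache results so repeated calls do not recompute earlier stages.
from functools import lru_cache

REGISTRY = {}  # step name -> names of the steps it depends on

def step(*deps):
    def decorate(fn):
        REGISTRY[fn.__name__] = deps
        return lru_cache(maxsize=None)(fn)   # naive in-memory persistence of results
    return decorate

@step()
def load_counts():
    return {"geneA": 120, "geneB": 3, "geneC": 45}

@step("load_counts")
def filter_low(min_count=10):
    return {g: c for g, c in load_counts().items() if c >= min_count}

@step("filter_low")
def report():
    kept = filter_low()
    return f"{len(kept)} genes kept: {sorted(kept)}"

print(report())
print("pipeline structure:", REGISTRY)
```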

  6. Chapter 16: Text Mining for Translational Bioinformatics

    PubMed Central

    Cohen, K. Bretonnel; Hunter, Lawrence E.

    2013-01-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research—translating basic science results into new interventions—and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing. PMID:23633944

  7. Generations of interdisciplinarity in bioinformatics

    PubMed Central

    Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L.

    2016-01-01

    Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life science, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this “borderland.” As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are distinctions between the forerunners, founders and the followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature. PMID:27453689

  8. Built to disappear.

    PubMed

    Bauer, Siegfried; Kaltenbrunner, Martin

    2014-06-24

    Microelectronics dominates the technological and commercial landscape of today's electronics industry; ultrahigh density integrated circuits on rigid silicon provide the computing power for smart appliances that help us organize our daily lives. Integrated circuits function flawlessly for decades, yet we like to replace smart phones and tablet computers every year. Disposable electronics, built to disappear in a controlled fashion after the intended lifespan, may be one of the potential applications of transient single-crystalline silicon nanomembranes, reported by Hwang et al. in this issue of ACS Nano. We briefly outline the development of this latest branch of electronics research, and we present some prospects for future developments. Electronics is steadily evolving, and 20 years from now we may find it perfectly normal for smart appliances to be embedded everywhere, on textiles, on our skin, and even in our body. PMID:24892500

  9. Recording and data transmission system of an IR lidar built around IBM-PC/AT/386/486 and intended for vertical sounding of tropospheric ozone

    SciTech Connect

    Rostov, A.P.

    1993-05-01

    A modification of the design of a lidar recording system built around an IBM personal computer has been proposed. Modern lidars used to investigate the atmosphere comprise, as a rule, a powerful pulsed laser that is a source of high-power electromagnetic noise. The noise power can be so high that it renders PC operation in the neighborhood of the lidar impossible. Taking this into account, the author developed several instrumentation-program complexes for different lidars. A system for an IR bifrequency sequential lidar intended for vertical sounding of tropospheric ozone is described. Experimental operation of the system as part of LOZA lidars and of a setup intended for investigation of atmospheric turbulence has shown the convenience and high reliability of this system design, as well as its adaptability and mobility. The development of an intellectual controller for lidar systems is currently being completed. It will allow instructions and messages to be exchanged between the PC and the lidar at distances of up to 500 m in an interactive mode using a standard IBM/PC communication port.

  10. The 2015 Bioinformatics Open Source Conference (BOSC 2015)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J. A.; Lapp, Hilmar

    2016-01-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included “Data Science;” “Standards and Interoperability;” “Open Science and Reproducibility;” “Translational Bioinformatics;” “Visualization;” and “Bioinformatics Open Source Project Updates”. In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled “Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community,” that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule. PMID:26914653

  11. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    PubMed

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
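    One piece of the tree-enumeration mathematics mentioned above can be stated compactly: the number of distinct unrooted, fully resolved trees on n labelled taxa is the double factorial (2n-5)!! for n >= 3, which is why exhaustive tree search becomes infeasible even for modest data sets. A short sketch:

```python
# Count unrooted, fully resolved (binary) trees on n labelled taxa: (2n-5)!! for n >= 3.
def unrooted_binary_trees(n):
    count = 1
    for k in range(3, 2 * n - 4, 2):   # multiply the odd numbers 3, 5, ..., 2n-5
        count *= k
    return count

for taxa in (4, 8, 16, 32):
    print(f"{taxa:>2} taxa -> {unrooted_binary_trees(taxa):,} possible trees")
```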

  12. Mathematics and evolutionary biology make bioinformatics education comprehensible

    PubMed Central

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  13. Visualising "Junk" DNA through Bioinformatics

    ERIC Educational Resources Information Center

    Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia

    2005-01-01

    One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…

  14. Reproducible Bioinformatics Research for Biologists

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  15. Bioinformatics and the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  16. Developing library bioinformatics services in context: the Purdue University Libraries bioinformationist program

    PubMed Central

    Rein, Diane C.

    2006-01-01

    Setting: Purdue University is a major agricultural, engineering, biomedical, and applied life science research institution with an increasing focus on bioinformatics research that spans multiple disciplines and campus academic units. The Purdue University Libraries (PUL) hired a molecular biosciences specialist to discover, engage, and support bioinformatics needs across the campus. Program Components: After an extended period of information needs assessment and environmental scanning, the specialist developed a week of focused bioinformatics instruction (Bioinformatics Week) to launch system-wide, library-based bioinformatics services. Evaluation Mechanisms: The specialist employed a two-tiered approach to assess user information requirements and expectations. The first phase involved careful observation and collection of information needs in-context throughout the campus, attending laboratory meetings, interviewing department chairs and individual researchers, and engaging in strategic planning efforts. Based on the information gathered during the integration phase, several survey instruments were developed to facilitate more critical user assessment and the recovery of quantifiable data prior to planning. Next Steps/Future Directions: Given information gathered while working with clients and through formal needs assessments, as well as the success of instructional approaches used in Bioinformatics Week, the specialist is developing bioinformatics support services for the Purdue community. The specialist is also engaged in training PUL faculty librarians in bioinformatics to provide a sustaining culture of library-based bioinformatics support and understanding of Purdue's bioinformatics-related decision and policy making. PMID:16888666

  17. An agent-based multilayer architecture for bioinformatics grids.

    PubMed

    Bartocci, Ezio; Cacciagrano, Diletta; Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Milanesi, Luciano; Romano, Paolo

    2007-06-01

    Due to the huge volume and complexity of biological data available today, a fundamental component of biomedical research is now in silico analysis. This includes modelling and simulation of biological systems and processes, as well as automated bioinformatics analysis of high-throughput data. The quest for bioinformatics resources (including databases, tools, and knowledge) becomes therefore of extreme importance. Bioinformatics itself is in rapid evolution and dedicated Grid cyberinfrastructures already offer easier access and sharing of resources. Furthermore, the concept of the Grid is progressively interleaving with those of Web Services, semantics, and software agents. Agent-based systems can play a key role in learning, planning, interaction, and coordination. Agents constitute also a natural paradigm to engineer simulations of complex systems like the molecular ones. We present here an agent-based, multilayer architecture for bioinformatics Grids. It is intended to support both the execution of complex in silico experiments and the simulation of biological systems. In the architecture a pivotal role is assigned to an "alive" semantic index of resources, which is also expected to facilitate users' awareness of the bioinformatics domain.

  18. No-boundary thinking in bioinformatics research.

    PubMed

    Huang, Xiuzhen; Bruce, Barry; Buchan, Alison; Congdon, Clare Bates; Cramer, Carole L; Jennings, Steven F; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Rick; Moore, Jason H; Nanduri, Bindu; Peckham, Joan; Perkins, Andy; Polson, Shawn W; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhao, Zhongming

    2013-11-06

    Currently there are definitions from many agencies and research societies defining "bioinformatics" as deriving knowledge from computational analysis of large volumes of biological and biomedical data. Should this be the bioinformatics research focus? We will discuss this issue in this review article. We would like to promote the idea of supporting human-infrastructure (HI) with no-boundary thinking (NT) in bioinformatics (HINT).

  19. Mobyle: a new full web bioinformatics framework

    PubMed Central

    Néron, Bertrand; Ménager, Hervé; Maufrais, Corinne; Joly, Nicolas; Maupetit, Julien; Letort, Sébastien; Carrere, Sébastien; Tuffery, Pierre; Letondal, Catherine

    2009-01-01

    Motivation: For the biologist, running bioinformatics analyses involves a time-consuming management of data and tools. Users need support to organize their work, retrieve parameters and reproduce their analyses. They also need to be able to combine their analytic tools using a safe data flow software mechanism. Finally, given that scientific tools can be difficult to install, it is particularly helpful for biologists to be able to use these tools through a web user interface. However, providing a web interface for a set of tools raises the problem that a single web portal cannot offer all the existing and possible services: it is the user, again, who has to cope with data copy among a number of different services. A framework enabling portal administrators to build a network of cooperating services would therefore clearly be beneficial. Results: We have designed a system, Mobyle, to provide a flexible and usable Web environment for defining and running bioinformatics analyses. It embeds simple yet powerful data management features that allow the user to reproduce analyses and to combine tools using a hierarchical typing system. Mobyle offers invocation of services distributed over remote Mobyle servers, thus enabling a federated network of curated bioinformatics portals without the user having to learn complex concepts or to install sophisticated software. While being focused on the end user, the Mobyle system also addresses the need, for the bioinformatician, to automate remote services execution: PlayMOBY is a companion tool that automates the publication of BioMOBY web services, using Mobyle program definitions. Availability: The Mobyle system is distributed under the terms of the GNU GPLv2 on the project web site (http://bioweb2.pasteur.fr/projects/mobyle/). It is already deployed on three servers: http://mobyle.pasteur.fr, http://mobyle.rpbs.univ-paris-diderot.fr and http://lipm-bioinfo.toulouse.inra.fr/Mobyle. The PlayMOBY companion is distributed under the

  20. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is presented. Our results show that the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
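    The abstract does not give the exact form of the sigmoid-logarithmic transfer function, so the sketch below assumes one plausible reading, a sigmoid applied to the logarithm of the (positive) fluorescence intensity, purely to show why logarithmic compression can help separate weak signals; it is not the published circuit's function, and the gain and offset values are arbitrary.

```python
# Assumed sigmoid-of-logarithm transfer function (illustration only, not the paper's).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_log(intensity, gain=1.0, offset=0.0):
    # compress the dynamic range first, then squash: f(I) = sigmoid(gain * log(I) + offset)
    return sigmoid(gain * np.log(intensity) + offset)

weak = np.array([1e-4, 1e-3, 1e-2])                    # low-fluorescence readings (a.u.)
print("plain sigmoid :", np.round(sigmoid(weak), 4))   # nearly indistinguishable, all ~0.5
print("sigmoid of log:", np.round(sigmoid_log(weak, offset=7.0), 4))  # clearly separated
```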

  1. Active vision and image/video understanding systems built upon network-symbolic models for perception-based navigation of mobile robots in real-world environments

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-12-01

    To be completely successful, robots need to have reliable perceptual systems that are similar to human vision. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects with respect to the observer and to each other. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is the subject of recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views. Once built, the model of the visual scene changes more slowly than the local information in the visual buffer. This allows for disambiguating visual information and effective control of actions and navigation via incremental relational changes in the visual buffer. Network-Symbolic models can be seamlessly integrated into the NIST 4D/RCS architecture and better interpret images/video for situation awareness, target recognition, navigation and actions.

  2. Bioinformatics in the information age

    SciTech Connect

    Spengler, Sylvia J.

    2000-02-01

    There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics--the shotgun marriage between biology and mathematics, computer science, and engineering--is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control

  3. Tools and collaborative environments for bioinformatics research

    PubMed Central

    Giugno, Rosalba; Pulvirenti, Alfredo

    2011-01-01

    Advanced research requires intensive interaction among a multitude of actors, often possessing different expertise and usually working at a distance from each other. The field of collaborative research aims to establish suitable models and technologies to properly support these interactions. In this article, we first present the reasons for an interest of Bioinformatics in this context by also suggesting some research domains that could benefit from collaborative research. We then review the principles and some of the most relevant applications of social networking, with a special attention to networks supporting scientific collaboration, by also highlighting some critical issues, such as identification of users and standardization of formats. We then introduce some systems for collaborative document creation, including wiki systems and tools for ontology development, and review some of the most interesting biological wikis. We also review the principles of Collaborative Development Environments for software and show some examples in Bioinformatics. Finally, we present the principles and some examples of Learning Management Systems. In conclusion, we try to devise some of the goals to be achieved in the short term for the exploitation of these technologies. PMID:21984743

  4. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system

    PubMed Central

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-01-01

    In this study, two configurations of a hybrid anaerobic digestion (AD) bioreactor with built-in bioelectrochemical systems (BESs), with electrodes installed in the liquid phase (R1) or in the sludge phase (R2), were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. The decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although the azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1, where the electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient for azo dye removal than electrodes installed in the sludge phase, which provides useful guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters. PMID:27121278

  5. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-04-28

    In this study, two configurations of a hybrid anaerobic digestion (AD) bioreactor with built-in bioelectrochemical systems (BESs), with electrodes installed in the liquid phase (R1) or in the sludge phase (R2), were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. The decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although the azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1, where the electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient for azo dye removal than electrodes installed in the sludge phase, which provides useful guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters.

  6. Using Bioinformatics Approach to Explore the Pharmacological Mechanisms of Multiple Ingredients in Shuang-Huang-Lian.

    PubMed

    Zhang, Bai-xia; Li, Jian; Gu, Hao; Li, Qiang; Zhang, Qi; Zhang, Tian-jiao; Wang, Yun; Cai, Cheng-ke

    2015-01-01

    Owing to its proven clinical efficacy, Shuang-Huang-Lian (SHL) has been developed into a variety of dosage forms. However, in-depth research on the targets and pharmacological mechanisms of SHL preparations has been scarce. In the present study, bioinformatics approaches were adopted to integrate relevant data and biological information. As a result, a PPI network was built and its common topological parameters were characterized. The results suggested that the PPI network of SHL exhibited a scale-free property and a modular architecture. The drug-target network of SHL was structured with 21 functional modules. According to the distribution of certain modules and pharmacological effects, an antitumor effect and potential drug targets were predicted. A biological network containing 26 subnetworks was constructed to elucidate the antipneumonia mechanism of SHL. We also extracted a subnetwork to explicitly display the pathway by which one effective component acts on the pneumonia-related targets. In conclusion, a bioinformatics approach was established for exploring the drug targets, pharmacological activity distribution, and effective components of SHL, as well as its antipneumonia mechanism. Above all, we identified the effective components and disclosed the mechanism of SHL from a systems perspective. PMID:26495421
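
    The network construction and module decomposition described above can be sketched with a general-purpose graph library. The example below is a hypothetical illustration using networkx: the edge list is invented, and greedy modularity optimization stands in for whichever module-detection method the authors actually used.

      # Minimal sketch of the kind of network analysis described above: build a
      # protein-protein interaction (PPI) graph, inspect its degree distribution,
      # and split it into modules. The edge list is made up; the study's actual
      # data and module-detection method may differ.
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      edges = [
          ("TNF", "IL6"), ("TNF", "NFKB1"), ("IL6", "STAT3"),
          ("STAT3", "JAK2"), ("NFKB1", "RELA"), ("RELA", "TNF"),
          ("EGFR", "GRB2"), ("GRB2", "SOS1"), ("SOS1", "KRAS"),
      ]
      graph = nx.Graph(edges)

      # Topological parameters commonly reported for drug-target PPI networks.
      degrees = dict(graph.degree())
      print("average degree:", sum(degrees.values()) / graph.number_of_nodes())
      print("hubs:", sorted(degrees, key=degrees.get, reverse=True)[:3])

      # Decompose the network into modules (communities).
      modules = greedy_modularity_communities(graph)
      for i, module in enumerate(modules, start=1):
          print(f"module {i}: {sorted(module)}")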

  7. Protein bioinformatics applied to virology.

    PubMed

    Mohabatkar, Hassan; Keyhanfar, Mehrnaz; Behbahani, Mandana

    2012-09-01

    Scientists have united in a common search to sequence, store and analyze genes and proteins. In this regard, rapidly evolving bioinformatics methods are providing valuable information on these newly discovered molecules. Understanding what has been done and what we can do in silico is essential in designing new experiments. The imbalance between proteins whose sequences are known and proteins whose attributes are known has called for computational methods and high-throughput automated tools that can quickly and reliably predict or identify various characteristics of uncharacterized proteins. Taking into consideration the role of viruses in causing diseases and their use in biotechnology, the present review describes the application of protein bioinformatics in virology. Therefore, a number of important features of viral proteins, such as epitope prediction, protein docking, subcellular localization, and viral protease cleavage sites, together with computer-based comparison of these aspects, are discussed. This paper also describes several tools developed principally for viral bioinformatics. Prediction of viral protein features and awareness of the advances in this field can support a basic understanding of the relationship between a virus and its host.

  8. Nuclear reactors built, being built, or planned 1992

    SciTech Connect

    Not Available

    1993-07-01

    Nuclear Reactors Built, Being Built, or Planned contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1992. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. Information is presented in five parts: Civilian, Production, Military, Export and Critical Assembly.

  9. Bioinformatic Primer for Clinical and Translational Science

    PubMed Central

    Faustino, Randolph S.; Chiriac, Anca; Terzic, Andre

    2009-01-01

    The advent of high-throughput technologies has accelerated generation and expansion of genomic, transcriptomic, and proteomic data. Acquisition of high-dimensional datasets requires archival systems that permit efficiency of storage and retrieval, and so, multiple electronic repositories have been initiated and maintained to meet this demand. Bioinformatic science has evolved, from these intricate bodies of dynamically updated information and the tools to manage them, as a necessity to harness and decipher the inherent complexity of high-volume data. Large datasets are associated with a variable degree of stochastic noise that contributes to the balance of an ordered, multistable state with the capacity to evolve in response to stimulus, thus exhibiting a hallmark feature of biological criticality. In this context, the network theory has become an invaluable tool to map relationships that integrate discrete elements that collectively direct global function within a particular –omic category, and indeed, the prioritized focus on the functional whole of the genomic, transcriptomic, or proteomic strata over single molecules is a primary tenet of systems biology analyses. This new biology perspective allows inspection and prediction of disease conditions, not limited to a monogenic challenge, but as a combination of individualized molecular permutations acting in concert to effect a phenotypic outcome. Bioinformatic integration of multidimensional data within and between biological layers thus harbors the potential to identify unique biological signatures, providing an enabling platform for advances in clinical and translational science. PMID:19690627

  10. Nuclear reactors built, being built, or planned 1993

    SciTech Connect

    Not Available

    1993-08-01

    Nuclear Reactors Built, Being Built, or Planned contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1993. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map, tables of the characteristic and statistical data that follow, and a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: civilian, production, military, export, and critical assembly.

  11. ExPASy: SIB bioinformatics resource portal.

    PubMed

    Artimo, Panu; Jonnalagedda, Manohar; Arnold, Konstantin; Baratin, Delphine; Csardi, Gabor; de Castro, Edouard; Duvaud, Séverine; Flegel, Volker; Fortier, Arnaud; Gasteiger, Elisabeth; Grosdidier, Aurélien; Hernandez, Céline; Ioannidis, Vassilios; Kuznetsov, Dmitry; Liechti, Robin; Moretti, Sébastien; Mostaguir, Khaled; Redaschi, Nicole; Rossier, Grégoire; Xenarios, Ioannis; Stockinger, Heinz

    2012-07-01

    ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved into an extensible and integrative portal that provides access to many scientific resources, databases and software tools in different areas of the life sciences. Scientists can now seamlessly access a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics, transcriptomics, etc. The individual resources (databases, web-based and downloadable software tools) are hosted in a 'decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across 'selected' resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in the life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.

  12. Bioinformatics in Africa: The Rise of Ghana?

    PubMed

    Karikari, Thomas K

    2015-09-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.

  13. Bioinformatics in Africa: The Rise of Ghana?

    PubMed Central

    Karikari, Thomas K.

    2015-01-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics. PMID:26378921

  14. [Bioinformatics: a key role in oncology].

    PubMed

    Olivier, Timothée; Chappuis, Pierre; Tsantoulis, Petros

    2016-05-18

    Bioinformatics is essential in clinical oncology and research. Combining biology, computer science and mathematics, bioinformatics aims to derive useful information from clinical and biological data, often poorly structured, at a large scale. Bioinformatics approaches have reclassified certain cancers based on their molecular and biological presentation, improving treatment selection. Many molecular signatures have been developed and, after validation, some are now usable in clinical practice. Other applications could facilitate daily practice, reduce the risk of error and increase the precision of medical decision-making. Bioinformatics must evolve in accordance with ethical considerations and requires multidisciplinary collaboration. Its application depends on a sound technical foundation that meets strict quality requirements.

  15. [Bioinformatics: a key role in oncology].

    PubMed

    Olivier, Timothée; Chappuis, Pierre; Tsantoulis, Petros

    2016-05-18

    Bioinformatics is essential in clinical oncology and research. Combining biology, computer science and mathematics, bioinformatics aims to derive useful information from clinical and biological data, often poorly structured, at a large scale. Bioinformatics approaches have reclassified certain cancers based on their molecular and biological presentation, improving treatment selection. Many molecular signatures have been developed and, after validation, some are now usable in clinical practice. Other applications could facilitate daily practice, reduce the risk of error and increase the precision of medical decision-making. Bioinformatics must evolve in accordance with ethical considerations and requires multidisciplinary collaboration. Its application depends on a sound technical foundation that meets strict quality requirements. PMID:27424424

  16. Bioinformatics for personal genome interpretation.

    PubMed

    Capriotti, Emidio; Nehrt, Nathan L; Kann, Maricel G; Bromberg, Yana

    2012-07-01

    An international consortium released the first draft sequence of the human genome 10 years ago. Although the analysis of this data has suggested the genetic underpinnings of many diseases, we have not yet been able to fully quantify the relationship between genotype and phenotype. Thus, a major current effort of the scientific community focuses on evaluating individual predispositions to specific phenotypic traits given their genetic backgrounds. Many resources aim to identify and annotate the specific genes responsible for the observed phenotypes. Some of these use intra-species genetic variability as a means for better understanding this relationship. In addition, several online resources are now dedicated to collecting single nucleotide variants and other types of variants, and annotating their functional effects and associations with phenotypic traits. This information has enabled researchers to develop bioinformatics tools to analyze the rapidly increasing amount of newly extracted variation data and to predict the effect of uncharacterized variants. In this work, we review the most important developments in the field--the databases and bioinformatics tools that will be of utmost importance in our concerted effort to interpret the human variome.

  17. An approach to regional wetland digital elevation model development using a differential global positioning system and a custom-built helicopter-based surveying system

    USGS Publications Warehouse

    Jones, J.W.; Desmond, G.B.; Henkle, C.; Glover, R.

    2012-01-01

    Accurate topographic data are critical to restoration science and planning for the Everglades region of South Florida, USA. They are needed to monitor and simulate water level, water depth and hydroperiod and are used in scientific research on hydrologic and biologic processes. Because large wetland environments and data acquisition challenge conventional ground-based and remotely sensed data collection methods, the United States Geological Survey (USGS) adapted a classical data collection instrument to global positioning system (GPS) and geographic information system (GIS) technologies. Data acquired with this instrument were processed using geostatistics to yield sub-water level elevation values with centimetre accuracy (±15 cm). The developed database framework, modelling philosophy and metadata protocol allow for continued, collaborative model revision and expansion, given additional elevation or other ancillary data. © 2012 Taylor & Francis.
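
    A minimal sketch of the gridding step implied above is shown below. The USGS processing used geostatistics (e.g., kriging); the example substitutes simple inverse-distance weighting purely to illustrate turning scattered survey points into a regular elevation grid, and the survey coordinates and elevations are invented.

      # Minimal sketch of gridding scattered elevation points, in the spirit of
      # the record above. The actual work used geostatistical methods (kriging);
      # inverse-distance weighting is a simpler stand-in, and the points are invented.
      import numpy as np

      def idw_grid(xy, z, grid_x, grid_y, power=2.0, eps=1e-12):
          """Interpolate scattered (x, y, z) points onto a regular grid."""
          gx, gy = np.meshgrid(grid_x, grid_y)
          grid = np.empty_like(gx)
          for i in range(gx.shape[0]):
              for j in range(gx.shape[1]):
                  d = np.hypot(xy[:, 0] - gx[i, j], xy[:, 1] - gy[i, j])
                  w = 1.0 / (d ** power + eps)          # inverse-distance weights
                  grid[i, j] = np.sum(w * z) / np.sum(w)
          return grid

      if __name__ == "__main__":
          points = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
          elevations = np.array([1.2, 0.8, 1.5, 0.9])   # metres above datum
          dem = idw_grid(points, elevations, np.linspace(0, 100, 5), np.linspace(0, 100, 5))
          print(np.round(dem, 2))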

  18. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  19. Genomics and Bioinformatics Resources for Crop Improvement

    PubMed Central

    Mochida, Keiichi; Shinozaki, Kazuo

    2010-01-01

    Recent remarkable innovations in platforms for omics-based research and application development provide crucial resources to promote research in model and applied plant species. A combinatorial approach using multiple omics platforms and integration of their outcomes is now an effective strategy for clarifying molecular systems integral to improving plant productivity. Furthermore, promotion of comparative genomics among model and applied plants allows us to grasp the biological properties of each species and to accelerate gene discovery and functional analyses of genes. Bioinformatics platforms and their associated databases are also essential for the effective design of approaches making the best use of genomic resources, including resource integration. We review recent advances in research platforms and resources in plant omics together with related databases and advances in technology. PMID:20208064

  20. Rapid Development of Bioinformatics Education in China

    ERIC Educational Resources Information Center

    Zhong, Yang; Zhang, Xiaoyan; Ma, Jian; Zhang, Liang

    2003-01-01

    As the Human Genome Project experiences remarkable success and a flood of biological data is produced, bioinformatics becomes a very "hot" cross-disciplinary field, yet experienced bioinformaticians are urgently needed worldwide. This paper summarises the rapid development of bioinformatics education in China, especially related undergraduate…

  1. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Cancer.gov

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  2. Biology in 'silico': The Bioinformatics Revolution.

    ERIC Educational Resources Information Center

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, which has many different uses, for use in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  3. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
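
    The dynamic-programming formulation the article describes can be sketched in a few lines. The example below computes a global (Needleman-Wunsch-style) alignment score; the match, mismatch, and gap values are illustrative choices, not the article's exercises.

      # Minimal dynamic-programming global alignment score, illustrating the
      # optimization formulation described above. Scoring values are illustrative.
      def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
          rows, cols = len(a) + 1, len(b) + 1
          score = [[0] * cols for _ in range(rows)]
          for i in range(1, rows):
              score[i][0] = i * gap                 # leading gaps in b
          for j in range(1, cols):
              score[0][j] = j * gap                 # leading gaps in a
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  up = score[i - 1][j] + gap        # gap in b
                  left = score[i][j - 1] + gap      # gap in a
                  score[i][j] = max(diag, up, left)
          return score[-1][-1]

      if __name__ == "__main__":
          print(global_alignment_score("GATTACA", "GCATGCU"))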

  4. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    ERIC Educational Resources Information Center

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR RLK) genetic…

  5. The 2016 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083

  6. Nuclear reactors built, being built, or planned, 1991

    SciTech Connect

    Simpson, B.

    1992-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1991. The book is divided into three major sections: Section 1 consists of a reactor locator map and reactor tables; Section 2 includes nuclear reactors that are operating, being built, or planned; and Section 3 includes reactors that have been shut down permanently or dismantled. Sections 2 and 3 contain the following classification of reactors: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is an American company -- working either independently or in cooperation with a foreign company (Part 4, in each section). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  7. Nuclear reactors built, being built, or planned 1996

    SciTech Connect

    1997-08-01

    This publication contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1996. The Office of Scientific and Technical Information, U.S. Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the U.S. Nuclear Regulatory Commission (NRC); from the U.S. reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from U.S. and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map, tables of the characteristic and statistical data that follow, and a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled.

  8. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    SciTech Connect

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    courses or independent research projects requires infrastructure for organizing and assessing student work. Here, we present a new platform for faculty to keep current with the rapidly changing field of bioinformatics, the Integrated Microbial Genomes Annotation Collaboration Toolkit (IMG-ACT). It was developed by instructors from both research-intensive and predominately undergraduate institutions in collaboration with the Department of Energy-Joint Genome Institute (DOE-JGI) as a means to innovate and update undergraduate education and faculty development. The IMG-ACT program provides a cadre of tools, including access to a clearinghouse of genome sequences, bioinformatics databases, data storage, instructor course management, and student notebooks for organizing the results of their bioinformatic investigations. In the process, IMG-ACT makes it feasible to provide undergraduate research opportunities to a greater number and diversity of students, in contrast to the traditional mentor-to-student apprenticeship model for undergraduate research, which can be too expensive and time-consuming to provide for every undergraduate. The IMG-ACT serves as the hub for the network of faculty and students that use the system for microbial genome analysis. Open access of the IMG-ACT infrastructure to participating schools ensures that all types of higher education institutions can utilize it. With the infrastructure in place, faculty can focus their efforts on the pedagogy of bioinformatics, involvement of students in research, and use of this tool for their own research agenda. What the original faculty members of the IMG-ACT development team present here is an overview of how the IMG-ACT program has affected our development in terms of teaching and research with the hopes that it will inspire more faculty to get involved.

  9. A quick guide for building a successful bioinformatics community.

    PubMed

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-02-01

    "Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).

  10. Extending Asia Pacific bioinformatics into new realms in the "-omics" era.

    PubMed

    Ranganathan, Shoba; Eisenhaber, Frank; Tong, Joo Chuan; Tan, Tin Wee

    2009-12-03

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation dating back to 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 7-11, 2009 at Biopolis, Singapore. Besides bringing together scientists from the field of bioinformatics in this region, InCoB has actively engaged clinicians and researchers from the area of systems biology, to facilitate greater synergy between these two groups. InCoB2009 followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India), Hong Kong and Taipei (Taiwan), with InCoB2010 scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. The Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and symposia on Clinical Bioinformatics (CBAS), the Singapore Symposium on Computational Biology (SYMBIO) and training tutorials were scheduled prior to the scientific meeting, and provided ample opportunity for in-depth learning and special interest meetings for educators, clinicians and students. We provide a brief overview of the peer-reviewed bioinformatics manuscripts accepted for publication in this supplement, grouped into thematic areas. In order to facilitate scientific reproducibility and accountability, we have, for the first time, introduced minimum information criteria for our publications, including compliance with a Minimum Information about a Bioinformatics Investigation (MIABi). As the regional research expertise in bioinformatics matures, we have delineated a minimum set of bioinformatics skills required for addressing the computational challenges of the "-omics" era.

  11. Extending Asia Pacific bioinformatics into new realms in the "-omics" era

    PubMed Central

    2009-01-01

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation dating back to 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 7-11, 2009 at Biopolis, Singapore. Besides bringing together scientists from the field of bioinformatics in this region, InCoB has actively engaged clinicians and researchers from the area of systems biology, to facilitate greater synergy between these two groups. InCoB2009 followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India), Hong Kong and Taipei (Taiwan), with InCoB2010 scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. The Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and symposia on Clinical Bioinformatics (CBAS), the Singapore Symposium on Computational Biology (SYMBIO) and training tutorials were scheduled prior to the scientific meeting, and provided ample opportunity for in-depth learning and special interest meetings for educators, clinicians and students. We provide a brief overview of the peer-reviewed bioinformatics manuscripts accepted for publication in this supplement, grouped into thematic areas. In order to facilitate scientific reproducibility and accountability, we have, for the first time, introduced minimum information criteria for our publications, including compliance with a Minimum Information about a Bioinformatics Investigation (MIABi). As the regional research expertise in bioinformatics matures, we have delineated a minimum set of bioinformatics skills required for addressing the computational challenges of the "-omics" era. PMID:19958472

  12. Computational biology and bioinformatics in Nigeria.

    PubMed

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  13. Computational Biology and Bioinformatics in Nigeria

    PubMed Central

    Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-01-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310

  14. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the database integration problem for
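
    The warehouse idea, several source databases loaded into one relational schema so that a single SQL query can span them, can be illustrated with a small in-memory example. The table and column names below are invented for the sketch and are not the actual BioWarehouse schema.

      # Hypothetical sketch of the warehouse idea described above: once several
      # source databases are loaded into one relational schema, a single SQL
      # query can span them. The tables and columns are invented for illustration
      # and are NOT the actual BioWarehouse schema.
      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE enzyme  (ec_number TEXT PRIMARY KEY, activity TEXT);
          CREATE TABLE protein (id INTEGER PRIMARY KEY, name TEXT, ec_number TEXT);

          INSERT INTO enzyme  VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                     ('4.2.1.11', 'enolase');
          INSERT INTO protein VALUES (1, 'AdhE', '1.1.1.1');
      """)

      # Which characterized enzyme activities have no protein record?
      # (Analogous in spirit to the gap analysis mentioned in the record.)
      query = """
          SELECT e.ec_number, e.activity
          FROM enzyme AS e
          LEFT JOIN protein AS p ON p.ec_number = e.ec_number
          WHERE p.id IS NULL
      """
      for row in conn.execute(query):
          print(row)   # -> ('4.2.1.11', 'enolase')
      conn.close()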

  15. Bioclipse: an open source workbench for chemo- and bioinformatics

    PubMed Central

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl ES

    2007-01-01

    Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at . PMID:17316423

  16. When cloud computing meets bioinformatics: a review.

    PubMed

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
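
    The MapReduce programming model mentioned in the review can be illustrated with a toy, in-process example that counts k-mers in sequencing reads. On a real cluster the same map and reduce functions would be executed by a framework such as Hadoop or Spark; the reads below are invented.

      # Minimal in-process illustration of the MapReduce model applied to a
      # bioinformatics task: counting k-mers in a set of reads. On a real cluster
      # the map and reduce steps would be distributed by a framework; this sketch
      # only shows the programming model.
      from collections import defaultdict

      def map_kmers(read, k=3):
          """Map step: emit (kmer, 1) pairs for one read."""
          return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

      def reduce_counts(pairs):
          """Reduce step: sum the counts for each k-mer key."""
          totals = defaultdict(int)
          for kmer, count in pairs:
              totals[kmer] += count
          return dict(totals)

      if __name__ == "__main__":
          reads = ["GATTACA", "TTACAGA", "ACAGATT"]
          mapped = [pair for read in reads for pair in map_kmers(read)]
          print(reduce_counts(mapped))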

  17. ESF AS-BUILT CONFIGURATION

    SciTech Connect

    NA

    2005-03-17

    The calculations contained in this document were developed by the ''Mining Group of the Design & Engineering Organization'' and are intended solely for the use of the ''Design & Engineering Organization'' in its work regarding the subsurface repository. Yucca Mountain Project personnel from the ''Mining Group'' should be consulted before use of the calculations for purposes other than those stated herein or use by individuals other than authorized personnel in the ''Design & Engineering Organization''. The purpose of this calculation is to provide design inputs that can be used to develop an as-built drawing of the Exploratory Studies Facility (ESF) for the planning and development of the subsurface repository. This document includes subsurface as-built surveys, recommendation to complete as-built surveys, and Management and Operating Contractor (M&O) Subsurface Design Drawings as inputs. This calculation is used to provide data and information for an as-built ESF subsurface drawing and is not used in the development of results or conclusions, therefore all inputs are considered as indirect.

  18. Schools Built with Fallout Shelter.

    ERIC Educational Resources Information Center

    Office of Civil Defense (DOD), Washington, DC.

    Fallout protection can be built into a school building with little or no additional cost, using areas that are in continual use in the normal functioning of the building. A general discussion of the principles of shelter design is given along with photographs, descriptions, drawings, and cost analysis for a number of recently constructed schools…

  19. Bioinformatic challenges in targeted proteomics.

    PubMed

    Reker, Daniel; Malmström, Lars

    2012-09-01

    Selected reaction monitoring mass spectrometry is an emerging targeted proteomics technology that allows for the investigation of complex protein samples with high sensitivity and efficiency. It requires extensive knowledge about the sample so that the many parameters needed to carry out the experiment can be set appropriately. Most studies today rely on parameter estimation from prior studies, public databases, or from measuring synthetic peptides. This is efficient and sound, but in the absence of prior data, de novo parameter estimation is necessary. Computational methods can be used to create an automated framework to address this problem. However, the number of available applications is still small. This review aims to give an orientation on the various bioinformatics challenges. To this end, we state the problems in classical machine learning and data mining terms, give examples of implemented solutions and provide some room for alternatives. This will hopefully lead to an increased momentum for the development of algorithms and serve the needs of the community for computational methods. We note that the combination of such methods in an assisted workflow will ease both the usage of targeted proteomics in experimental studies as well as the further development of computational approaches. PMID:22866949
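
    One way to see a de novo parameter-estimation task as a classical machine learning problem, as the review suggests, is to treat a single assay parameter, here peptide retention time, as a regression target. The sketch below fits a least-squares model on amino-acid composition; the training peptides and retention times are invented, and real SRM tools use far richer models and features.

      # Minimal sketch of one parameter-estimation problem phrased as ordinary
      # machine learning: predict peptide retention time from amino-acid
      # composition with least squares. Training data are invented.
      import numpy as np

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def composition(peptide):
          """Feature vector: counts of each amino acid in the peptide."""
          return np.array([peptide.count(aa) for aa in AMINO_ACIDS], dtype=float)

      # Toy training set (sequence, observed retention time in minutes).
      train = [("PEPTIDEK", 18.2), ("LLSVAYK", 25.7), ("GGGGSR", 7.9), ("AILFWK", 33.1)]
      X = np.array([composition(seq) for seq, _ in train])
      y = np.array([rt for _, rt in train])

      coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit

      def predict_rt(peptide):
          return float(composition(peptide) @ coef)

      if __name__ == "__main__":
          print(round(predict_rt("PEPSIR"), 1))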

  20. Nuclear reactors built, being built, or planned: 1995

    SciTech Connect

    1996-08-01

    This report contains unclassified information about facilities built, being built, or planned in the US for domestic use or export as of December 31, 1995. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company--working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  1. Nuclear reactors built, being built, or planned, 1994

    SciTech Connect

    1995-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1994. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; tables of data for reactors operating, being built, or planned; and tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company -- working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  2. Evolution in bioinformatic resources: 2009 update on the Bioinformatics Links Directory.

    PubMed

    Brazas, Michelle D; Yamada, Joseph Tadashi; Ouellette, B F Francis

    2009-07-01

    All of the life science research web servers published in this and previous issues of Nucleic Acids Research, together with other useful tools, databases and resources for bioinformatics and molecular biology research, are freely accessible online through the Bioinformatics Links Directory, http://bioinformatics.ca/links_directory/. Entirely dependent on user feedback and community input, the Bioinformatics Links Directory exemplifies an open access research tool and resource. With 112 websites featured in the July 2009 Web Server Issue of Nucleic Acids Research, the 2009 update brings the total number of servers listed in the Bioinformatics Links Directory close to an impressive 1400 links. A complete list of all links listed in this Nucleic Acids Research 2009 Web Server Issue can be accessed online at http://bioinformatics.ca/links_directory/narweb2009/. The 2009 update of the Bioinformatics Links Directory, which includes the Web Server list and summaries, is also available online at the Nucleic Acids Research website, http://nar.oxfordjournals.org/.

  3. Bioinformatics and its applications in plant biology.

    PubMed

    Rhee, Seung Yon; Dickerson, Julie; Xu, Dong

    2006-01-01

    Bioinformatics plays an essential role in today's plant science. As the amount of data grows exponentially, there is a parallel growth in the demand for tools and methods in data management, visualization, integration, analysis, modeling, and prediction. At the same time, many researchers in biology are unfamiliar with available bioinformatics methods, tools, and databases, which could lead to missed opportunities or misinterpretation of the information. In this review, we describe some of the key concepts, methods, software packages, and databases used in bioinformatics, with an emphasis on those relevant to plant science. We also cover some fundamental issues related to biological sequence analyses, transcriptome analyses, computational proteomics, computational metabolomics, bio-ontologies, and biological databases. Finally, we explore a few emerging research topics in bioinformatics.

  4. Bioinformatics Visualisation Tools: An Unbalanced Picture.

    PubMed

    Broască, Laura; Ancuşa, Versavia; Ciocârlie, Horia

    2016-01-01

    Visualization tools represent a key element in triggering human creativity while being supported with the analysis power of the machine. This paper analyzes free network visualization tools for bioinformatics, frames them in domain specific requirements and compares them. PMID:27577488

  5. Bioinformatics in Italy: BITS2011, the Eighth Annual Meeting of the Italian Society of Bioinformatics

    PubMed Central

    2012-01-01

    The BITS2011 meeting, held in Pisa on June 20-22, 2011, brought together more than 120 Italian researchers working in the field of Bioinformatics, as well as students in Bioinformatics, Computational Biology, Biology, Computer Sciences, and Engineering, representing a landscape of Italian bioinformatics research. This preface provides a brief overview of the meeting and introduces the peer-reviewed manuscripts that were accepted for publication in this Supplement. PMID:22536954

  6. No-boundary thinking in bioinformatics research

    PubMed Central

    2013-01-01

    Currently there are definitions from many agencies and research societies defining “bioinformatics” as deriving knowledge from computational analysis of large volumes of biological and biomedical data. Should this be the bioinformatics research focus? We will discuss this issue in this review article. We would like to promote the idea of supporting human-infrastructure (HI) with no-boundary thinking (NT) in bioinformatics (HINT). PMID:24192339

  7. Built Environment Wind Turbine Roadmap

    SciTech Connect

    Smith, J.; Forsyth, T.; Sinclair, K.; Oteri, F.

    2012-11-01

    The market currently encourages built-environment wind turbine (BWT) deployment before the technology is ready for full-scale commercialization. To address this issue, industry stakeholders convened a Rooftop and Built-Environment Wind Turbine Workshop on August 11 - 12, 2010, at the National Wind Technology Center, located at the U.S. Department of Energy’s National Renewable Energy Laboratory in Boulder, Colorado. This report summarizes the workshop.

  8. Response of mollusc assemblages to climate variability and anthropogenic activities: a 4000-year record from a shallow bar-built lagoon system.

    PubMed

    Cerrato, Robert M; Locicero, Philip V; Goodbred, Steven L

    2013-10-01

    With their position at the interface between land and ocean and their fragile nature, lagoons are sensitive to environmental change, and it is reasonable to expect these changes would be recorded in well-preserved taxa such as molluscs. To test this, the 4000-year history of molluscs in Great South Bay, a bar-built lagoon, was reconstructed from 24 vibracores. Using x-radiography to identify shell layers, faunal counts, shell condition, organic content, and sediment type were measured in 325 samples. Sample age was estimated by interpolating 40 radiocarbon dates. K-means cluster analysis identified three molluscan assemblages, corresponding to sand-associated and mud-associated groups, and the third associated with inlet areas. Redundancy and regression tree analyses indicated that significant transitions from the sand-associated to mud-associated assemblage occurred over large portions of the bay about 650 and 294 years bp. The first date corresponds to the transition from the Medieval Warm Period to the Little Ice Age; this change in climate reduced the frequency of strong storms, likely leading to reduced barrier island breaching, greater bay enclosure, and fine-grained sediment accumulation. The second date marks the initiation of clear cutting by European settlers, an activity that would have increased runoff of fine-grained material. The occurrence of the inlet assemblage in the western and eastern ends of the bay is consistent with a history of inlets in these areas, even though prior to Hurricane Sandy in 2012, no inlet was present in the eastern bay in almost 200 years. The mud dominant, Mulinia lateralis, is a bivalve often associated with environmental disturbances. Its increased frequency over the past 300 years suggests that disturbances are more common in the bay than in the past. Management activities maintaining the current barrier island state may be contributing to the sand-mud transition and to the bay's susceptibility to disturbances.
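
    The K-means step used to separate samples into assemblages can be sketched with a small NumPy implementation. The count matrix below is invented and far smaller than the study's 325 samples; it only illustrates the clustering technique, not the authors' data or exact procedure.

      # Minimal NumPy k-means, illustrating the clustering step the record
      # describes (grouping samples into assemblages by their faunal counts).
      # The count matrix is invented.
      import numpy as np

      def kmeans(data, k=3, iters=100, seed=0):
          rng = np.random.default_rng(seed)
          centers = data[rng.choice(len(data), size=k, replace=False)]
          for _ in range(iters):
              # Assign each sample to its nearest center.
              dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
              labels = dists.argmin(axis=1)
              # Recompute centers; keep a center in place if its cluster is empty.
              new_centers = np.array([
                  data[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                  for j in range(k)
              ])
              if np.allclose(new_centers, centers):
                  break
              centers = new_centers
          return labels, centers

      if __name__ == "__main__":
          # Rows: samples; columns: counts of three mollusc taxa (invented numbers).
          counts = np.array([[40, 2, 1], [38, 3, 0], [2, 30, 5],
                             [1, 28, 6], [5, 4, 25], [3, 6, 27]], dtype=float)
          labels, _ = kmeans(counts, k=3)
          print(labels)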

  9. Response of mollusc assemblages to climate variability and anthropogenic activities: a 4000-year record from a shallow bar-built lagoon system.

    PubMed

    Cerrato, Robert M; Locicero, Philip V; Goodbred, Steven L

    2013-10-01

    With their position at the interface between land and ocean and their fragile nature, lagoons are sensitive to environmental change, and it is reasonable to expect these changes would be recorded in well-preserved taxa such as molluscs. To test this, the 4000-year history of molluscs in Great South Bay, a bar-built lagoon, was reconstructed from 24 vibracores. Using x-radiography to identify shell layers, faunal counts, shell condition, organic content, and sediment type were measured in 325 samples. Sample age was estimated by interpolating 40 radiocarbon dates. K-means cluster analysis identified three molluscan assemblages, corresponding to sand-associated and mud-associated groups, and the third associated with inlet areas. Redundancy and regression tree analyses indicated that significant transitions from the sand-associated to mud-associated assemblage occurred over large portions of the bay about 650 and 294 years bp. The first date corresponds to the transition from the Medieval Warm Period to the Little Ice Age; this change in climate reduced the frequency of strong storms, likely leading to reduced barrier island breaching, greater bay enclosure, and fine-grained sediment accumulation. The second date marks the initiation of clear cutting by European settlers, an activity that would have increased runoff of fine-grained material. The occurrence of the inlet assemblage in the western and eastern ends of the bay is consistent with a history of inlets in these areas, even though prior to Hurricane Sandy in 2012, no inlet was present in the eastern bay in almost 200 years. The mud dominant, Mulinia lateralis, is a bivalve often associated with environmental disturbances. Its increased frequency over the past 300 years suggests that disturbances are more common in the bay than in the past. Management activities maintaining the current barrier island state may be contributing to the sand-mud transition and to the bay's susceptibility to disturbances. PMID

  10. Data Mining for Grammatical Inference with Bioinformatics Criteria

    NASA Astrophysics Data System (ADS)

    López, Vivian F.; Aguilar, Ramiro; Alonso, Luis; Moreno, María N.; Corchado, Juan M.

    In this paper we describe both theoretical and practical results of a novel data mining process that combines hybrid association-analysis techniques with classical sequencing algorithms from genomics to generate grammatical structures for a specific language. We used a compiler-generator system to develop a practical application within the area of grammarware, where concepts from language analysis are applied to other disciplines, such as bioinformatics. The tool automatically measures the complexity of the grammar obtained from textual data. A technique for the incremental discovery of sequential patterns is presented to obtain simplified production rules, which are then compacted using bioinformatics criteria to make up a grammar.
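
    A loose sketch of the sequential-pattern idea, assuming nothing about the authors' actual algorithm: frequent contiguous patterns are mined incrementally from token sequences and each one is rewritten as a toy production rule. The input sequences and the support threshold are made up.

        # Illustrative sketch (not the authors' algorithm): mine frequent contiguous
        # patterns of increasing length and emit each as a toy production rule.
        from collections import Counter

        sequences = [list("atgcatg"), list("tgcatgc")]   # stand-in "textual data"

        def frequent_ngrams(seqs, n, min_support):
            counts = Counter(tuple(s[i:i + n]) for s in seqs for i in range(len(s) - n + 1))
            return [gram for gram, c in counts.items() if c >= min_support]

        rule_id = 0
        for n in (2, 3):                                  # incrementally grow the pattern length
            for gram in frequent_ngrams(sequences, n, min_support=3):
                rule_id += 1
                print(f"N{rule_id} -> {' '.join(gram)}")  # nonterminal -> frequent pattern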

  11. Regulatory bioinformatics for food and drug safety.

    PubMed

    Healy, Marion J; Tong, Weida; Ostroff, Stephen; Eichler, Hans-Georg; Patak, Alex; Neuspiel, Margaret; Deluyker, Hubert; Slikker, William

    2016-10-01

    "Regulatory Bioinformatics" strives to develop and implement a standardized and transparent bioinformatic framework to support the implementation of existing and emerging technologies in regulatory decision-making. It has great potential to improve public health through the development and use of clinically important medical products and tools to manage the safety of the food supply. However, the application of regulatory bioinformatics also poses new challenges and requires new knowledge and skill sets. In the latest Global Coalition on Regulatory Science Research (GCRSR) governed conference, Global Summit on Regulatory Science (GSRS2015), regulatory bioinformatics principles were presented with respect to global trends, initiatives and case studies. The discussion revealed that datasets, analytical tools, skills and expertise are rapidly developing, in many cases via large international collaborative consortia. It also revealed that significant research is still required to realize the potential applications of regulatory bioinformatics. While there is significant excitement in the possibilities offered by precision medicine to enhance treatments of serious and/or complex diseases, there is a clear need for further development of mechanisms to securely store, curate and share data, integrate databases, and standardized quality control and data analysis procedures. A greater understanding of the biological significance of the data is also required to fully exploit vast datasets that are becoming available. The application of bioinformatics in the microbiological risk analysis paradigm is delivering clear benefits both for the investigation of food borne pathogens and for decision making on clinically important treatments. It is recognized that regulatory bioinformatics will have many beneficial applications by ensuring high quality data, validated tools and standardized processes, which will help inform the regulatory science community of the requirements

  13. Bioinformatics process management: information flow via a computational journal

    PubMed Central

    Feagan, Lance; Rohrer, Justin; Garrett, Alexander; Amthauer, Heather; Komp, Ed; Johnson, David; Hock, Adam; Clark, Terry; Lushington, Gerald; Minden, Gary; Frost, Victor

    2007-01-01

    This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features that we and other projects have found critical, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples. PMID:18053179
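
    To make the journal idea concrete, here is a minimal, hypothetical sketch of recording one analysis step together with its command and an output checksum. It is not the BCJ's actual interface, and the file names are placeholders.

        # Illustrative sketch of the "computational journal" idea: append a provenance
        # record (inputs, command, output checksum) for each step of an analysis.
        import hashlib, json, time

        JOURNAL = "journal.jsonl"

        def record_step(description, command, output_path):
            with open(output_path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()   # fingerprint the result file
            entry = {"time": time.strftime("%Y-%m-%dT%H:%M:%S"),
                     "description": description,
                     "command": command,
                     "output": output_path,
                     "sha256": digest}
            with open(JOURNAL, "a") as journal:
                journal.write(json.dumps(entry) + "\n")

        # Hypothetical usage:
        # record_step("filter low-quality reads",
        #             "quality_filter reads.fastq > filtered.fastq",
        #             "filtered.fastq")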

  14. New design incinerator being built

    SciTech Connect

    Not Available

    1980-09-01

    A $14 million garbage-burning facility is being built by Reedy Creek Utilities Co. in cooperation with DOE at Lake Buena Vista, Fla., on the edge of Walt Disney World. The nation's first large-volume slagging pyrolysis incinerator will burn municipal waste in a more beneficial way and supply 15% of the amusement park's energy demands. By studying the new incinerator's slag-producing capabilities, engineers hope to design similar facilities for isolating low-level nuclear wastes in inert, rocklike slag.

  15. Providing web servers and training in Bioinformatics: 2010 update on the Bioinformatics Links Directory.

    PubMed

    Brazas, Michelle D; Yamada, Joseph T; Ouellette, B F Francis

    2010-07-01

    The Links Directory at Bioinformatics.ca continues its collaboration with Nucleic Acids Research to jointly publish and compile a freely accessible, online collection of tools, databases and resource materials for bioinformatics and molecular biology research. The July 2010 Web Server issue of Nucleic Acids Research adds an additional 115 web server tools and 7 updates to the directory at http://bioinformatics.ca/links_directory/, bringing the total number of servers listed close to an impressive 1500 links. The Bioinformatics Links Directory represents an excellent community resource for locating bioinformatic tools and databases to aid one's research, and in this context bioinformatic education needs and initiatives are discussed. A complete list of all links featured in this Nucleic Acids Research 2010 Web Server issue can be accessed online at http://bioinformatics.ca/links_directory/narweb2010/. The 2010 update of the Bioinformatics Links Directory, which includes the Web Server list and summaries, is also available online at the Nucleic Acids Research website, http://nar.oxfordjournals.org/.

  16. Evolution of web services in bioinformatics.

    PubMed

    Neerincx, Pieter B T; Leunissen, Jack A M

    2005-06-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex, and in order to understand them it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools, this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
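
    As a small, concrete example of chaining web services into a pipeline, the sketch below calls the public NCBI E-utilities (one widely used set of bioinformatics web services): a search request is followed by a fetch of the matching records. The query terms are arbitrary and the crude XML parsing is for illustration only.

        # Illustrative pipeline of two chained web-service calls: search, then fetch.
        import re, urllib.parse, urllib.request

        BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

        def call(endpoint, **params):
            url = f"{BASE}/{endpoint}?{urllib.parse.urlencode(params)}"
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()

        # Step 1: search the nucleotide database (query chosen arbitrarily).
        xml = call("esearch.fcgi", db="nucleotide",
                   term="BRCA1[Gene] AND human[Organism]", retmax=3)
        ids = re.findall(r"<Id>(\d+)</Id>", xml)

        # Step 2: fetch the matching records in FASTA format and show the start of the output.
        if ids:
            fasta = call("efetch.fcgi", db="nucleotide", id=",".join(ids),
                         rettype="fasta", retmode="text")
            print(fasta[:300])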

  17. Bioinformatics for analysis of poxvirus genomes.

    PubMed

    Da Silva, Melissa; Upton, Chris

    2012-01-01

    In recent years, there have been numerous unprecedented technological advances in the field of molecular biology; these include DNA sequencing, mass spectrometry of proteins, and microarray analysis of mRNA transcripts. Perhaps, however, it is the area of genomics, which has now generated the complete genome sequences of more than 100 poxviruses, that has had the greatest impact on the average virology researcher because the DNA sequence data is in constant use in many different ways by almost all molecular virologists. As this data resource grows, so does the importance of the availability of databases and software tools to enable the bench virologist to work with and make use of this (valuable/expensive) DNA sequence information. Thus, providing researchers with intuitive software to first select and reformat genomics data from large databases, second, to compare/analyze genomics data, and third, to view and interpret large and complex sets of results has become pivotal in enabling progress to be made in modern virology. This chapter is directed at the bench virologist and describes the software required for a number of common bioinformatics techniques that are useful for comparing and analyzing poxvirus genomes. In a number of examples, we also highlight the Viral Orthologous Clusters database system and integrated tools that we developed for the management and analysis of complete viral genomes.

  18. A Guide to Bioinformatics for Immunologists

    PubMed Central

    Whelan, Fiona J.; Yap, Nicholas V. L.; Surette, Michael G.; Golding, G. Brian; Bowdish, Dawn M. E.

    2013-01-01

    Bioinformatics includes a suite of methods, which are cheap, approachable, and many of which are easily accessible without any sort of specialized bioinformatic training. Yet, despite this, bioinformatic tools are under-utilized by immunologists. Herein, we review a representative set of publicly available, easy-to-use bioinformatic tools using our own research on an under-annotated human gene, SCARA3, as an example. SCARA3 shares an evolutionary relationship with the class A scavenger receptors, but preliminary research showed that it was divergent enough that its function remained unclear. In our quest for more information about this gene – did it share gene sequence similarities to other scavenger receptors? Did it contain conserved protein domains? Where was it expressed in the human body? – we discovered the power and informative potential of publicly available bioinformatic tools designed for the novice in mind, which allowed us to hypothesize on the regulation, structure, and function of this protein. We argue that these tools are largely applicable to many facets of immunology research. PMID:24363654

  19. A Quick Guide for Building a Successful Bioinformatics Community

    PubMed Central

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-01-01

    “Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  20. A quick guide for building a successful bioinformatics community.

    PubMed

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-02-01

    "Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  1. Functional informatics: convergence and integration of automation and bioinformatics.

    PubMed

    Ilyin, Sergey E; Bernal, Alejandro; Horowitz, Daniel; Derian, Claudia K; Xin, Hong

    2004-09-01

    The biopharmaceutical industry is currently being presented with opportunities to improve research and business efficiency via automation and the integration of various systems. In the examples discussed, industrial high-throughput screening systems are integrated with functional tools and bioinformatics to facilitate target and biomarker identification and validation. These integrative functional approaches generate value-added opportunities by leveraging available automation and information technologies into new applications that are broadly applicable to different types of projects, and by improving the overall research and development and business efficiency via the integration of various systems.

  2. Bioinformatics: A History of Evolution "In Silico"

    ERIC Educational Resources Information Center

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  3. Bioinformatics in Undergraduate Education: Practical Examples

    ERIC Educational Resources Information Center

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  4. Bioboxes: standardised containers for interchangeable bioinformatics software.

    PubMed

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.

  5. Bioboxes: standardised containers for interchangeable bioinformatics software.

    PubMed

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable. PMID:26473029
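
    A minimal sketch of the containerisation idea from Python: a tool packaged in a Docker image is run with the working directory mounted so input and output files are shared with the host. The image name and the command inside the container are hypothetical placeholders, not an actual biobox.

        # Illustrative sketch: run a containerised command-line tool with Docker.
        import os, subprocess

        def run_in_container(image, command, workdir="/data"):
            cmd = ["docker", "run", "--rm",
                   "-v", f"{os.getcwd()}:{workdir}",   # share host files with the container
                   "-w", workdir,                      # run the command inside that directory
                   image] + command
            subprocess.run(cmd, check=True)

        # Hypothetical usage (image and tool names are placeholders):
        # run_in_container("example/assembler:latest",
        #                  ["assemble", "reads.fastq", "-o", "contigs.fa"])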

  6. Implementing bioinformatic workflows within the bioextract server

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  7. "Extreme Programming" in a Bioinformatics Class

    ERIC Educational Resources Information Center

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP). The…

  8. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    ERIC Educational Resources Information Center

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…

  9. 2010 Translational bioinformatics year in review

    PubMed Central

    Miller, Katharine S

    2011-01-01

    A review of 2010 research in translational bioinformatics provides much to marvel at. We have seen notable advances in personal genomics, pharmacogenetics, and sequencing. At the same time, the infrastructure for the field has burgeoned. While acknowledging that, according to researchers, the members of this field tend to be overly optimistic, the authors predict a bright future. PMID:21672905

  10. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    EPA Science Inventory

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  11. Navigating the changing learning landscape: perspective from bioinformatics.ca

    PubMed Central

    Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs. PMID:23515468

  12. The 2011 Bioinformatics Links Directory update: more resources, tools and databases and features to empower the bioinformatics community.

    PubMed

    Brazas, Michelle D; Yim, David S; Yamada, Joseph T; Ouellette, B F Francis

    2011-07-01

    The Bioinformatics Links Directory continues its collaboration with Nucleic Acids Research to collaboratively publish and compile a freely accessible, online collection of tools, databases and resource materials for bioinformatics and molecular biology research. The July 2011 Web Server issue of Nucleic Acids Research adds an additional 78 web server tools and 14 updates to the directory at http://bioinformatics.ca/links_directory/.

  13. Composable languages for bioinformatics: the NYoSh experiment

    PubMed Central

    Simi, Manuele

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  14. Composable languages for bioinformatics: the NYoSh experiment.

    PubMed

    Simi, Manuele; Campagne, Fabien

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  16. Bioinformatic characterization of plant networks

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram

    2008-06-30

    Cells and organisms are governed by networks of interactions, genetic, physical and metabolic. Large-scale experimental studies of interactions between components of biological systems have been performed for a variety of eukaryotic organisms. However, there is a dearth of such data for plants. Computational methods for prediction of relationships between proteins, primarily based on comparative genomics, provide a useful systems-level view of cellular functioning and can be used to extend information about other eukaryotes to plants. We have predicted networks for Arabidopsis thaliana, Oryza sativa indica and japonica and several plant pathogens using the Bioverse (http://bioverse.compbio.washington.edu) and show that they are similar to experimentally-derived interaction networks. Predicted interaction networks for plants can be used to provide novel functional annotations and predictions about plant phenotypes and aid in rational engineering of biosynthesis pathways.
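
    A minimal sketch of working with such a predicted interaction network, assuming the networkx library and a hypothetical list of scored protein pairs; the identifiers follow the Arabidopsis locus format, but the edges themselves are invented rather than taken from the Bioverse.

        # Illustrative sketch: inspect a small (hypothetical) predicted interaction network.
        import networkx as nx

        predicted_edges = [
            ("AT1G01010", "AT2G33880", 0.92),
            ("AT1G01010", "AT5G60910", 0.80),
            ("AT2G33880", "AT5G60910", 0.75),
            ("AT3G24650", "AT1G01010", 0.65),
        ]  # invented (protein A, protein B, confidence) triples

        g = nx.Graph()
        for a, b, conf in predicted_edges:
            g.add_edge(a, b, confidence=conf)

        # Rank proteins by connectivity to spot candidate hubs in the predicted network.
        hubs = sorted(g.degree, key=lambda node_deg: node_deg[1], reverse=True)
        print("most connected proteins:", hubs[:3])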

  17. Bioinformatics approaches to cancer gene discovery.

    PubMed

    Narayanan, Ramaswamy

    2007-01-01

    The Cancer Gene Anatomy Project (CGAP) database of the National Cancer Institute has thousands of known and novel expressed sequence tags (ESTs). These ESTs, derived from diverse normal and tumor cDNA libraries, offer an attractive starting point for cancer gene discovery. Data-mining the CGAP database led to the identification of ESTs that were predicted to be specific to select solid tumors. Two genes from these efforts were taken to proof of concept for diagnostic and therapeutics indications of cancer. Microarray technology was used in conjunction with bioinformatics to understand the mechanism of one of the targets discovered. These efforts provide an example of gene discovery by using bioinformatics approaches. The strengths and weaknesses of this approach are discussed in this review.

  18. Machine learning: an indispensable tool in bioinformatics.

    PubMed

    Inza, Iñaki; Calvo, Borja; Armañanzas, Rubén; Bengoetxea, Endika; Larrañaga, Pedro; Lozano, José A

    2010-01-01

    The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. In order to fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms and describe the characteristics of the main data preprocessing, supervised classification, and clustering techniques. Feature selection and classifier evaluation, two supervised classification topics that have a deep impact on current bioinformatics, are also presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine learning community. PMID:19957143
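
    A compact sketch of the supervised-classification workflow touched on above (feature selection, model fitting, and cross-validated evaluation), using scikit-learn on synthetic data rather than any real biomarker set; the dimensions and parameters are arbitrary.

        # Illustrative sketch: feature selection + classification with cross-validation.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline

        # Synthetic stand-in for a samples x features (e.g. expression) matrix with labels.
        X, y = make_classification(n_samples=120, n_features=500, n_informative=15,
                                   random_state=0)

        clf = Pipeline([
            ("select", SelectKBest(f_classif, k=20)),       # keep the 20 most discriminative features
            ("model", LogisticRegression(max_iter=1000)),   # simple baseline classifier
        ])
        scores = cross_val_score(clf, X, y, cv=5)           # 5-fold cross-validated accuracy
        print("mean accuracy: %.2f" % scores.mean())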

  19. Bioinformatics Pipeline for Transcriptome Sequencing Analysis.

    PubMed

    Djebali, Sarah; Wucher, Valentin; Foissac, Sylvain; Hitte, Christophe; Corre, Evan; Derrien, Thomas

    2017-01-01

    The development of High Throughput Sequencing (HTS) for RNA profiling (RNA-seq) has shed light on the diversity of transcriptomes. While RNA-seq is becoming a de facto standard for monitoring the population of expressed transcripts in a given condition at a specific time, processing the huge amount of data it generates requires dedicated bioinformatics programs. Here, we describe a standard bioinformatics protocol using state-of-the-art tools, the STAR mapper to align reads onto a reference genome, Cufflinks to reconstruct the transcriptome, and RSEM to quantify expression levels of genes and transcripts. We present the workflow using human transcriptome sequencing data from two biological replicates of the K562 cell line produced as part of the ENCODE3 project. PMID:27662878
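
    A rough sketch of how the three tools named above might be wired together from a driver script. The options shown are abbreviated and indicative only, the file and index names are hypothetical, and each tool's own documentation should be consulted before a real run.

        # Illustrative sketch of an RNA-seq driver script; options and paths are indicative only.
        import subprocess

        def run(cmd):
            print(">>", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. Align reads to the reference genome with STAR (index and FASTQ paths are placeholders).
        run(["STAR", "--runThreadN", "4", "--genomeDir", "star_index",
             "--readFilesIn", "sample_R1.fastq", "sample_R2.fastq",
             "--outSAMtype", "BAM", "SortedByCoordinate"])

        # 2. Reconstruct transcripts with Cufflinks from the sorted alignments.
        run(["cufflinks", "-p", "4", "-o", "cufflinks_out", "Aligned.sortedByCoord.out.bam"])

        # 3. Quantify gene/transcript expression with RSEM against a prebuilt reference.
        run(["rsem-calculate-expression", "--paired-end", "-p", "4",
             "sample_R1.fastq", "sample_R2.fastq", "rsem_reference", "sample"])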

  20. A toolbox for developing bioinformatics software

    PubMed Central

    Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M.

    2012-01-01

    Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful for planning a project, supporting the involvement of experts (e.g. experimentalists), and promoting higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers. PMID:21803787
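
    As an illustration of one such practice (automated unit testing), not taken from the paper itself, a tiny sequence utility paired with a pytest-style test might look like this:

        # Illustrative example: a small sequence utility guarded by a unit test (run with pytest).
        COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G", "N": "N"}

        def reverse_complement(seq):
            """Return the reverse complement of a DNA sequence (case-insensitive)."""
            return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

        def test_reverse_complement():
            assert reverse_complement("ATGC") == "GCAT"
            assert reverse_complement("atgn") == "NCAT"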

  1. Discovery and Classification of Bioinformatics Web Services

    SciTech Connect

    Rocco, D; Critchlow, T

    2002-09-02

    The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.
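
    A toy sketch of the service-class idea under stated assumptions: a class lists required inputs and outputs, and a candidate service's advertised interface is checked against it. The field names and example metadata are hypothetical, not the framework's actual schema.

        # Illustrative sketch: does a candidate service satisfy a service-class description?
        SEQUENCE_SEARCH_CLASS = {
            "required_inputs": {"sequence", "database"},
            "required_outputs": {"alignments"},
        }

        def matches_class(service_metadata, service_class):
            return (service_class["required_inputs"] <= set(service_metadata["inputs"])
                    and service_class["required_outputs"] <= set(service_metadata["outputs"]))

        candidate = {"name": "example BLAST endpoint",           # hypothetical metadata
                     "inputs": ["sequence", "database", "evalue_cutoff"],
                     "outputs": ["alignments", "scores"]}
        print(matches_class(candidate, SEQUENCE_SEARCH_CLASS))   # True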

  2. [Applied problems of mathematical biology and bioinformatics].

    PubMed

    Lakhno, V D

    2011-01-01

    Mathematical biology and bioinformatics represent a new and rapidly progressing line of investigation that emerged in the course of work on the project "Human genome". The main applied problems of these sciences are drug design, patient-specific medicine and nanobioelectronics. It is shown that progress in the technology of mass sequencing of the human genome has set the stage for starting the national program on patient-specific medicine.

  3. An active registry for bioinformatics web services

    PubMed Central

    Pettifer, S.; Thorne, D.; McDermott, P.; Attwood, T.; Baran, J.; Bryne, J. C.; Hupponen, T.; Mowbray, D.; Vriend, G.

    2009-01-01

    Summary: The EMBRACE Registry is a web portal that collects and monitors web services according to test scripts provided by their administrators. Users are able to search for, rank and annotate services, enabling them to select the most appropriate working service for inclusion in their bioinformatics analysis tasks. Availability and implementation: Web site implemented with PHP, Python, MySQL and Apache, with all major browsers supported. (www.embraceregistry.net) Contact: steve.pettifer@manchester.ac.uk PMID:19460889

  4. Broader incorporation of bioinformatics in education: opportunities and challenges.

    PubMed

    Cummings, Michael P; Temple, Glena G

    2010-11-01

    The major opportunities for broader incorporation of bioinformatics in education can be placed into three general categories: general applicability of bioinformatics in life science and related curricula; inherent fit of bioinformatics for promoting student learning in most biology programs; and the general experience and associated comfort students have with computers and technology. Conversely, the major challenges for broader incorporation of bioinformatics in education can be placed into three general categories: required infrastructure and logistics; instructor knowledge of bioinformatics and continuing education; and the breadth of bioinformatics, and the diversity of students and educational objectives. Broader incorporation of bioinformatics at all education levels requires overcoming the challenges to using transformative computer-requiring learning activities, assisting faculty in collecting assessment data on mastery of student learning outcomes, as well as creating more faculty development opportunities that span diverse skill levels, with an emphasis placed on providing resource materials that are kept up-to-date as the field and tools change.

  5. Quantum Bio-Informatics IV

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori

    2011-01-01

    The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. Ojima

  6. A library-based bioinformatics services program*

    PubMed Central

    Yarfitz, Stuart; Ketchell, Debra S.

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise. PMID:10658962

  7. Bioinformatics on the Cloud Computing Platform Azure

    PubMed Central

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  8. Bioinformatics tools for analysing viral genomic data.

    PubMed

    Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J

    2016-04-01

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.
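
    As one concrete example of the quality-control step mentioned above, the sketch below filters a FASTQ file on mean base quality in pure Python. The Phred+33 encoding and the threshold are assumptions, and the file names are placeholders.

        # Illustrative sketch: keep FASTQ reads whose mean Phred quality meets a threshold.
        def filter_fastq(path_in, path_out, min_mean_q=25):
            kept = total = 0
            with open(path_in) as fin, open(path_out, "w") as fout:
                while True:
                    record = [fin.readline() for _ in range(4)]  # header, sequence, '+', quality
                    if not record[0]:
                        break
                    total += 1
                    quals = [ord(c) - 33 for c in record[3].strip()]  # assumes Phred+33 encoding
                    if quals and sum(quals) / len(quals) >= min_mean_q:
                        fout.writelines(record)
                        kept += 1
            print(f"kept {kept} of {total} reads")

        # filter_fastq("reads.fastq", "reads.filtered.fastq")  # paths are hypothetical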

  9. Bringing Web 2.0 to bioinformatics.

    PubMed

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  10. Bioinformatics Approach in Plant Genomic Research.

    PubMed

    Ong, Quang; Nguyen, Phuc; Thao, Nguyen Phuong; Le, Ly

    2016-08-01

    Advances in genomics technology have led to dramatic changes in plant biology research. Plant biologists now have easy access to enormous genomic data sets and can study plant high-density genetic variation in depth at the molecular level. Therefore, fully understanding and skillfully using bioinformatics tools to manage and analyze these data is essential in current plant genome research. Many plant genome databases have been established and have continued to expand in recent years. Meanwhile, analytical methods based on bioinformatics are also well developed in many aspects of plant genomic research, including comparative genomic analysis, phylogenomics and evolutionary analysis, and genome-wide association studies. However, constant upgrading of computational infrastructure, such as high-capacity data storage and high-performance analysis software, remains a real challenge for plant genome research. This review paper focuses on the challenges and opportunities that knowledge and skills in bioinformatics bring to plant scientists in the present plant genomics era, as well as the future need for effective tools to facilitate the translation of knowledge from new sequencing data into enhanced plant productivity. PMID:27499685

  11. Application of Bioinformatics in Chronobiology Research

    PubMed Central

    Lopes, Robson da Silva; Resende, Nathalia Maria; Honorio-França, Adenilda Cristina; França, Eduardo Luzía

    2013-01-01

    Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research. PMID:24187519

  12. Bioinformatics strategies for the analysis of lipids.

    PubMed

    Wheelock, Craig E; Goto, Susumu; Yetukuri, Laxman; D'Alexandri, Fabio Luiz; Klukas, Christian; Schreiber, Falk; Oresic, Matej

    2009-01-01

    Owing to their importance in cellular physiology and pathology as well as to recent technological advances, the study of lipids has reemerged as a major research target. However, the structural diversity of lipids presents a number of analytical and informatics challenges. The field of lipidomics is a new postgenome discipline that aims to develop comprehensive methods for lipid analysis, necessitating concomitant developments in bioinformatics. The evolving research paradigm requires that new bioinformatics approaches accommodate genomic as well as high-level perspectives, integrating genome, protein, chemical and network information. The incorporation of lipidomics information into these data structures will provide mechanistic understanding of lipid functions and interactions in the context of cellular and organismal physiology. Accordingly, it is vital that specific bioinformatics methods be developed to analyze the wealth of lipid data being acquired. Herein, we present an overview of the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and application of its tools to the analysis of lipid data. We also describe a series of software tools and databases (KGML-ED, VANTED, MZmine, and LipidDB) that can be used for the processing of lipidomics data and biochemical pathway reconstruction, an important next step in the development of the lipidomics field.

  13. The House That Jones Built.

    ERIC Educational Resources Information Center

    Rist, Marilee C.

    1992-01-01

    Describes lifelong commitment of middle-school principal and mayor W.J. Jones to Coahoma, a small town in Mississippi Delta. Thanks to his efforts, town recently acquired a sewage system, blacktopped roads, and new housing (through Habitat for Humanity and World Vision). Although town elementary school fell victim to consolidation and children are…

  14. Advancing standards for bioinformatics activities: persistence, reproducibility, disambiguation and Minimum Information About a Bioinformatics investigation (MIABi).

    PubMed

    Tan, Tin Wee; Tong, Joo Chuan; Khan, Asif M; de Silva, Mark; Lim, Kuan Siong; Ranganathan, Shoba

    2010-12-02

    The 2010 International Conference on Bioinformatics, InCoB2010, which is the annual conference of the Asia-Pacific Bioinformatics Network (APBioNet), has agreed to publish conference papers in compliance with the Minimum Information about a Bioinformatics investigation (MIABi) guidelines proposed in June 2009. Authors of the conference supplements in BMC Bioinformatics, BMC Genomics and Immunome Research have consented to cooperate in this process, which will include the procedures described herein, where appropriate, to ensure data and software persistence and perpetuity, database and resource re-instantiability and reproducibility of results, author and contributor identity disambiguation and MIABi-compliance. Wherever possible, datasets and databases will be submitted to depositories with standardized terminologies. As standards are evolving, this process is intended as a prelude to the 100 BioDatabases (BioDB100) initiative, whereby APBioNet collaborators will contribute exemplar databases to demonstrate the feasibility of standards compliance and participate in refining the process for peer review of such publications and validation of scientific claims and standards compliance. This testbed represents another step in advancing standards-based processes in the bioinformatics community, which is essential to the growing interoperability of biological data, information, knowledge and computational resources.

  15. Built for the road ahead.

    PubMed

    Mansour, Alexander

    2015-10-01

    Henry Ford Health System in Detroit is seeking new ways to lower and cover costs for the large, low-income population it serves in southeastern Michigan. Employing a strategy that couples the federal 340B Drug Pricing Program with a prescription assistance program of its own creation, Henry Ford has seen improvement in the following areas: Increased medication adherence. Reduced readmissions. Cost savings that are sufficient to expand services where expansion otherwise would not have been feasible. PMID:26595980

  16. Shared bioinformatics databases within the Unipro UGENE platform.

    PubMed

    Protsyuk, Ivan V; Grekhov, German A; Tiunov, Alexey V; Fursov, Mikhail Y

    2015-01-01

    Unipro UGENE is an open-source bioinformatics toolkit that integrates popular tools along with original instruments for molecular biologists within a unified user interface. Nowadays, most bioinformatics desktop applications, including UGENE, make use of a local data model while processing different types of data. Such an approach is inconvenient for scientists working cooperatively and relying on the same data: multiple copies of certain files must be made for every workplace and kept synchronized whenever they are modified. Therefore, we focused on bringing collaborative work into the UGENE user experience. Currently, several UGENE installations can be connected to a designated shared database and users can interact with it simultaneously. Such databases can be created by UGENE users and be used at their discretion. Objects of each data type supported by UGENE, such as sequences, annotations, multiple alignments, etc., can now be easily imported from or exported to a remote storage. One of the main advantages of this system, compared to existing ones, is the almost simultaneous access of client applications to shared data regardless of their volume. Moreover, the system is capable of storing millions of objects. The storage itself is a regular database server, so even an inexpert user is able to deploy it. Thus, UGENE may provide access to shared data for users located, for example, in the same laboratory or institution. UGENE is available at: http://ugene.net/download.html. PMID:26527191

  17. Teaching the bioinformatics of signaling networks: an integrated approach to facilitate multi-disciplinary learning.

    PubMed

    Korcsmaros, Tamas; Dunai, Zsuzsanna A; Vellai, Tibor; Csermely, Peter

    2013-09-01

    The number of bioinformatics tools and resources that support molecular and cell biology approaches is continuously expanding. Moreover, systems and network biology analyses are increasingly accompanied by integrated bioinformatics methods. Traditional information-centered university teaching methods often fail, as (1) it is impossible to cover all existing approaches in the frame of a single course, and (2) a large segment of current bioinformation can become obsolete in a few years. Signaling networks offer an excellent example for teaching bioinformatics resources and tools, as they are both focused and complex at the same time. Here, we present an outline of a university bioinformatics course with four sample practices to demonstrate how signaling network studies can integrate biochemistry, genetics, cell biology and network sciences. We show that several bioinformatics resources and tools, as well as important concepts and current trends, can also be integrated into signaling network studies. The research-type hands-on experiences we describe enable students to improve key competences such as teamwork, creative and critical thinking and problem solving. Our classroom course curriculum can be re-formulated as an e-learning material or applied as part of a specific training course. The multi-disciplinary approach and the mosaic setup of the course have the additional benefit of supporting the advanced teaching of talented students.

  18. First results for custom-built low-temperature (4.2 K) scanning tunneling microscope/molecular beam epitaxy and pulsed laser epitaxy system designed for spin-polarized measurements

    NASA Astrophysics Data System (ADS)

    Foley, Andrew; Alam, Khan; Lin, Wenzhi; Wang, Kangkang; Chinchore, Abhijit; Corbett, Joseph; Savage, Alan; Chen, Tianjiao; Shi, Meng; Pak, Jeongihm; Smith, Arthur

    2014-03-01

    A custom low-temperature (4.2 K) scanning tunneling microscope system has been developed which is combined directly with a custom molecular beam epitaxy facility (and also including pulsed laser epitaxy) for the purpose of studying surface nanomagnetism of complex spintronic materials down to the atomic scale. For purposes of carrying out spin-polarized STM measurements, the microscope is built into a split-coil, 4.5 Tesla superconducting magnet system where the magnetic field can be applied normal to the sample surface; since, as a result, the microscope does not include eddy current damping, vibration isolation is achieved using a unique combination of two stages of pneumatic isolators along with an acoustical noise shield, in addition to the use of a highly stable as well as modular 'Pan'-style STM design with a high Q factor. First 4.2 K results reveal, with clear atomic resolution, various reconstructions on wurtzite GaN c-plane surfaces grown by MBE, including the c(6x12) on N-polar GaN(0001). Details of the system design and functionality will be presented.

  19. Bioinformatics for the synthetic biology of natural products: integrating across the Design–Build–Test cycle

    PubMed Central

    Currin, Andrew; Jervis, Adrian J.; Rattray, Nicholas J. W.; Swainston, Neil; Yan, Cunyu; Breitling, Rainer

    2016-01-01

    Covering: 2000 to 2016 Progress in synthetic biology is enabled by powerful bioinformatics tools allowing the integration of the design, build and test stages of the biological engineering cycle. In this review we illustrate how this integration can be achieved, with a particular focus on natural products discovery and production. Bioinformatics tools for the DESIGN and BUILD stages include tools for the selection, synthesis, assembly and optimization of parts (enzymes and regulatory elements), devices (pathways) and systems (chassis). TEST tools include those for screening, identification and quantification of metabolites for rapid prototyping. The main advantages and limitations of these tools as well as their interoperability capabilities are highlighted. PMID:27185383

  20. Built Environment Education in Art Education.

    ERIC Educational Resources Information Center

    Guilfoil, Joanne K., Ed.; Sandler, Alan R., Ed.

    This anthology brings the study of the built environment, its design, social and cultural functions, and the criticism thereof into focus. Following a preface and introduction, 22 essays are organized in three parts. Part 1 includes: (1) "Landscape Art and the Role of the Natural Environment in Built Environment Education" (Heather Anderson); (2)…

  1. Microbial bioinformatics for food safety and production

    PubMed Central

    Alkema, Wynand; Boekhorst, Jos; Wels, Michiel

    2016-01-01

    In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production has traditionally been a trial-and-error approach inspired by expert knowledge of the fermentation process. Current developments in high-throughput 'omics' technologies allow the development of more rational approaches to improving fermentation processes, both from the food functionality and the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches used to improve various aspects of the microbial production of fermented food products and food safety. PMID:26082168

  2. Critical Issues in Bioinformatics and Computing

    PubMed Central

    Kesh, Someswar; Raghupathi, Wullianallur

    2004-01-01

    This article provides an overview of the field of bioinformatics and its implications for the various participants. Next-generation issues facing developers (programmers), users (molecular biologists), and the general public (patients) who would benefit from the potential applications are identified. The goal is to create awareness and debate on the opportunities (such as career paths) and the challenges (such as privacy) that arise. A triad model of the participants' roles and responsibilities is presented, along with the identification of the challenges and possible solutions. PMID:18066389

  3. Translational Bioinformatics: Past, Present, and Future.

    PubMed

    Tenenbaum, Jessica D

    2016-02-01

    Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline's brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field.

  4. Translational Bioinformatics: Past, Present, and Future

    PubMed Central

    Tenenbaum, Jessica D.

    2016-01-01

    Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline’s brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field. PMID:26876718

  5. Multiobjective optimization in bioinformatics and computational biology.

    PubMed

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts" that give rise to multiple objectives; these are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
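
    The central concept behind the review, Pareto dominance among competing objectives, can be made concrete in a few lines. The sketch below assumes two objectives that are both minimized (for example, a fit error against model complexity) and uses invented numbers purely for illustration.

      # Pareto dominance for two minimized objectives and extraction of the
      # non-dominated front from a set of candidate solutions (toy data).
      def dominates(a, b):
          """True if a is no worse than b in every objective and better in at least one."""
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      def pareto_front(solutions):
          """Return the non-dominated subset of a list of objective vectors."""
          return [s for s in solutions
                  if not any(dominates(t, s) for t in solutions if t is not s)]

      if __name__ == "__main__":
          candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.0), (4.0, 3.5), (6.0, 1.0)]
          print(pareto_front(candidates))  # (4.0, 3.5) is dominated by (3.0, 3.0)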

  6. Teaching the ABCs of bioinformatics: a brief introduction to the Applied Bioinformatics Course

    PubMed Central

    2014-01-01

    With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course ‘Applied Bioinformatics Course’ (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course to teach students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. With a brief introduction to the background of the course, detailed information about the teaching strategies of the course are outlined in the ‘How to teach’ section. The contents of the course are briefly described in the ‘What to teach’ section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education both in local and international universities. PMID:24008274

  7. Teaching the ABCs of bioinformatics: a brief introduction to the Applied Bioinformatics Course.

    PubMed

    Luo, Jingchu

    2014-11-01

    With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course 'Applied Bioinformatics Course' (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course to teach students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. With a brief introduction to the background of the course, detailed information about the teaching strategies of the course are outlined in the 'How to teach' section. The contents of the course are briefly described in the 'What to teach' section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education both in local and international universities.

  8. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    ERIC Educational Resources Information Center

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  9. The complement regulator C4b-binding protein analyzed by molecular modeling, bioinformatics and computer-aided experimental design.

    PubMed

    Villoutreix, B O; Blom, A M; Webb, J; Dahlbäck, B

    1999-05-01

    Molecular modeling and bioinformatics have gained recognition as scientific disciplines of importance in the field of biomedical research. Molecular modeling not only allows prediction of the three-dimensional structure of a protein but also helps to define its function. Careful incorporation of experimental findings into the structural/theoretical data provides a means to understand molecular mechanisms in highly complex biological systems. C4b-binding protein (C4BP) is composed of one beta-chain and seven alpha-chains, essentially built from three and eight complement control protein (CCP) modules, respectively, followed by a non-repeat carboxy-terminal region involved in polymerization of the chains. C4BP is involved in the regulation of the complement system and interacts with many molecules such as C4b, Arp, protein S and heparin. Here, we report experimental and computer data obtained for C4BP. Protein modeling together with site-directed mutagenesis indicates that R39, R64 and R66 of the C4BP alpha-chain form a key binding site for heparin, suggesting that this region could be of major importance for interaction with C4b. We also propose that the first CCP of the C4BP beta-chain displays a key hydrophobic surface of major importance for the interaction with the coagulation cofactor protein S. PMID:10408373

  10. Discovery of novel xylosides in co-culture of basidiomycetes Trametes versicolor and Ganoderma applanatum by integrated metabolomics and bioinformatics.

    PubMed

    Yao, Lu; Zhu, Li-Ping; Xu, Xiao-Yan; Tan, Ling-Ling; Sadilek, Martin; Fan, Huan; Hu, Bo; Shen, Xiao-Ting; Yang, Jie; Qiao, Bin; Yang, Song

    2016-01-01

    Transcriptomic analysis of cultured fungi suggests that many genes for secondary metabolite synthesis are presumably silent under standard laboratory conditions. In order to investigate the expression of silent genes in symbiotic systems, 136 fungi-fungi symbiotic systems were built by co-culturing seventeen basidiomycetes, among which the co-culture of Trametes versicolor and Ganoderma applanatum demonstrated the strongest coloration of confrontation zones. A metabolomics study of this co-culture discovered that sixty-two features were either newly synthesized or more highly produced in the co-culture compared with the individual cultures. Molecular network analysis highlighted a subnetwork including two novel xylosides (compounds 2 and 3). Compound 2 was further identified as N-(4-methoxyphenyl)formamide 2-O-β-D-xyloside and was revealed to have the potential to enhance the cell viability of the human immortalized bronchial epithelial cell line Beas-2B. Moreover, bioinformatics and transcriptional analysis of T. versicolor revealed a potential candidate gene (GI: 636605689) encoding xylosyltransferases for xylosylation. Additionally, 3-phenyllactic acid and orsellinic acid were detected for the first time in G. applanatum, which may be ascribed to a response against T. versicolor stress. In general, the described co-culture platform provides a powerful tool to discover novel metabolites and helps gain insight into the mechanism of silent gene activation in fungal defense. PMID:27616058

  11. Discovery of novel xylosides in co-culture of basidiomycetes Trametes versicolor and Ganoderma applanatum by integrated metabolomics and bioinformatics

    PubMed Central

    Yao, Lu; Zhu, Li-Ping; Xu, Xiao-Yan; Tan, Ling-Ling; Sadilek, Martin; Fan, Huan; Hu, Bo; Shen, Xiao-Ting; Yang, Jie; Qiao, Bin; Yang, Song

    2016-01-01

    Transcriptomic analysis of cultured fungi suggests that many genes for secondary metabolite synthesis are presumably silent under standard laboratory conditions. In order to investigate the expression of silent genes in symbiotic systems, 136 fungi-fungi symbiotic systems were built by co-culturing seventeen basidiomycetes, among which the co-culture of Trametes versicolor and Ganoderma applanatum demonstrated the strongest coloration of confrontation zones. A metabolomics study of this co-culture discovered that sixty-two features were either newly synthesized or more highly produced in the co-culture compared with the individual cultures. Molecular network analysis highlighted a subnetwork including two novel xylosides (compounds 2 and 3). Compound 2 was further identified as N-(4-methoxyphenyl)formamide 2-O-β-D-xyloside and was revealed to have the potential to enhance the cell viability of the human immortalized bronchial epithelial cell line Beas-2B. Moreover, bioinformatics and transcriptional analysis of T. versicolor revealed a potential candidate gene (GI: 636605689) encoding xylosyltransferases for xylosylation. Additionally, 3-phenyllactic acid and orsellinic acid were detected for the first time in G. applanatum, which may be ascribed to a response against T. versicolor stress. In general, the described co-culture platform provides a powerful tool to discover novel metabolites and helps gain insight into the mechanism of silent gene activation in fungal defense. PMID:27616058

  12. A scalable neuristor built with Mott memristors.

    PubMed

    Pickett, Matthew D; Medeiros-Ribeiro, Gilberto; Williams, R Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors. PMID:23241533

  13. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  14. Translational Bioinformatics: Linking the Molecular World to the Clinical World

    PubMed Central

    Altman, RB

    2014-01-01

    Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care. PMID:22549287

  15. Receptor-binding sites: bioinformatic approaches.

    PubMed

    Flower, Darren R

    2006-01-01

    It is increasingly clear that both transient and long-lasting interactions between biomacromolecules and their molecular partners are the most fundamental of all biological mechanisms and lie at the conceptual heart of protein function. In particular, the protein-binding site is the most fascinating and important mechanistic arbiter of protein function. In this review, I examine the nature of protein-binding sites found in both ligand-binding receptors and substrate-binding enzymes. I highlight two important concepts underlying the identification and analysis of binding sites. The first is based on knowledge: when one knows the location of a binding site in one protein, one can "inherit" the site from one protein to another. The second approach involves the a priori prediction of a binding site from a sequence or a structure. A complete analysis of binding sites will necessarily involve the full range of informatic techniques, from sequence-based bioinformatic analysis through structural bioinformatics to computational chemistry and molecular physics. Integration of both diverse experimental and diverse theoretical approaches is thus a mandatory requirement in the evaluation of binding sites and the binding events that occur within them. PMID:16671408

  16. Bioinformatics for cancer immunology and immunotherapy.

    PubMed

    Charoentong, Pornpimol; Angelova, Mihaela; Efremova, Mirjana; Gallasch, Ralf; Hackl, Hubert; Galon, Jerome; Trajanoski, Zlatko

    2012-11-01

    Recent mechanistic insights obtained from preclinical studies and the approval of the first immunotherapies have motivated an increasing number of academic investigators and pharmaceutical/biotech companies to further elucidate the role of immunity in tumor pathogenesis and to reconsider the role of immunotherapy. Additionally, technological advances (e.g., next-generation sequencing) are providing unprecedented opportunities to draw a comprehensive picture of the tumor genomics landscape and ultimately enable individualized treatment. However, the increasing complexity of the generated data and the plethora of bioinformatics methods and tools pose considerable challenges to both tumor immunologists and clinical oncologists. In this review, we describe current concepts and future challenges for the management and analysis of data for cancer immunology and immunotherapy. We first highlight publicly available databases with a specific focus on cancer immunology, including databases for somatic mutations and epitope databases. We then give an overview of the bioinformatics methods for the analysis of next-generation sequencing data (whole-genome and exome sequencing), epitope prediction tools, as well as methods for integrative data analysis and network modeling. Mathematical models are powerful tools that can predict and explain important patterns in the genetic and clinical progression of cancer. Therefore, a survey of mathematical models for tumor evolution and tumor-immune cell interaction is included. Finally, we discuss future challenges for individualized immunotherapy and suggest how combined computational/experimental approaches can lead to new insights into the molecular mechanisms of cancer, improve diagnosis and prognosis of the disease, and pinpoint novel therapeutic targets.

  17. Bioinformatics analysis of Brucella vaccines and vaccine targets using VIOLIN

    PubMed Central

    2010-01-01

    Background Brucella spp. are Gram-negative, facultative intracellular bacteria that cause brucellosis, one of the commonest zoonotic diseases found worldwide in humans and a variety of animal species. While several animal vaccines are available, there is no effective and safe vaccine for prevention of brucellosis in humans. VIOLIN (http://www.violinet.org) is a web-based vaccine database and analysis system that curates, stores, and analyzes published data of commercialized vaccines, and vaccines in clinical trials or in research. VIOLIN contains information for 454 vaccines or vaccine candidates for 73 pathogens. VIOLIN also contains many bioinformatics tools for vaccine data analysis, data integration, and vaccine target prediction. To demonstrate the applicability of VIOLIN for vaccine research, VIOLIN was used for bioinformatics analysis of existing Brucella vaccines and prediction of new Brucella vaccine targets. Results VIOLIN contains many literature mining programs (e.g., Vaxmesh) that provide in-depth analysis of Brucella vaccine literature. As a result of manual literature curation, VIOLIN contains information for 38 Brucella vaccines or vaccine candidates, 14 protective Brucella antigens, and 68 host response studies to Brucella vaccines from 97 peer-reviewed articles. These Brucella vaccines are classified in the Vaccine Ontology (VO) system and used for different ontological applications. The web-based VIOLIN vaccine target prediction program Vaxign was used to predict new Brucella vaccine targets. Vaxign identified 14 outer membrane proteins that are conserved in six virulent strains from B. abortus, B. melitensis, and B. suis that are pathogenic in humans. Of the 14 membrane proteins, two proteins (Omp2b and Omp31-1) are not present in B. ovis, a Brucella species that is not pathogenic in humans. Brucella vaccine data stored in VIOLIN were compared and analyzed using the VIOLIN query system. Conclusions Bioinformatics curation and ontological
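
    The filtering step summarized above, keeping outer membrane proteins conserved in the human-pathogenic strains but absent from B. ovis, reduces to a presence/absence query. The sketch below is illustrative only: the protein identifiers and the presence/absence table are invented, not Vaxign output.

      # Sketch of a reverse-vaccinology filter in the spirit of the analysis above:
      # keep candidate proteins present in every pathogenic strain and absent from
      # the non-pathogenic comparator. All identifiers here are made up.
      presence = {
          "OmpA_like": {"B. abortus": True, "B. melitensis": True, "B. suis": True, "B. ovis": True},
          "Omp_cand1": {"B. abortus": True, "B. melitensis": True, "B. suis": True, "B. ovis": False},
          "Omp_cand2": {"B. abortus": True, "B. melitensis": False, "B. suis": True, "B. ovis": False},
      }
      pathogenic = ["B. abortus", "B. melitensis", "B. suis"]
      comparator = "B. ovis"

      candidates = [p for p, hits in presence.items()
                    if all(hits[s] for s in pathogenic) and not hits[comparator]]
      print(candidates)  # ['Omp_cand1']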

  18. Built Environment Analysis Tool: April 2013

    SciTech Connect

    Porter, C.

    2013-05-01

    This documentation describes the development of the Built Environment Analysis Tool, which was created to evaluate the effects of built environment scenarios on transportation energy use and greenhouse gas (GHG) emissions. It also provides guidance on how to apply the tool.

  19. Principles of As-Built Engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    As-Built Engineering is a product realization methodology founded on the notion that life-cycle engineering should be based on what is actually produced and not on what is nominally designed. As-Built Engineering is a way of thinking about the production realization process that enables customization in mass production environments. It questions the relevance of nominal based methods of engineering and the role that tolerancing plays in product realization. As-Built Engineering recognizes that there will always be errors associated with manufacturing that cannot be controlled and therefore need to be captured in order to fully characterize each individual product's unique attributes. One benefit of As-Built Engineering is the ability to provide actual product information to designers and analysts enabling them to verify their assumptions using actual part and assembly data. Another benefit is the ability to optimize new and re-engineered assemblies.

  20. A novel tool for assessing and summarizing the built environment

    PubMed Central

    2012-01-01

    Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying scales of community and

  1. The Built Environment Predicts Observed Physical Activity

    PubMed Central

    Kelly, Cheryl; Wilson, Jeffrey S.; Schootman, Mario; Clennin, Morgan; Baker, Elizabeth A.; Miller, Douglas K.

    2014-01-01

    Background: In order to improve our understanding of the relationship between the built environment and physical activity, it is important to identify associations between specific geographic characteristics and physical activity behaviors. Purpose: Examine relationships between observed physical activity behavior and measures of the built environment collected on 291 street segments in Indianapolis and St. Louis. Methods: Street segments were selected using a stratified geographic sampling design to ensure representation of neighborhoods with different land use and socioeconomic characteristics. Characteristics of the built environment on street segments were audited using two methods: in-person field audits and audits based on interpretation of Google Street View imagery, with each method blinded to results from the other. Segments were dichotomized as having a particular characteristic (e.g., sidewalk present or not) based on the two auditing methods separately. Counts of individuals engaged in different forms of physical activity on each segment were assessed using direct observation. Non-parametric statistics were used to compare counts of physically active individuals on each segment with built environment characteristics. Results: Counts of individuals engaged in physical activity were significantly higher on segments with mixed land use or all non-residential land use, and on segments with pedestrian infrastructure (e.g., crosswalks and sidewalks) and public transit. Conclusion: Several micro-level built environment characteristics were associated with physical activity. These data provide support for theories that suggest changing the built environment and related policies may encourage more physical activity. PMID:24904916

  2. Bioinformatics and the Politics of Innovation in the Life Sciences

    PubMed Central

    Zhou, Yinhua; Datta, Saheli; Salter, Charlotte

    2016-01-01

    The governments of China, India, and the United Kingdom are unanimous in their belief that bioinformatics should supply the link between basic life sciences research and its translation into health benefits for the population and the economy. Yet at the same time, as ambitious states vying for position in the future global bioeconomy, they differ considerably in the strategies adopted in pursuit of this goal. At the heart of these differences lies the interaction between epistemic change within the scientific community itself and the apparatus of the state. Drawing on desk-based research and thirty-two interviews with scientists and policy makers in the three countries, this article analyzes the politics that shape this interaction. From this analysis emerges an understanding of the variable capacities of different kinds of states and political systems to work with science in harnessing the potential of new epistemic territories in global life sciences innovation. PMID:27546935

  3. Achievements and challenges in structural bioinformatics and computational biophysics

    PubMed Central

    Samish, Ilan; Bourne, Philip E.; Najmanovich, Rafael J.

    2015-01-01

    Motivation: The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years. These developments are captured annually through the 3DSIG meeting, upon which this article reflects. Results: An increase in accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation with other computational and experimental results. Conclusion: The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. Contact: Rafael.Najmanovich@USherbrooke.ca PMID:25488929

  4. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
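
    Because CouchDB exposes each document through a plain HTTP/JSON interface, client code can fetch a record with a single GET request. The sketch below assumes a local CouchDB instance and a hypothetical gene database and document id; it is not the published geneSmash or drugBase API.

      # Fetching a JSON document from a CouchDB database over its HTTP API.
      # The host, database name, and document id below are assumptions for
      # illustration, not the actual geneSmash/drugBase/HapMap-CN endpoints.
      import json
      import urllib.request

      def get_document(db, doc_id, host="http://localhost:5984"):
          """Return one CouchDB document as a Python dict."""
          url = f"{host}/{db}/{doc_id}"
          with urllib.request.urlopen(url) as resp:
              return json.load(resp)

      if __name__ == "__main__":
          gene = get_document("genes", "TP53")  # hypothetical database and id
          print(gene.get("symbol"), gene.get("annotations"))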

  5. Is there room for ethics within bioinformatics education?

    PubMed

    Taneri, Bahar

    2011-07-01

    When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective on bioinformatics education is provided, and the current status of ethics within existing bioinformatics programs is analyzed. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians rapidly and effectively shape the biomedical sciences, and hence their implications for society, a redesign of bioinformatics curricula is suggested here in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.

  6. Assessment of a Bioinformatics across Life Science Curricula Initiative

    ERIC Educational Resources Information Center

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  7. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    ERIC Educational Resources Information Center

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  8. Bioinformatics education dissemination with an evolutionary problem solving perspective.

    PubMed

    Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J

    2010-11-01

    Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education. PMID:21036947

  9. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    ERIC Educational Resources Information Center

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval and computer vision and bioinformatics domain. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  10. Quantum Bio-Informatics II From Quantum Information to Bio-Informatics

    NASA Astrophysics Data System (ADS)

    Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori

    2009-02-01

    The problem of quantum-like representation in economy, cognitive science, and genetics / L. Accardi, A. Khrennikov and M. Ohya -- Chaotic behavior observed in linear dynamics / M. Asano, T. Yamamoto and Y. Togawa -- Complete m-level quantum teleportation based on Kossakowski-Ohya scheme / M. Asano, M. Ohya and Y. Tanaka -- Towards quantum cybernetics: optimal feedback control in quantum bio-informatics / V. P. Belavkin -- Quantum entanglement and circulant states / D. Chruściński -- The compound Fock space and its application in brain models / K.-H. Fichtner and W. Freudenberg -- Characterisation of beam splitters / L. Fichtner and M. Gäbler -- Application of entropic chaos degree to a combined quantum baker's map / K. Inoue, M. Ohya and I. V. Volovich -- On quantum algorithm for multiple alignment of amino acid sequences / S. Iriyama and M. Ohya -- Quantum-like models for decision making in psychology and cognitive science / A. Khrennikov -- On completely positive non-Markovian evolution of a d-level system / A. Kossakowski and R. Rebolledo -- Measures of entanglement - a Hilbert space approach / W. A. Majewski -- Some characterizations of PPT states and their relation / T. Matsuoka -- On the dynamics of entanglement and characterization of entangling properties of quantum evolutions / M. Michalski -- Perspective from micro-macro duality - towards non-perturbative renormalization scheme / I. Ojima -- A simple symmetric algorithm using a likeness with Introns behavior in RNA sequences / M. Regoli -- Some aspects of quadratic generalized white noise functionals / Si Si and T. Hida -- Analysis of several social mobility data using measure of departure from symmetry / K. Tahata ... [et al.] -- Time in physics and life science / I. V. Volovich -- Note on entropies in quantum processes / N. Watanabe -- Basics of molecular simulation and its application to biomolecules / T. Ando and I. Yamato -- Theory of proton-induced superionic conduction in hydrogen-bonded systems

  11. Bioinformatics by Example: From Sequence to Target

    NASA Astrophysics Data System (ADS)

    Kossida, Sophia; Tahri, Nadia; Daizadeh, Iraj

    2002-12-01

    With the completion of the human genome, and the imminent completion of other large-scale sequencing and structure-determination projects, computer-assisted bioscience is poised to become the new paradigm for conducting basic and applied research. The availability of these additional bioinformatics tools stirs great anxiety among experimental researchers (as well as pedagogues), who are now faced with the need for a wider and deeper knowledge of differing disciplines (biology, chemistry, physics, mathematics, and computer science). This review targets those individuals who are interested in using computational methods in their teaching or research. By analyzing a real-life, pharmaceutical, multicomponent, target-based example, the reader will experience this fascinating new discipline.

  12. Wrapping and interoperating bioinformatics resources using CORBA.

    PubMed

    Stevens, R; Miller, C

    2000-02-01

    Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.
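
    The object-oriented "intermediate form" described above can be pictured, independently of CORBA itself, as a shared interface that every wrapped resource implements while hiding its native storage behind it. The sketch below uses a Python abstract base class as a stand-in for a CORBA IDL interface; both backends are invented examples, not any published bioinformatics service.

      # A common interface over diverse sequence resources, in the spirit of the
      # approach described above (Python stand-in for an IDL interface; the two
      # backends are hypothetical).
      from abc import ABC, abstractmethod

      class SequenceResource(ABC):
          @abstractmethod
          def get_sequence(self, accession: str) -> str:
              """Return the sequence stored under an accession."""

      class FlatFileResource(SequenceResource):
          def __init__(self, records: dict):
              self.records = records  # e.g. parsed from a local flat file
          def get_sequence(self, accession: str) -> str:
              return self.records[accession]

      class RemoteResource(SequenceResource):
          def get_sequence(self, accession: str) -> str:
              # A real wrapper would issue a network request here.
              raise NotImplementedError("remote fetch omitted in this sketch")

      def describe(resource: SequenceResource, accession: str) -> str:
          """Client code depends only on the shared interface, not on storage details."""
          return f"{accession}: {len(resource.get_sequence(accession))} residues"

      if __name__ == "__main__":
          local = FlatFileResource({"P12345": "MKTAYIAKQR"})
          print(describe(local, "P12345"))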

  13. Rapid Bioinformatic Identification of Thermostabilizing Mutations

    PubMed Central

    Sauer, David B.; Karpowich, Nathan K.; Song, Jin Mei; Wang, Da-Neng

    2015-01-01

    Ex vivo stability is a valuable protein characteristic but is laborious to improve experimentally. In addition to biopharmaceutical and industrial applications, stable protein is important for biochemical and structural studies. Taking advantage of the large number of available genomic sequences and growth temperature data, we present two bioinformatic methods to identify a limited set of amino acids or positions that likely underlie thermostability. Because these methods allow thousands of homologs to be examined in silico, they have the advantage of providing both speed and statistical power. Using these methods, we introduced, via mutation, amino acids from thermoadapted homologs into an exemplar mesophilic membrane protein, and demonstrated significantly increased thermostability while preserving protein activity. PMID:26445442
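
    Read as an algorithm, the approach above amounts to aligning homologs, binning them by the growth temperature of their source organism, and flagging alignment positions where thermophile-derived sequences prefer a different residue than mesophile-derived ones. The sketch below assumes a pre-computed alignment, invented growth temperatures, and an arbitrary 50 C cutoff; it is not the authors' published method or parameters.

      # Sketch: flag alignment columns where thermophile homologs prefer a different
      # residue than mesophile homologs. Sequences, temperatures, and the cutoff
      # are illustrative assumptions only.
      from collections import Counter

      aligned = {                      # pre-aligned homologs (equal length)
          "org_a": ("MKV-LE", 30.0),   # (sequence, growth temperature in C)
          "org_b": ("MKV-LE", 37.0),
          "org_c": ("MRVALD", 75.0),
          "org_d": ("MRVALD", 80.0),
      }

      def preferred_residues(cutoff=50.0):
          meso = [s for s, t in aligned.values() if t < cutoff]
          thermo = [s for s, t in aligned.values() if t >= cutoff]
          length = len(next(iter(aligned.values()))[0])
          suggestions = []
          for i in range(length):
              m = Counter(s[i] for s in meso).most_common(1)[0][0]
              h = Counter(s[i] for s in thermo).most_common(1)[0][0]
              if m != h and h != "-":
                  suggestions.append((i + 1, m, h))  # position, mesophile residue, thermophile residue
          return suggestions

      print(preferred_residues())  # [(2, 'K', 'R'), (4, '-', 'A'), (6, 'E', 'D')]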

  14. The European Bioinformatics Institute's data resources.

    PubMed

    Brooksbank, Catherine; Camon, Evelyn; Harris, Midori A; Magrane, Michele; Martin, Maria Jesus; Mulder, Nicola; O'Donovan, Claire; Parkinson, Helen; Tuli, Mary Ann; Apweiler, Rolf; Birney, Ewan; Brazma, Alvis; Henrick, Kim; Lopez, Rodrigo; Stoesser, Guenter; Stoehr, Peter; Cameron, Graham

    2003-01-01

    As the amount of biological data grows, so does the need for biologists to store and access this information in central repositories in a free and unambiguous manner. The European Bioinformatics Institute (EBI) hosts six core databases, which store information on DNA sequences (EMBL-Bank), protein sequences (SWISS-PROT and TrEMBL), protein structure (MSD), whole genomes (Ensembl) and gene expression (ArrayExpress). But just as a cell would be useless if it couldn't transcribe DNA or translate RNA, our resources would be compromised if each existed in isolation. We have therefore developed a range of tools that not only facilitate the deposition and retrieval of biological information, but also allow users to carry out searches that reflect the interconnectedness of biological information. The EBI's databases and tools are all available on our website at www.ebi.ac.uk. PMID:12519944

  15. Bioinformatics Analysis of Estrogen-Responsive Genes.

    PubMed

    Handel, Adam E

    2016-01-01

    Estrogen is a steroid hormone that plays critical roles in a myriad of intracellular pathways. The expression of many genes is regulated through the steroid hormone receptors ESR1 and ESR2. These bind to DNA and modulate the expression of target genes. Identification of estrogen target genes is greatly facilitated by the use of transcriptomic methods, such as RNA-seq and expression microarrays, and chromatin immunoprecipitation with massively parallel sequencing (ChIP-seq). Combining transcriptomic and ChIP-seq data enables a distinction to be drawn between direct and indirect estrogen target genes. This chapter discusses some methods of identifying estrogen target genes that do not require any expertise in programming languages or complex bioinformatics. PMID:26585125
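
    In its simplest form, the distinction drawn above between direct and indirect estrogen targets is a set intersection: genes that change expression and carry a nearby receptor ChIP-seq peak versus genes that change expression only. The gene lists in the sketch below are placeholders, not real analysis output.

      # Sketch: classify estrogen-responsive genes as direct (differentially
      # expressed AND bound by ESR1 in ChIP-seq) or indirect (expression change
      # only). Gene lists are placeholders for illustration.
      differentially_expressed = {"GREB1", "PGR", "MYC", "CCND1", "FOS"}
      esr1_bound = {"GREB1", "PGR", "CCND1", "TFF1"}

      direct_targets = differentially_expressed & esr1_bound
      indirect_targets = differentially_expressed - esr1_bound

      print("direct:", sorted(direct_targets))      # expression change + receptor binding
      print("indirect:", sorted(indirect_targets))  # expression change without nearby binding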

  16. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  17. The Foundations of Lifelong Health Are Built in Early Childhood

    ERIC Educational Resources Information Center

    National Forum on Early Childhood Policy and Programs, 2010

    2010-01-01

    A vital and productive society with a prosperous and sustainable future is built on a foundation of healthy child development. Health in the earliest years--beginning with the future mother's well-being before she becomes pregnant--lays the groundwork for a lifetime of vitality. When developing biological systems are strengthened by positive early…

  18. An Interactive Multimedia Learning Environment for VLSI Built with COSMOS

    ERIC Educational Resources Information Center

    Angelides, Marios C.; Agius, Harry W.

    2002-01-01

    This paper presents Bigger Bits, an interactive multimedia learning environment that teaches students about VLSI within the context of computer electronics. The system was built with COSMOS (Content Oriented semantic Modelling Overlay Scheme), which is a modelling scheme that we developed for enabling the semantic content of multimedia to be used…

  19. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  20. Comparison of Online and Onsite Bioinformatics Instruction for a Fully Online Bioinformatics Master’s Program

    PubMed Central

    Obom, Kristina M.; Cummings, Patrick J.

    2007-01-01

    The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses; however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation queries. Analysis of student exam performance using three assessments indicates that there was no significant difference in the grades earned by students in online and onsite courses. These results suggest that our model for online bioinformatics education provides students with a rigorous course of study that is comparable to onsite course instruction and possibly provides a more rigorous course load and more opportunities for participation. PMID:23653816

  1. Design of Wrapper Integration Within the DataFoundry Bioinformatics Application

    SciTech Connect

    Anderson, J; Critchlow, T

    2002-08-20

    The DataFoundry bioinformatics application was designed to enable scientists to interact directly with large datasets, gathered from multiple remote data sources, through a graphical, interactive interface. Gathering information from multiple data sources, integrating that data, and providing an interface to the accumulated data is non-trivial, and advanced techniques are required to develop a solution that adequately completes this task. One possible solution involves the use of specialized information access programs that retrieve information from a source and transmute it into a form usable by a single application. These information access programs, called wrappers, were determined to be the most appropriate way to extend the DataFoundry bioinformatics application to support data integration from multiple sources. By adding wrapper support to the DataFoundry application, it is hoped that this system will be able to provide scientists with a single access point to bioinformatics data. We describe the computer science concepts, the design, and the implementation of adding wrapper support to the DataFoundry bioinformatics application, and then discuss performance issues.
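
    A wrapper in the sense used above does two jobs: retrieve records in a source-specific format and transmute them into the single shape the integrating application expects. The sketch below is a minimal illustration with invented source formats and field names, not the DataFoundry implementation.

      # Minimal wrapper sketch: each wrapper knows one source's native record layout
      # and emits records in a single shared schema (accession, organism, length).
      # Source formats and field names are invented for illustration.
      def wrap_source_a(record):
          # source A stores tab-separated text: accession<TAB>organism<TAB>sequence
          acc, organism, seq = record.split("\t")
          return {"accession": acc, "organism": organism, "length": len(seq)}

      def wrap_source_b(record):
          # source B stores dicts with its own field names
          return {"accession": record["id"],
                  "organism": record["species"],
                  "length": record["seq_len"]}

      def integrate(raw_a, raw_b):
          """Single access point over both sources, as in the wrapper design above."""
          return [wrap_source_a(r) for r in raw_a] + [wrap_source_b(r) for r in raw_b]

      if __name__ == "__main__":
          a = ["Q9XYZ1\tHomo sapiens\tMKT"]
          b = [{"id": "P00001", "species": "Mus musculus", "seq_len": 120}]
          print(integrate(a, b))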

  2. The Zebrafish DVD Exchange Project: a bioinformatics initiative.

    PubMed

    Cooper, Mark S; Sommers-Herivel, Greg; Poage, Cara T; McCarthy, Matthew B; Crawford, Bryan D; Phillips, Carey

    2004-01-01

    Scientists who study zebrafish currently have an acute need to increase the rate of visual data exchange within their international community. Although the Internet has provided a revolutionary transformation of information exchange, it is at present unable to serve as a vehicle for the efficient exchange of massive amounts of visual information. Much like an overburdened public water system, the Internet has inherent limits to the services it can provide. It is possible, however, for zebrafishologists to develop and use virtual intranets (such as the approach we outlined in this chapter) to adapt to the growing informatics needs of our expanding research community. We need to assess qualitatively the economics of visual bioinformatics in our research community and evaluate the benefit:investment ratio of our collective information-sharing activities. The World Wide Web was developed in the early 1990s by particle physicists who needed to rapidly exchange visual information within their collaborations. However, because of current limitations in information bandwidth, the World Wide Web cannot be used to easily exchange gigabytes of visual information. The Zebrafish DVD Exchange Project is aimed at bypassing these limitations. Scientists are curiosity-driven tool makers as well as curiosity-driven tool users. We have the capacity to assimilate new tools, as well as to develop new innovations, to serve our collective research needs. As a proactive research community, we need to create new data transfer methodologies (e.g., the Zebrafish DVD Exchange Project) to stay ahead of our bioinformatics needs. PMID:15602926

  3. Bioinformatics for Diagnostics, Forensics, and Virulence Characterization and Detection

    SciTech Connect

    Gardner, S; Slezak, T

    2005-04-05

    We summarize four of our group's high-risk/high-payoff research projects funded by the Intelligence Technology Innovation Center (ITIC) in conjunction with our DHS-funded pathogen informatics activities. These are (1) quantitative assessment of genomic sequencing needs to predict high quality DNA and protein signatures for detection, and comparison of draft versus finished sequences for diagnostic signature prediction; (2) development of forensic software to identify SNP and PCR-RFLP variations from a large number of viral pathogen sequences and optimization of the selection of markers for maximum discrimination of those sequences; (3) prediction of signatures for the detection of virulence, antibiotic resistance, and toxin genes and genetic engineering markers in bacteria; (4) bioinformatic characterization of virulence factors to rapidly screen genomic data for potential genes with similar functions and to elucidate potential health threats in novel organisms. The results of (1) are being used by policy makers to set national sequencing priorities. Analyses from (2) are being used in collaborations with the CDC to genotype and characterize many variola strains, and reports from these collaborations have been made to the President. We also determined SNPs for serotype and strain discrimination of 126 foot and mouth disease virus (FMDV) genomes. For (3), currently >1000 probes have been predicted for the specific detection of >4000 virulence, antibiotic resistance, and genetic engineering vector sequences, and we expect to complete the bioinformatic design of a comprehensive ''virulence detection chip'' by August 2005. Results of (4) will be a system to rapidly predict potential virulence pathways and phenotypes in organisms based on their genomic sequences.

  4. Bioinformatic Analysis of HIV-1 Entry and Pathogenesis

    PubMed Central

    Aiamkitsumrit, Benjamas; Dampier, Will; Antell, Gregory; Rivera, Nina; Martin-Garcia, Julio; Pirrone, Vanessa; Nonnemacher, Michael R.; Wigdahl, Brian

    2015-01-01

    The evolution of human immunodeficiency virus type 1 (HIV-1) with respect to co-receptor utilization has been shown to be relevant to HIV-1 pathogenesis and disease. The CCR5-utilizing (R5) virus has been shown to be important in the very early stages of transmission and highly prevalent during asymptomatic infection and chronic disease. In addition, the R5 virus has been proposed to be involved in neuroinvasion and central nervous system (CNS) disease. In contrast, the CXCR4-utilizing (X4) virus is more prevalent during the course of disease progression and concurrent with the loss of CD4+ T cells. The dual-tropic virus is able to utilize both co-receptors (CXCR4 and CCR5) and has been thought to represent an intermediate transitional virus that possesses properties of both X4 and R5 viruses that can be encountered at many stages of disease. The use of computational tools and bioinformatic approaches in the prediction of HIV-1 co-receptor usage has been growing in importance with respect to understanding HIV-1 pathogenesis and disease, developing diagnostic tools, and improving the efficacy of therapeutic strategies focused on blocking viral entry. Current strategies have enhanced the sensitivity, specificity, and reproducibility relative to the prediction of co-receptor use; however, these technologies need to be improved with respect to their efficient and accurate use across the HIV-1 subtypes. The most effective approach may center on the combined use of different algorithms involving sequences within and outside of the env-V3 loop. This review focuses on the HIV-1 entry process and on co-receptor utilization, including bioinformatic tools utilized in the prediction of co-receptor usage. It also provides novel preliminary analyses for enabling identification of linkages between amino acids in V3 with other components of the HIV-1 genome and demonstrates that these linkages are different between X4 and R5 viruses. PMID:24862329
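
    Many sequence-based co-receptor predictors of the kind referenced above start from simple charge heuristics on the V3 loop. The sketch below implements one widely cited example, the classical "11/25 rule" (a basic residue at V3 position 11 or 25 suggests CXCR4 use); it is a minimal illustration only, far less accurate than the trained algorithms discussed in the review, and the example sequence is hypothetical.

```python
def predict_coreceptor_11_25(v3_sequence):
    """Toy X4/R5 call from a V3 loop amino acid sequence using the 11/25 charge rule."""
    basic_residues = {"R", "K"}  # arginine or lysine
    if len(v3_sequence) != 35:
        # the rule is usually stated for the canonical 35-residue V3 loop
        raise ValueError("expected a 35-residue V3 sequence")
    pos11, pos25 = v3_sequence[10], v3_sequence[24]  # 1-based positions 11 and 25
    return "X4 (CXCR4-using)" if {pos11, pos25} & basic_residues else "R5 (CCR5-using)"

# Hypothetical 35-residue V3 sequence
v3 = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
print(len(v3), predict_coreceptor_11_25(v3))
```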

  5. Built Environment Energy Analysis Tool Overview (Presentation)

    SciTech Connect

    Porter, C.

    2013-04-01

    This presentation provides an overview of the Built Environment Energy Analysis Tool, which is designed to assess impacts of future land use/built environment patterns on transportation-related energy use and greenhouse gas (GHG) emissions. The tool can be used to evaluate a range of population distribution and urban design scenarios for 2030 and 2050. This tool was produced as part of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  6. Knowledge from data in the built environment.

    PubMed

    Starkey, Christopher; Garvin, Chris

    2013-08-01

    Data feedback is changing our relationship to the built environment. Both traditional and new sources of data are developing rapidly, compelled by efforts to optimize the performance of human habitats. However, there are many obstacles to the successful implementation of information-centered environments that continue to hinder widespread adoption. This paper identifies these obstacles and challenges and describes emerging data-rich analytic techniques in infrastructure, buildings, and building portfolios. Further, it speculates on the impact that a robust data sphere may have on the built environment and posits that linkages to other data sets may enable paradigm shifts in sustainability and resiliency.

  7. Platform for a better built environment.

    PubMed

    Baillie, Jonathan

    2014-08-01

    IHEEM's recently established Architecture and Design of the Built Environment Technical Platform (ADBETP) is now firmly up and running, and, as one of its members, Gary Mortimer, general manager, Facilities & Estates, at NHS Grampian, puts it, is 'determined to bring tangible, positive, and sustainable benefits to the NHS built environment to support the effective delivery of changing clinical needs'. Equally, the Platform hopes its activities will 'add value to IHEEM members, technical professionals in health construction and operational management, and other healthcare professionals working in NHS buildings'.

  8. The microbiome of the built environment and mental health.

    PubMed

    Hoisington, Andrew J; Brenner, Lisa A; Kinney, Kerry A; Postolache, Teodor T; Lowry, Christopher A

    2015-12-17

    The microbiome of the built environment (MoBE) is a relatively new area of study. While some knowledge has been gained regarding impacts of the MoBE on the human microbiome and disease vulnerability, there is little knowledge of the impacts of the MoBE on mental health. Depending on the specific microbial species involved, the transfer of microorganisms from the built environment to occupant's cutaneous or mucosal membranes has the potential to increase or disrupt immunoregulation and/or exaggerate or suppress inflammation. Preclinical evidence highlighting the influence of the microbiota on systemic inflammation supports the assertion that microorganisms, including those originating from the built environment, have the potential to either increase or decrease the risk of inflammation-induced psychiatric conditions and their symptom severity. With advanced understanding of both the ecology of the built environment, and its influence on the human microbiome, it may be possible to develop bioinformed strategies for management of the built environment to promote mental health. Here we present a brief summary of microbiome research in both areas and highlight two interdependencies including the following: (1) effects of the MoBE on the human microbiome and (2) potential opportunities for manipulation of the MoBE in order to improve mental health. In addition, we propose future research directions including strategies for assessment of changes in the microbiome of common areas of built environments shared by multiple human occupants, and associated cohort-level changes in the mental health of those who spend time in the buildings. Overall, our understanding of the fields of both the MoBE and influence of host-associated microorganisms on mental health are advancing at a rapid pace and, if linked, could offer considerable benefit to health and wellness.

  9. Continuing Education Workshops in Bioinformatics Positively Impact Research and Careers

    PubMed Central

    Brazas, Michelle D.; Ouellette, B. F. Francis

    2016-01-01

    Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression. PMID:27281025

  10. Schooling Built on the Multiple Intelligences

    ERIC Educational Resources Information Center

    Kunkel, Christine D.

    2009-01-01

    This article features a school built on multiple intelligences. As the first multiple intelligences school in the world, the Key Learning Community shapes its students' days to include significant time in the musical, spatial and bodily-kinesthetic intelligences, as well as the more traditional areas of logical-mathematical and linguistics. In…

  11. Children in the Built Environment: A Bibliography.

    ERIC Educational Resources Information Center

    Department of Housing and Urban Development, Washington, DC. Office of International Affairs.

    Documents cited in this annotated bibliography focus on the often neglected problems of children in the "built environment": at home, at play, at school, and in the community. Twenty entries are from foreign countries; 74 are from the United States. It is hoped that these references will be useful to all who are interested in problems and programs…

  12. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background: Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results: To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy-evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions: PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive
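
    The dataflow idea described above (re-usable functions chained into a graph and mapped over pooled workers) can be sketched in a few lines of plain Python. The snippet below is not PaPy's actual API, only an illustration of the map-over-a-pool pattern with hypothetical worker functions.

```python
from multiprocessing import Pool

# Hypothetical stage functions standing in for real data transformations
def clean(record):
    return record.strip().upper()

def gc_content(seq):
    return seq, (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def run_pipeline(items, stages, processes=2, chunksize=4):
    """Apply each stage to every item, evaluating each stage on a worker pool."""
    with Pool(processes) as pool:
        for stage in stages:
            items = pool.map(stage, items, chunksize=chunksize)
    return items

if __name__ == "__main__":
    reads = [" acgtacgtgg ", "ttaacgcgta", "ggggcccc"]
    print(run_pipeline(reads, [clean, gc_content]))
```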

  13. Thriving in multidisciplinary research: advice for new bioinformatics students.

    PubMed

    Auerbach, Raymond K

    2012-09-01

    The sciences have seen a large increase in demand for students in bioinformatics and multidisciplinary fields in general. Many new educational programs have been created to satisfy this demand, but navigating these programs requires a non-traditional outlook and emphasizes working in teams of individuals with distinct yet complementary skill sets. Written from the perspective of a current bioinformatics student, this article seeks to offer advice to prospective and current students in bioinformatics regarding what to expect in their educational program, how multidisciplinary fields differ from more traditional paths, and decisions that they will face on the road to becoming successful, productive bioinformaticists.

  14. Bioinformatic tools for microRNA dissection

    PubMed Central

    Akhtar, Most Mauluda; Micolucci, Luigina; Islam, Md Soriful; Olivieri, Fabiola; Procopio, Antonio Domenico

    2016-01-01

    Recently, microRNAs (miRNAs) have emerged as important elements of gene regulatory networks. MiRNAs are endogenous single-stranded non-coding RNAs (∼22-nt long) that regulate gene expression at the post-transcriptional level. Through pairing with mRNA, miRNAs can down-regulate gene expression by inhibiting translation or stimulating mRNA degradation. In some cases they can also up-regulate the expression of a target gene. MiRNAs influence a variety of cellular pathways that range from development to carcinogenesis. The involvement of miRNAs in several human diseases, particularly cancer, makes them potential diagnostic and prognostic biomarkers. Recent technological advances, especially high-throughput sequencing, have led to an exponential growth in the generation of miRNA-related data. A number of bioinformatic tools and databases have been devised to manage this growing body of data. We analyze 129 miRNA tools that are being used in diverse areas of miRNA research, to assist investigators in choosing the most appropriate tools for their needs. PMID:26578605

  15. Bacterial bioinformatics: pathogenesis and the genome.

    PubMed

    Paine, Kelly; Flower, Darren R

    2002-07-01

    As the number of completed microbial genome sequences continues to grow, there is a pressing need for the exploitation of this wealth of data through a synergistic interaction between the well-established science of bacteriology and the emergent discipline of bioinformatics. Antibiotic resistance and pathogenicity in virulent bacteria have become an increasing problem, with even the strongest drugs useless against some species, such as multi-drug resistant Enterococcus faecium and Mycobacterium tuberculosis. The global spread of Human Immunodeficiency Virus (HIV) and Acquired Immune Deficiency Syndrome (AIDS) has contributed to the re-emergence of tuberculosis and the threat from new and emergent diseases. To address these problems, bacterial pathogenicity requires redefinition as Koch's postulates become obsolete. This review discusses how the use of bacterial genomic information, and the in silico tools available at present, may aid in determining the definition of a current pathogen. The combination of both fields should provide a rapid and efficient way of assisting in the future development of antimicrobial therapies. PMID:12125816

  16. Identifying human MHC supertypes using bioinformatic methods.

    PubMed

    Doytchinova, Irini A; Guan, Pingping; Flower, Darren R

    2004-04-01

    Classification of MHC molecules into supertypes in terms of peptide-binding specificities is an important issue, with direct implications for the development of epitope-based vaccines with wide population coverage. In view of extremely high MHC polymorphism (948 class I and 633 class II HLA alleles) the experimental solution of this task is presently impossible. In this study, we describe a bioinformatics strategy for classifying MHC molecules into supertypes using information drawn solely from three-dimensional protein structure. Two chemometric techniques, hierarchical clustering and principal component analysis, were used independently on a set of 783 HLA class I molecules to identify supertypes based on structural similarities and molecular interaction fields calculated for the peptide binding site. Eight supertypes were defined: A2, A3, A24, B7, B27, B44, C1, and C4. The two techniques gave 77% consensus, i.e., 605 HLA class I alleles were classified in the same supertype by both methods. The proposed strategy allowed "supertype fingerprints" to be identified. Thus, the A2 supertype fingerprint is Tyr(9)/Phe(9), Arg(97), and His(114) or Tyr(116); the A3-Tyr(9)/Phe(9)/Ser(9), Ile(97)/Met(97) and Glu(114) or Asp(116); the A24-Ser(9) and Met(97); the B7-Asn(63) and Leu(81); the B27-Glu(63) and Leu(81); for B44-Ala(81); the C1-Ser(77); and the C4-Asn(77). PMID:15034046
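
    For readers unfamiliar with the two chemometric techniques mentioned, the sketch below shows how hierarchical clustering and PCA might be applied independently to a descriptor matrix of HLA binding-site properties. The data here are random placeholders, not the molecular interaction fields used in the study; eight clusters are requested only to mirror the number of supertypes reported.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Placeholder matrix: rows = HLA class I alleles, columns = binding-site descriptors
X = rng.normal(size=(30, 50))

# Technique 1: agglomerative clustering, cut into eight candidate supertypes
tree = linkage(X, method="average", metric="euclidean")
supertype_labels = fcluster(tree, t=8, criterion="maxclust")

# Technique 2: principal component analysis of the same descriptor space
pc_scores = PCA(n_components=2).fit_transform(X)

print(supertype_labels)
print(pc_scores[:3])
```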

  17. Bioinformatic tools for microRNA dissection.

    PubMed

    Akhtar, Most Mauluda; Micolucci, Luigina; Islam, Md Soriful; Olivieri, Fabiola; Procopio, Antonio Domenico

    2016-01-01

    Recently, microRNAs (miRNAs) have emerged as important elements of gene regulatory networks. MiRNAs are endogenous single-stranded non-coding RNAs (~22-nt long) that regulate gene expression at the post-transcriptional level. Through pairing with mRNA, miRNAs can down-regulate gene expression by inhibiting translation or stimulating mRNA degradation. In some cases they can also up-regulate the expression of a target gene. MiRNAs influence a variety of cellular pathways that range from development to carcinogenesis. The involvement of miRNAs in several human diseases, particularly cancer, makes them potential diagnostic and prognostic biomarkers. Recent technological advances, especially high-throughput sequencing, have led to an exponential growth in the generation of miRNA-related data. A number of bioinformatic tools and databases have been devised to manage this growing body of data. We analyze 129 miRNA tools that are being used in diverse areas of miRNA research, to assist investigators in choosing the most appropriate tools for their needs.

  18. Profiling, Bioinformatic, and Functional Data on the Developing Olfactory/GnRH System Reveal Cellular and Molecular Pathways Essential for This Process and Potentially Relevant for the Kallmann Syndrome

    PubMed Central

    Garaffo, Giulia; Provero, Paolo; Molineris, Ivan; Pinciroli, Patrizia; Peano, Clelia; Battaglia, Cristina; Tomaiuolo, Daniela; Etzion, Talya; Gothilf, Yoav; Santoro, Massimo; Merlo, Giorgio R.

    2013-01-01

    During embryonic development, immature neurons in the olfactory epithelium (OE) extend axons through the nasal mesenchyme, to contact projection neurons in the olfactory bulb. Axon navigation is accompanied by migration of the GnRH+ neurons, which enter the anterior forebrain and home in the septo-hypothalamic area. This process can be interrupted at various points and lead to the onset of the Kallmann syndrome (KS), a disorder characterized by anosmia and central hypogonadotropic hypogonadism. Several genes have been identified in humans and mice that cause KS or a KS-like phenotype. In mice a set of transcription factors appears to be required for olfactory connectivity and GnRH neuron migration; thus we explored the transcriptional network underlying this developmental process by profiling the OE and the adjacent mesenchyme at three embryonic ages. We also profiled the OE from embryos null for Dlx5, a homeogene that causes a KS-like phenotype when deleted. We identified 20 interesting genes belonging to the following categories: (1) transmembrane adhesion/receptor, (2) axon-glia interaction, (3) scaffold/adapter for signaling, (4) synaptic proteins. We tested some of them in zebrafish embryos: the depletion of five (of six) Dlx5 targets affected axonal extension and targeting, while three (of three) affected GnRH neuron position and neurite organization. Thus, we confirmed the importance of cell–cell and cell-matrix interactions and identified new molecules needed for olfactory connection and GnRH neuron migration. Using available and newly generated data, we predicted/prioritized putative KS-disease genes, by building conserved co-expression networks with all known disease genes in human and mouse. The results show the overall validity of approaches based on high-throughput data and predictive bioinformatics to identify genes potentially relevant for the molecular pathogenesis of KS. A number of candidates will be discussed, that should be tested in future
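
    The co-expression network step mentioned at the end of the abstract can be approximated with a simple correlation-and-threshold approach. The sketch below, with invented expression values and gene names, ranks candidate genes by how many known disease genes they are strongly co-expressed with; it is not the authors' pipeline, only an illustration of the general idea.

```python
import numpy as np

def prioritize_candidates(expr, genes, known_disease_genes, threshold=0.8):
    """Rank non-disease genes by their co-expression links to known disease genes."""
    corr = np.corrcoef(expr)                 # genes x genes correlation matrix
    index = {g: i for i, g in enumerate(genes)}
    known_idx = [index[g] for g in known_disease_genes if g in index]
    scores = {}
    for gene, i in index.items():
        if gene in known_disease_genes:
            continue
        scores[gene] = int(np.sum(np.abs(corr[i, known_idx]) >= threshold))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented matrix: 5 genes x 6 samples
genes = ["Dlx5", "GeneA", "GeneB", "GeneC", "GeneD"]
expr = np.random.default_rng(1).normal(size=(5, 6))
print(prioritize_candidates(expr, genes, known_disease_genes={"Dlx5"}))
```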

  19. Profiling, Bioinformatic, and Functional Data on the Developing Olfactory/GnRH System Reveal Cellular and Molecular Pathways Essential for This Process and Potentially Relevant for the Kallmann Syndrome.

    PubMed

    Garaffo, Giulia; Provero, Paolo; Molineris, Ivan; Pinciroli, Patrizia; Peano, Clelia; Battaglia, Cristina; Tomaiuolo, Daniela; Etzion, Talya; Gothilf, Yoav; Santoro, Massimo; Merlo, Giorgio R

    2013-01-01

    During embryonic development, immature neurons in the olfactory epithelium (OE) extend axons through the nasal mesenchyme, to contact projection neurons in the olfactory bulb. Axon navigation is accompanied by migration of the GnRH+ neurons, which enter the anterior forebrain and home in the septo-hypothalamic area. This process can be interrupted at various points and lead to the onset of the Kallmann syndrome (KS), a disorder characterized by anosmia and central hypogonadotropic hypogonadism. Several genes have been identified in humans and mice that cause KS or a KS-like phenotype. In mice a set of transcription factors appears to be required for olfactory connectivity and GnRH neuron migration; thus we explored the transcriptional network underlying this developmental process by profiling the OE and the adjacent mesenchyme at three embryonic ages. We also profiled the OE from embryos null for Dlx5, a homeogene that causes a KS-like phenotype when deleted. We identified 20 interesting genes belonging to the following categories: (1) transmembrane adhesion/receptor, (2) axon-glia interaction, (3) scaffold/adapter for signaling, (4) synaptic proteins. We tested some of them in zebrafish embryos: the depletion of five (of six) Dlx5 targets affected axonal extension and targeting, while three (of three) affected GnRH neuron position and neurite organization. Thus, we confirmed the importance of cell-cell and cell-matrix interactions and identified new molecules needed for olfactory connection and GnRH neuron migration. Using available and newly generated data, we predicted/prioritized putative KS-disease genes, by building conserved co-expression networks with all known disease genes in human and mouse. The results show the overall validity of approaches based on high-throughput data and predictive bioinformatics to identify genes potentially relevant for the molecular pathogenesis of KS. A number of candidates will be discussed, that should be tested in future

  20. Survey of Natural Language Processing Techniques in Bioinformatics.

    PubMed

    Zeng, Zhiqiang; Shi, Hua; Wu, Yun; Hong, Zhiling

    2015-01-01

    Informatics methods, such as text mining and natural language processing, are always involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for knowledge on biology, retrieve references using text mining methods, and reconstruct databases. For example, protein-protein interactions and gene-disease relationship can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function, detecting noncoding RNA. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers.
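
    A minimal flavor of the literature-mining task described above is sentence-level co-occurrence of gene and disease names. The sketch below uses tiny hypothetical dictionaries; production systems rely on curated lexicons, named-entity recognition, and relation classification rather than this naive matching.

```python
import re

# Hypothetical lexicons; real pipelines use curated gene and disease thesauri
GENES = {"BRCA1", "TP53", "EGFR"}
DISEASES = {"breast cancer", "lung cancer"}

def gene_disease_cooccurrences(abstract_text):
    """Return (gene, disease) pairs that co-occur within the same sentence."""
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", abstract_text):
        genes_found = {g for g in GENES if re.search(rf"\b{g}\b", sentence)}
        diseases_found = {d for d in DISEASES if d in sentence.lower()}
        pairs.update((g, d) for g in genes_found for d in diseases_found)
    return pairs

text = "Germline BRCA1 variants predispose to breast cancer. EGFR is discussed separately."
print(gene_disease_cooccurrences(text))
```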

  1. Can we integrate bioinformatics data on the Internet?

    PubMed

    Martin, A C

    2001-09-01

    The NETTAB (Network Tools and Applications in Biology) 2001 Workshop entitled 'CORBA and XML: towards a bioinformatics-integrated network environment' was held at the Advanced Biotechnology Centre, Genoa, Italy, 17-18 May 2001.

  2. Metagenomics and Bioinformatics in Microbial Ecology: Current Status and Beyond

    PubMed Central

    Hiraoka, Satoshi; Yang, Ching-chia; Iwasaki, Wataru

    2016-01-01

    Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives. PMID:27383682

  3. Metagenomics and Bioinformatics in Microbial Ecology: Current Status and Beyond.

    PubMed

    Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru

    2016-09-29

    Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives. PMID:27383682

  4. Bioinformatics opportunities for identification and study of medicinal plants

    PubMed Central

    Sharma, Vivekanand

    2013-01-01

    Plants have been used as a source of medicine since historic times and several commercially important drugs are of plant-based origin. The traditional approach towards discovery of plant-based drugs often times involves significant amount of time and expenditure. These labor-intensive approaches have struggled to keep pace with the rapid development of high-throughput technologies. In the era of high volume, high-throughput data generation across the biosciences, bioinformatics plays a crucial role. This has generally been the case in the context of drug designing and discovery. However, there has been limited attention to date to the potential application of bioinformatics approaches that can leverage plant-based knowledge. Here, we review bioinformatics studies that have contributed to medicinal plants research. In particular, we highlight areas in medicinal plant research where the application of bioinformatics methodologies may result in quicker and potentially cost-effective leads toward finding plant-based remedies. PMID:22589384

  5. Survey of Natural Language Processing Techniques in Bioinformatics

    PubMed Central

    Zeng, Zhiqiang; Shi, Hua; Wu, Yun; Hong, Zhiling

    2015-01-01

    Informatics methods, such as text mining and natural language processing, are always involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for knowledge on biology, retrieve references using text mining methods, and reconstruct databases. For example, protein-protein interactions and gene-disease relationship can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function, detecting noncoding RNA. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers. PMID:26525745

  6. Bioconductor: open software development for computational biology and bioinformatics

    PubMed Central

    Gentleman, Robert C; Carey, Vincent J; Bates, Douglas M; Bolstad, Ben; Dettling, Marcel; Dudoit, Sandrine; Ellis, Byron; Gautier, Laurent; Ge, Yongchao; Gentry, Jeff; Hornik, Kurt; Hothorn, Torsten; Huber, Wolfgang; Iacus, Stefano; Irizarry, Rafael; Leisch, Friedrich; Li, Cheng; Maechler, Martin; Rossini, Anthony J; Sawitzki, Gunther; Smith, Colin; Smyth, Gordon; Tierney, Luke; Yang, Jean YH; Zhang, Jianhua

    2004-01-01

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples. PMID:15461798

  7. Measuring the Built Environment for Physical Activity

    PubMed Central

    Brownson, Ross C.; Hoehner, Christine M.; Day, Kristen; Forsyth, Ann; Sallis, James F.

    2009-01-01

    Physical inactivity is one of the most important public health issues in the U.S. and internationally. Increasingly, links are being identified between various elements of the physical—or built—environment and physical activity. To understand the impact of the built environment on physical activity, the development of high-quality measures is essential. Three categories of built environment data are being used: (1) perceived measures obtained by telephone interview or self-administered questionnaires; (2) observational measures obtained using systematic observational methods (audits); and (3) archival data sets that are often layered and analyzed with GIS. This review provides a critical assessment of these three types of built-environment measures relevant to the study of physical activity. Among perceived measures, 19 questionnaires were reviewed, ranging in length from 7 to 68 questions. Twenty audit tools were reviewed that cover community environments (i.e., neighborhoods, cities), parks, and trails. For GIS-derived measures, more than 50 studies were reviewed. A large degree of variability was found in the operationalization of common GIS measures, which include population density, land-use mix, access to recreational facilities, and street pattern. This first comprehensive examination of built-environment measures demonstrates considerable progress over the past decade, showing diverse environmental variables available that use multiple modes of assessment. Most can be considered first-generation measures, so further development is needed. In particular, further research is needed to improve the technical quality of measures, understand the relevance to various population groups, and understand the utility of measures for science and public health. PMID:19285216
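
    Of the GIS-derived measures listed, land-use mix is often operationalized as an entropy index over the share of area in each land-use category (values near 1 indicate an even mix). The function below is a sketch of that common formulation with invented areas; as the review notes, the exact operationalization varies considerably across studies.

```python
import math

def landuse_mix_entropy(area_by_use):
    """Entropy-based land-use mix index in [0, 1] for a buffer or neighborhood."""
    areas = [a for a in area_by_use.values() if a > 0]
    total, k = sum(areas), len(areas)
    if k < 2:
        return 0.0  # a single land use means no mix
    return -sum((a / total) * math.log(a / total) for a in areas) / math.log(k)

# Invented example: shares of parcel area within a 1 km buffer
print(round(landuse_mix_entropy({"residential": 40.0, "commercial": 30.0, "park": 30.0}), 3))
```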

  8. The built environment and mental health.

    PubMed

    Evans, Gary W

    2003-12-01

    The built environment has direct and indirect effects on mental health. High-rise housing is inimical to the psychological well-being of women with young children. Poor-quality housing appears to increase psychological distress, but methodological issues make it difficult to draw clear conclusions. Mental health of psychiatric patients has been linked to design elements that affect their ability to regulate social interaction (e.g., furniture configuration, privacy). Alzheimer's patients adjust better to small-scale, homier facilities that also have lower levels of stimulation. They are also better adjusted in buildings that accommodate physical wandering. Residential crowding (number of people per room) and loud exterior noise sources (e.g., airports) elevate psychological distress but do not produce serious mental illness. Malodorous air pollutants heighten negative affect, and some toxins (e.g., lead, solvents) cause behavioral disturbances (e.g., self-regulatory ability, aggression). Insufficient daylight is reliably associated with increased depressive symptoms. Indirectly, the physical environment may influence mental health by altering psychosocial processes with known mental health sequelae. Personal control, socially supportive relationships, and restoration from stress and fatigue are all affected by properties of the built environment. More prospective, longitudinal studies and, where feasible, randomized experiments are needed to examine the potential role of the physical environment in mental health. Even more challenging is the task of developing underlying models of how the built environment can affect mental health. It is also likely that some individuals may be more vulnerable to mental health impacts of the built environment. Because exposure to poor environmental conditions is not randomly distributed and tends to concentrate among the poor and ethnic minorities, we also need to focus more attention on the health implications of multiple

  9. Built-Environment Wind Turbine Roadmap

    SciTech Connect

    Smith, J.; Forsyth, T.; Sinclair, K.; Oteri, F.

    2012-11-01

    Although only a small contributor to total electricity production needs, built-environment wind turbines (BWTs) nonetheless have the potential to influence the public's consideration of renewable energy, and wind energy in particular. Higher population concentrations in urban environments offer greater opportunities for project visibility and an opportunity to acquaint large numbers of people to the advantages of wind projects on a larger scale. However, turbine failures will be equally visible and could have a negative effect on public perception of wind technology. This roadmap provides a framework for achieving the vision set forth by the attendees of the Built-Environment Wind Turbine Workshop on August 11 - 12, 2010, at the U.S. Department of Energy's National Renewable Energy Laboratory. The BWT roadmap outlines the stakeholder actions that could be taken to overcome the barriers identified. The actions are categorized as near-term (0 - 3 years), medium-term (4 - 7 years), and both near- and medium-term (requiring immediate to medium-term effort). To accomplish these actions, a strategic approach was developed that identifies two focus areas: understanding the built-environment wind resource and developing testing and design standards. The authors summarize the expertise and resources required in these areas.

  10. 2. EAST ELEVATION OF IPA FACTORY; TWO-STORY SECTION BUILT IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. EAST ELEVATION OF IPA FACTORY; TWO-STORY SECTION BUILT IN 1892 AND PARTIALLY DESTROYED PARAPET SECTION BUILT CA. 1948. BRICK CHIMNEY ALSO BUILT CA. 1948. - Illinois Pure Aluminum Company, 109 Holmes Street, Lemont, Cook County, IL

  11. One Bedroom Units: Floor Plan, South Elevation (As Built), North ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    One Bedroom Units: Floor Plan, South Elevation (As Built), North Elevation (As Built), Section A-A (As Built), Section AA (Existing) - Aluminum City Terrace, East Hill Drive, New Kensington, Westmoreland County, PA

  12. Bioinformatics Analysis of MAPKKK Family Genes in Medicago truncatula.

    PubMed

    Li, Wei; Xu, Hanyun; Liu, Ying; Song, Lili; Guo, Changhong; Shu, Yongjun

    2016-04-04

    Mitogen-activated protein kinase kinase kinase (MAPKKK) is a component of the MAPK cascade pathway that plays an important role in plant growth, development, and response to abiotic stress, the functions of which have been well characterized in several plant species, such as Arabidopsis, rice, and maize. In this study, we performed genome-wide and systemic bioinformatics analysis of MAPKKK family genes in Medicago truncatula. In total, there were 73 MAPKKK family members identified by search of homologs, and they were classified into three subfamilies, MEKK, ZIK, and RAF. Based on the genomic duplication function, 72 MtMAPKKK genes were located throughout all chromosomes, but they cluster in different chromosomes. Using microarray data and high-throughput sequencing data, we assessed their expression profiles in growth and development processes; these results provided evidence for exploring their important functions in developmental regulation, especially in the nodulation process. Furthermore, we investigated their expression in abiotic stresses by RNA-seq, which confirmed their critical roles in signal transduction and regulation processes under stress. In summary, our genome-wide, systemic characterization and expressional analysis of MtMAPKKK genes will provide insights that will be useful for characterizing the molecular functions of these genes in M. truncatula.
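
    Subfamily assignment of the kind described above is typically based on homology searches plus conserved kinase-domain signatures. The sketch below shows the general pattern-matching step with placeholder regular expressions; the patterns are illustrative stand-ins, not the published MEKK/ZIK/RAF signature motifs, and the input sequence is invented.

```python
import re

# Placeholder signature patterns -- illustrative only, not the published subfamily motifs
SUBFAMILY_SIGNATURES = {
    "MEKK": re.compile(r"G[TS]P..MAPEV"),
    "ZIK":  re.compile(r"GTPEFMAPE"),
    "RAF":  re.compile(r"GT..[WY]MAPE"),
}

def assign_subfamily(protein_sequence):
    """Assign a candidate MAPKKK to a subfamily by scanning for a signature motif."""
    for subfamily, pattern in SUBFAMILY_SIGNATURES.items():
        if pattern.search(protein_sequence):
            return subfamily
    return "unclassified"

print(assign_subfamily("MSTNGTPAYMAPEVLQKSRA"))  # invented fragment
```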

  13. Bioinformatics Analysis of MAPKKK Family Genes in Medicago truncatula

    PubMed Central

    Li, Wei; Xu, Hanyun; Liu, Ying; Song, Lili; Guo, Changhong; Shu, Yongjun

    2016-01-01

    Mitogen-activated protein kinase kinase kinase (MAPKKK) is a component of the MAPK cascade pathway that plays an important role in plant growth, development, and response to abiotic stress, the functions of which have been well characterized in several plant species, such as Arabidopsis, rice, and maize. In this study, we performed genome-wide and systemic bioinformatics analysis of MAPKKK family genes in Medicago truncatula. In total, there were 73 MAPKKK family members identified by search of homologs, and they were classified into three subfamilies, MEKK, ZIK, and RAF. Based on the genomic duplication function, 72 MtMAPKKK genes were located throughout all chromosomes, but they cluster in different chromosomes. Using microarray data and high-throughput sequencing data, we assessed their expression profiles in growth and development processes; these results provided evidence for exploring their important functions in developmental regulation, especially in the nodulation process. Furthermore, we investigated their expression in abiotic stresses by RNA-seq, which confirmed their critical roles in signal transduction and regulation processes under stress. In summary, our genome-wide, systemic characterization and expressional analysis of MtMAPKKK genes will provide insights that will be useful for characterizing the molecular functions of these genes in M. truncatula. PMID:27049397

  14. Georgia Power Company solar as-built drawings (Engineering Materials)

    SciTech Connect

    Not Available

    1980-04-30

    This package consists of a set of sepia reproducibles of the Final As-Built document of the Georgia Power Company solar heating and cooling system. The system utilizes 1482 DEL concentrating parabolic trough collectors (23,712 ft²). The solar array is designed as a combination of series and parallel circuits circulating water as the heat transfer fluid. The system will displace 18.6% of the heating and cooling energy for the building. Reference DOE/AL/12548--T1.

  15. The MIGenAS integrated bioinformatics toolkit for web-based sequence analysis

    PubMed Central

    Rampp, Markus; Soddemann, Thomas; Lederer, Hermann

    2006-01-01

    We describe a versatile and extensible integrated bioinformatics toolkit for the analysis of biological sequences over the Internet. The web portal offers convenient interactive access to a growing pool of chainable bioinformatics software tools and databases that are centrally installed and maintained by the RZG. Currently, supported tasks comprise sequence similarity searches in public or user-supplied databases, computation and validation of multiple sequence alignments, phylogenetic analysis and protein–structure prediction. Individual tools can be seamlessly chained into pipelines allowing the user to conveniently process complex workflows without the necessity to take care of any format conversions or tedious parsing of intermediate results. The toolkit is part of the Max-Planck Integrated Gene Analysis System (MIGenAS) of the Max Planck Society available at (click ‘Start Toolkit’). PMID:16844980

  16. Standardizing the next generation of bioinformatics software development with BioHDF (HDF5).

    PubMed

    Mason, Christopher E; Zumbo, Paul; Sanders, Stephan; Folk, Mike; Robinson, Dana; Aydt, Ruth; Gollery, Martin; Welsh, Mark; Olson, N Eric; Smith, Todd M

    2010-01-01

    Next Generation Sequencing technologies are limited by the lack of standard bioinformatics infrastructures that can reduce data storage, increase data processing performance, and integrate diverse information. HDF technologies address these requirements and have a long history of use in data-intensive science communities. They include general data file formats, libraries, and tools for working with the data. Compared to emerging standards, such as the SAM/BAM formats, HDF5-based systems demonstrate significantly better scalability, can support multiple indexes, store multiple data types, and are self-describing. For these reasons, HDF5 and its BioHDF extension are well suited for implementing data models to support the next generation of bioinformatics applications. PMID:20865556
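
    To make the HDF5 point concrete, the snippet below stores a few invented sequencing reads in compressed datasets with the h5py library and reads one back. This is a generic HDF5 usage sketch, not the BioHDF data model or file layout.

```python
import h5py
import numpy as np

# Invented read records
read_ids = np.array([b"read_0001", b"read_0002"], dtype="S16")
sequences = np.array([b"ACGTACGTAC", b"TTGCAATGGC"], dtype="S64")

with h5py.File("reads_demo.h5", "w") as f:
    grp = f.create_group("reads")
    grp.create_dataset("id", data=read_ids, compression="gzip")
    grp.create_dataset("sequence", data=sequences, compression="gzip")
    grp.attrs["description"] = "toy example, not the BioHDF schema"

with h5py.File("reads_demo.h5", "r") as f:
    print(f["reads/id"][0], f["reads/sequence"][0])
```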

  17. Proteomic and bioinformatic analyses of spinal cord injury-induced skeletal muscle atrophy in rats

    PubMed Central

    WEI, ZHI-JIAN; ZHOU, XIAN-HU; FAN, BAO-YOU; LIN, WEI; REN, YI-MING; FENG, SHI-QING

    2016-01-01

    Spinal cord injury (SCI) may result in skeletal muscle atrophy. Identifying diagnostic biomarkers and effective targets for treatment is an important challenge in clinical work. The aim of the present study is to elucidate potential biomarkers and therapeutic targets for SCI-induced muscle atrophy (SIMA) using proteomic and bioinformatic analyses. The protein samples from rat soleus muscle were collected at different time points following SCI injury and separated by two-dimensional gel electrophoresis and compared with the sham group. The identities of these protein spots were analyzed by mass spectrometry (MS). MS demonstrated that 20 proteins associated with muscle atrophy were differentially expressed. Bioinformatic analyses indicated that SIMA changed the expression of proteins associated with cellular, developmental, immune system and metabolic processes, biological adhesion and localization. The results of the present study may be beneficial in understanding the molecular mechanisms of SIMA and elucidating potential biomarkers and targets for the treatment of muscle atrophy. PMID:27177391

  18. Bioinformatics analysis of circulating cell-free DNA sequencing data.

    PubMed

    Chan, Landon L; Jiang, Peiyong

    2015-10-01

    The discovery of cell-free DNA molecules in plasma has opened up numerous opportunities in noninvasive diagnosis. Cell-free DNA molecules have become increasingly recognized as promising biomarkers for detection and management of many diseases. The advent of next generation sequencing has provided unprecedented opportunities to scrutinize the characteristics of cell-free DNA molecules in plasma in a genome-wide fashion and at single-base resolution. Consequently, clinical applications of circulating cell-free DNA analysis have not only revolutionized noninvasive prenatal diagnosis but also facilitated cancer detection and monitoring toward an era of blood-based personalized medicine. With the remarkably increasing throughput and lowering cost of next generation sequencing, bioinformatics analysis becomes increasingly demanding to understand the large amount of data generated by these sequencing platforms. In this Review, we highlight the major bioinformatics algorithms involved in the analysis of cell-free DNA sequencing data. Firstly, we briefly describe the biological properties of these molecules and provide an overview of the general bioinformatics approach for the analysis of cell-free DNA. Then, we discuss the specific upstream bioinformatics considerations concerning the analysis of sequencing data of circulating cell-free DNA, followed by further detailed elaboration on each key clinical situation in noninvasive prenatal diagnosis and cancer management where downstream bioinformatics analysis is heavily involved. We also discuss bioinformatics analysis as well as clinical applications of the newly developed massively parallel bisulfite sequencing of cell-free DNA. Finally, we offer our perspectives on the future development of bioinformatics in noninvasive diagnosis.
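
    One of the core downstream computations in noninvasive prenatal testing is a per-chromosome representation z-score, comparing a test sample's read fractions to a panel of presumed-euploid references. The function below is a simplified sketch of that widely described idea, with made-up count dictionaries, and omits the GC correction and bin-level processing that real pipelines apply.

```python
import numpy as np

def chromosome_zscores(sample_counts, reference_counts):
    """z-score of each chromosome's read fraction versus a reference panel."""
    chroms = sorted(sample_counts)

    def fractions(counts):
        total = float(sum(counts[c] for c in chroms))
        return np.array([counts[c] / total for c in chroms])

    sample_frac = fractions(sample_counts)
    ref_fracs = np.array([fractions(r) for r in reference_counts])
    mean, sd = ref_fracs.mean(axis=0), ref_fracs.std(axis=0, ddof=1)
    return dict(zip(chroms, (sample_frac - mean) / sd))

# Made-up counts for three chromosomes and three reference samples
refs = [{"chr13": 9800, "chr18": 7900, "chr21": 4000},
        {"chr13": 9900, "chr18": 8000, "chr21": 4050},
        {"chr13": 9750, "chr18": 7950, "chr21": 3980}]
sample = {"chr13": 9800, "chr18": 7950, "chr21": 4400}  # elevated chr21 fraction
print(chromosome_zscores(sample, refs))
```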

  19. From Molecules to Patients: The Clinical Applications of Translational Bioinformatics

    PubMed Central

    Regan, K.

    2015-01-01

    Summary. Objective: In order to realize the promise of personalized medicine, Translational Bioinformatics (TBI) research will need to continue to address implementation issues across the clinical spectrum. In this review, we aim to evaluate the expanding field of TBI towards clinical applications, and define common themes and current gaps in order to motivate future research. Methods: Here we present the state-of-the-art of clinical implementation of TBI-based tools and resources. Our thematic analyses of a targeted literature search of recent TBI-related articles ranged across topics in genomics, data management, hypothesis generation, molecular epidemiology, diagnostics, therapeutics and personalized medicine. Results: Open areas of clinically-relevant TBI research identified in this review include developing data standards and best practices, publicly available resources, integrative systems-level approaches, user-friendly tools for clinical support, cloud computing solutions, emerging technologies and means to address pressing legal, ethical and social issues. Conclusions: There is a need for further research bridging the gap from foundational TBI-based theories and methodologies to clinical implementation. We have organized the topic themes presented in this review into four conceptual foci – domain analyses, knowledge engineering, computational architectures and computation methods – alongside three stages of knowledge development in order to orient future TBI efforts to accelerate the goals of personalized medicine. PMID:26293863

  20. Cell buffer with built-in test

    NASA Technical Reports Server (NTRS)

    Ott, William E. (Inventor)

    2004-01-01

    A cell buffer with built-in testing mechanism is provided. The cell buffer provides the ability to measure voltage provided by a power cell. The testing mechanism provides the ability to test whether the cell buffer is functioning properly and thus providing an accurate voltage measurement. The testing mechanism includes a test signal-provider to provide a test signal to the cell buffer. During normal operation, the test signal is disabled and the cell buffer operates normally. During testing, the test signal is enabled and changes the output of the cell buffer in a defined way. The change in the cell buffer output can then be monitored to determine if the cell buffer is functioning correctly. Specifically, if the voltage output of the cell buffer changes in a way that corresponds to the provided test signal, then the functioning of the cell buffer is confirmed. If the voltage output of the cell buffer does not change correctly, then the cell buffer is known not to be operating correctly. Thus, the built in testing mechanism provides the ability to quickly and accurately determine if the cell buffer is operating correctly. Furthermore, the testing mechanism provides this functionality without requiring excessive device size and complexity.
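
    The pass/fail logic described in this abstract (enable a test signal, confirm the buffer output shifts by the expected amount) can be expressed as a tiny check. The sketch below simulates it in software with invented numbers; the actual invention is analog circuitry, so this is only an illustration of the verification logic.

```python
def built_in_test(measure, set_test_signal, expected_delta, tolerance=0.05):
    """Return True if enabling the test signal shifts the output by the expected amount."""
    baseline = measure()
    set_test_signal(True)
    shifted = measure()
    set_test_signal(False)
    return abs((shifted - baseline) - expected_delta) <= tolerance

# Toy stand-in for a cell buffer whose output rises by 0.50 V when the test signal is on
state = {"test_on": False}

def measure():
    return 3.30 + (0.50 if state["test_on"] else 0.0)

def set_test_signal(enabled):
    state["test_on"] = enabled

print(built_in_test(measure, set_test_signal, expected_delta=0.50))
```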

  1. Knowledge-driven enhancements for task composition in bioinformatics

    PubMed Central

    Sutherland, Karen; McLeod, Kenneth; Ferguson, Gus; Burger, Albert

    2009-01-01

    Background: A key application area of semantic technologies is the fast-developing field of bioinformatics. Sealife was a project within this field with the aim of creating semantics-based web browsing capabilities for the Life Sciences. This includes meaningfully linking significant terms from the text of a web page to executable web services. It also involves the semantic mark-up of biological terms, linking them to biomedical ontologies, then discovering and executing services based on terms that interest the user. Results: A system was produced which allows a user to identify terms of interest on a web page and subsequently connects these to a choice of web services which can make use of these inputs. Elements of Artificial Intelligence Planning build on this to present a choice of higher level goals, which can then be broken down to construct a workflow. An Argumentation System was implemented to evaluate the results produced by three different gene expression databases. An evaluation of these modules was carried out on users from a variety of backgrounds. Users with little knowledge of web services were able to achieve tasks that used several services in much less time than they would have taken to do this manually. The Argumentation System was also considered a useful resource and feedback was collected on the best way to present results. Conclusion: Overall the system represents a move forward in helping users to both construct workflows and analyse results by incorporating specific domain knowledge into the software. It also provides a mechanism by which web pages can be linked to web services. However, this work covers a specific domain and much co-ordinated effort is needed to make all web services available for use in such a way, i.e. the integration of underlying knowledge is a difficult but essential task. PMID:19796396

  2. Vignettes: diverse library staff offering diverse bioinformatics services*

    PubMed Central

    Osterbur, David L.; Alpi, Kristine; Canevari, Catharine; Corley, Pamela M.; Devare, Medha; Gaedeke, Nicola; Jacobs, Donna K.; Kirlew, Peter; Ohles, Janet A.; Vaughan, K.T.L.; Wang, Lili; Wu, Yongchun; Geer, Renata C.

    2006-01-01

    Objectives: The paper gives examples of the bioinformatics services provided in a variety of different libraries by librarians with a broad range of educational background and training. Methods: Two investigators sent an email inquiry to attendees of the “National Center for Biotechnology Information's (NCBI) Introduction to Molecular Biology Information Resources” or “NCBI Advanced Workshop for Bioinformatics Information Specialists (NAWBIS)” courses. The thirty-five-item questionnaire addressed areas such as educational background, library setting, types and numbers of users served, and bioinformatics training and support services provided. Answers were compiled into program vignettes. Discussion: The bioinformatics support services addressed in the paper are based in libraries with academic and clinical settings. Services have been established through different means: in collaboration with biology faculty as part of formal courses, through teaching workshops in the library, through one-on-one consultations, and by other methods. Librarians with backgrounds from art history to doctoral degrees in genetics have worked to establish these programs. Conclusion: Successful bioinformatics support programs can be established in libraries in a variety of different settings and by staff with a variety of different backgrounds and approaches. PMID:16888664

  3. Delivering bioinformatics training: bridging the gaps between computer science and biomedicine.

    PubMed Central

    Dubay, Christopher; Brundege, James M.; Hersh, William; Spackman, Kent

    2002-01-01

    Biomedical researchers have always sought innovative methodologies to elucidate the underlying biology in their experimental models. As the pace of research has increased with new technologies that 'scale-up' these experiments, researchers have developed acute needs for the information technologies which assist them in managing and processing their experiments and results into useful data analyses that support scientific discovery. The application of information technology to support this discovery process is often called bioinformatics. We have observed a 'gap' in the training of those individuals who traditionally aid in the delivery of information technology at the level of the end-user (e.g. a systems analyst working with a biomedical researcher) which can negatively impact the successful application of technological solutions to biomedical research problems. In this paper we describe the roots and branches of bioinformatics to illustrate a range of applications and technologies that it encompasses. We then propose a taxonomy of bioinformatics as a framework for the identification of skills employed in the field. The taxonomy can be used to assess a set of skills required by a student to traverse this hierarchy from one area to another. We then describe a curriculum that attempts to deliver the identified skills to a broad audience of participants, and describe our experiences with the curriculum to show how it can help bridge the 'gap'. PMID:12463819

  4. Mudi, a web tool for identifying mutations by bioinformatics analysis of whole-genome sequence.

    PubMed

    Iida, Naoko; Yamao, Fumiaki; Nakamura, Yasukazu; Iida, Tetsushi

    2014-06-01

    In forward genetics, identification of mutations is a time-consuming and laborious process. Modern whole-genome sequencing, coupled with bioinformatics analysis, has enabled fast and cost-effective mutation identification. However, for many experimental researchers, bioinformatics analysis is still a difficult aspect of whole-genome sequencing. To address this issue, we developed a browser-accessible and easy-to-use bioinformatics tool called Mutation discovery (Mudi; http://naoii.nig.ac.jp/mudi_top.html), which enables 'one-click' identification of causative mutations from whole-genome sequence data. In this study, we optimized Mudi for pooled-linkage analysis aimed at identifying mutants in yeast model systems. After raw sequencing data are uploaded, Mudi performs sequential analysis, including mapping, detection of variant alleles, filtering and removal of background polymorphisms, prioritization, and annotation. In an example study of suppressor mutants of ptr1-1 in the fission yeast Schizosaccharomyces pombe, pooled-linkage analysis with Mudi identified mip1(+), a component of Target of Rapamycin Complex 1 (TORC1), as a novel component involved in RNA interference (RNAi)-related cell-cycle control. The accessibility of Mudi will accelerate systematic mutation analysis in forward genetics.
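
    Two steps of the pipeline described above, removing background polymorphisms and prioritizing variants that approach fixation in the mutant pool, can be illustrated with a small filter. The sketch below uses invented variant records and is not Mudi's implementation; it only conveys the pooled-linkage filtering idea.

```python
def filter_pooled_variants(mutant_variants, background_variants, min_pool_frequency=0.9):
    """Keep variants absent from the background strain and near-fixed in the mutant pool."""
    background = {(v["chrom"], v["pos"], v["alt"]) for v in background_variants}
    kept = [v for v in mutant_variants
            if (v["chrom"], v["pos"], v["alt"]) not in background
            and v["pool_frequency"] >= min_pool_frequency]
    return sorted(kept, key=lambda v: v["pool_frequency"], reverse=True)

# Invented records
background = [{"chrom": "II", "pos": 1200, "alt": "T"}]
mutants = [{"chrom": "II", "pos": 1200, "alt": "T", "pool_frequency": 0.95},   # background SNP
           {"chrom": "III", "pos": 4821, "alt": "A", "pool_frequency": 0.97},  # candidate
           {"chrom": "I", "pos": 300, "alt": "G", "pool_frequency": 0.40}]     # unlinked
print(filter_pooled_variants(mutants, background))
```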

  5. Delivering bioinformatics training: bridging the gaps between computer science and biomedicine.

    PubMed

    Dubay, Christopher; Brundege, James M; Hersh, William; Spackman, Kent

    2002-01-01

    Biomedical researchers have always sought innovative methodologies to elucidate the underlying biology in their experimental models. As the pace of research has increased with new technologies that 'scale-up' these experiments, researchers have developed acute needs for the information technologies which assist them in managing and processing their experiments and results into useful data analyses that support scientific discovery. The application of information technology to support this discovery process is often called bioinformatics. We have observed a 'gap' in the training of those individuals who traditionally aid in the delivery of information technology at the level of the end-user (e.g. a systems analyst working with a biomedical researcher) which can negatively impact the successful application of technological solutions to biomedical research problems. In this paper we describe the roots and branches of bioinformatics to illustrate a range of applications and technologies that it encompasses. We then propose a taxonomy of bioinformatics as a framework for the identification of skills employed in the field. The taxonomy can be used to assess a set of skills required by a student to traverse this hierarchy from one area to another. We then describe a curriculum that attempts to deliver the identified skills to a broad audience of participants, and describe our experiences with the curriculum to show how it can help bridge the 'gap'.

  6. Tester-assisted built in test

    NASA Astrophysics Data System (ADS)

    Guntheroth, Kurt

    It is noted that board makers invest considerable time and money writing extensive self-tests and that this investment can be multiplied by selecting ATE (automatic test equipment) that complements and extends the power of the self-test. The tester can diagnose boards in situations where a fault prevents the self-test from running. If the tester monitors such resources as processor, memory, and I/O, confidence in test results is improved. The tester can be used during development of the self-test and to turn on prototypes before the self-test is complete. The author argues that emulative functional testers outperform other types of ATE on boards with BIST (built-in self-test) and lists features of emulative functional testers that are most important to users of BIST.

  7. The Large Built Water Clock Of Amphiaraeion.

    NASA Astrophysics Data System (ADS)

    Theodossiou, E.; Katsiotis, M.; Manimanis, V. N.; Mantarakis, P.

    A very well preserved ancient water clock was discovered during excavations at the Amphiaraeion, in Oropos, Greece. The Amphiaraeion, a famous religious and oracle center of the deified healer Amphiaraus, was active from the pre-classic period until the replacement of the ancient religion by Christianity in the 5th Century A.D. The foretelling was supposedly done through dreams sent by the god to the believers sleeping in a special gallery. In these dreams the god suggested to them the therapy for their illness or the solution to their problems. The patients then threw coins into a spring of the sanctuary. In such a place, the measurement of time was a necessity. Therefore, time was kept with both a conical sundial and a water clock in the form of a fountain. According to archeologists, the large built structure that measured the time for the sanctuary dates from the 4th Century B.C.

  8. Developing expertise in bioinformatics for biomedical research in Africa

    PubMed Central

    Karikari, Thomas K.; Quansah, Emmanuel; Mohamed, Wael M.Y.

    2015-01-01

    Research in bioinformatics has a central role in helping to advance biomedical research. However, its introduction to Africa has been met with some challenges (such as inadequate infrastructure, training opportunities, research funding, human resources, biorepositories and databases) that have contributed to the slow pace of development in this field across the continent. Fortunately, recent improvements in areas such as research funding, infrastructural support and capacity building are helping to develop bioinformatics into an important discipline in Africa. These contributions are leading to the establishment of world-class research facilities, biorepositories, training programmes, scientific networks and funding schemes to improve studies into disease and health in Africa. With increased contribution from all stakeholders, these developments could be further enhanced. Here, we discuss how the recent developments are contributing to the advancement of bioinformatics in Africa. PMID:26767162

  9. Bioinformatic Identification of Conserved Cis-Sequences in Coregulated Genes.

    PubMed

    Bülow, Lorenz; Hehl, Reinhard

    2016-01-01

    Bioinformatics tools can be employed to identify conserved cis-sequences in sets of coregulated plant genes because more and more gene expression and genomic sequence data become available. Knowledge on the specific cis-sequences, their enrichment and arrangement within promoters, facilitates the design of functional synthetic plant promoters that are responsive to specific stresses. The present chapter illustrates an example for the bioinformatic identification of conserved Arabidopsis thaliana cis-sequences enriched in drought stress-responsive genes. This workflow can be applied for the identification of cis-sequences in any sets of coregulated genes. The workflow includes detailed protocols to determine sets of coregulated genes, to extract the corresponding promoter sequences, and how to install and run a software package to identify overrepresented motifs. Further bioinformatic analyses that can be performed with the results are discussed. PMID:27557771
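
    As a concrete illustration of the overrepresentation step in such a workflow, the sketch below counts how often a single candidate motif occurs in the promoters of coregulated genes versus a background promoter set and scores the enrichment with Fisher's exact test. This is a simplified stand-in that assumes plain FASTA promoter files; the chapter's protocol relies on a dedicated motif-discovery package rather than this single-motif count.

        # Simplified single-motif enrichment sketch (illustrative only).
        from scipy.stats import fisher_exact

        def read_fasta(path):
            seqs, name, chunks = {}, None, []
            with open(path) as handle:
                for line in handle:
                    line = line.strip()
                    if line.startswith(">"):
                        if name is not None:
                            seqs[name] = "".join(chunks)
                        name, chunks = line[1:].split()[0], []
                    else:
                        chunks.append(line.upper())
            if name is not None:
                seqs[name] = "".join(chunks)
            return seqs

        def motif_enrichment(motif, coregulated_fasta, background_fasta):
            fg = read_fasta(coregulated_fasta)   # promoters of the coregulated gene set
            bg = read_fasta(background_fasta)    # background promoters
            fg_hits = sum(motif in seq for seq in fg.values())
            bg_hits = sum(motif in seq for seq in bg.values())
            table = [[fg_hits, len(fg) - fg_hits], [bg_hits, len(bg) - bg_hits]]
            odds_ratio, p_value = fisher_exact(table, alternative="greater")
            return fg_hits, bg_hits, p_value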

  10. Embracing the Future: Bioinformatics for High School Women

    NASA Astrophysics Data System (ADS)

    Zales, Charlotte Rappe; Cronin, Susan J.

    Sixteen high school women participated in a 5-week residential summer program designed to encourage female and minority students to choose careers in scientific fields. Students gained expertise in bioinformatics through problem-based learning in a complex learning environment of content instruction, speakers, labs, and trips. Innovative hands-on activities filled the program. Students learned biological principles in context and sophisticated bioinformatics tools for processing data. Students additionally mastered a variety of information-searching techniques. Students completed creative individual and group projects, demonstrating the successful integration of biology, information technology, and bioinformatics. Discussions with female scientists allowed students to see themselves in similar roles. Summer residential aspects fostered an atmosphere in which students matured in interacting with others and in their views of diversity.

  11. GOBLET: the Global Organisation for Bioinformatics Learning, Education and Training.

    PubMed

    Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G

    2015-04-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.

  12. Overview of commonly used bioinformatics methods and their applications.

    PubMed

    Kapetanovic, Izet M; Rosenfeld, Simon; Izmirlian, Grant

    2004-05-01

    Bioinformatics, in its broad sense, involves application of computer processes to solve biological problems. A wide range of computational tools are needed to effectively and efficiently process large amounts of data being generated as a result of recent technological innovations in biology and medicine. A number of computational tools have been developed or adapted to deal with the experimental riches of complex and multivariate data and transition from data collection to information or knowledge. These include a wide variety of clustering and classification algorithms, including self-organized maps (SOM), artificial neural networks (ANN), support vector machines (SVM), fuzzy logic, and even hyphenated techniques such as neuro-fuzzy networks. These bioinformatics tools are being evaluated and applied in various medical areas including early detection, risk assessment, classification, and prognosis of cancer. The goal of these efforts is to develop and identify bioinformatics methods with optimal sensitivity, specificity, and predictive capabilities.
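
    To make the classifier workflow concrete, the sketch below cross-validates a support vector machine and reports sensitivity and specificity, the figures of merit named above. The data are synthetic placeholders standing in for an omics feature matrix; nothing here is drawn from the review itself.

        # Minimal SVM evaluation sketch on synthetic data (illustrative only).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_predict
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 20))                    # 100 samples x 20 "expression" features
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic case/control labels

        clf = SVC(kernel="rbf", C=1.0)
        y_pred = cross_val_predict(clf, X, y, cv=5)       # 5-fold cross-validated predictions
        tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
        print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))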

  13. Photocatalytic oxide films in the built environment

    NASA Astrophysics Data System (ADS)

    Österlund, Lars; Topalian, Zareh

    2014-11-01

    The possibility to increase human comfort in buildings is a powerful driving force for the introduction of new technology. Among other things our sense of comfort depends on air quality, temperature, lighting level, and the possibility of having visual contact between indoors and outdoors. Indeed there is an intimate connection between energy, comfort, and health issues in the built environment, leading to a need for intelligent building materials and green architecture. Photocatalytic materials can be applied as coatings, filters, and be embedded in building materials to provide self-cleaning, antibacterial, air cleaning, deodorizing, and water cleaning functions utilizing either solar light or artificial illumination sources - either already present in buildings, or by purposefully designed luminaries. Huge improvements in indoor comfort can thus be made, and also alleviate negative health effects associated with buildings, such as the sick-house syndrome. At the same time huge cost savings can be made by reducing maintenance costs. Photocatalytic oxides can be chemically modified by changing their acid-base surface properties, which can be used to overcome deactivation problems commonly encountered for TiO2 in air cleaning applications. In addition, the wetting properties of oxides can be tailored by surface chemical modifications and thus be made e.g. oleophobic and water repellent. Here we show results of surface acid modified TiO2 coatings on various substrates by means of photo-fixation of surface sulfate species by a method invented in our group. In particular, we show that such surface treatments of photocatalytic concrete made by mixing TiO2 nanoparticles in reactive concrete powders result in concrete surfaces with beneficial self-cleaning properties. We propose that such approaches are feasible for a number of applications in the built environment, including glass, tiles, sheet metals, plastics, etc.

  14. Sequence database versioning for command line and Galaxy bioinformatics servers

    PubMed Central

    Dooley, Damion M.; Petkau, Aaron J.; Van Domselaar, Gary; Hsiao, William W.L.

    2016-01-01

    Motivation: There are various reasons for rerunning bioinformatics tools and pipelines on sequencing data, including reproducing a past result, validation of a new tool or workflow using a known dataset, or tracking the impact of database changes. For identical results to be achieved, regularly updated reference sequence databases must be versioned and archived. Database administrators have tried to fill the requirements by supplying users with one-off versions of databases, but these are time consuming to set up and are inconsistent across resources. Disk storage and data backup performance have also discouraged maintaining multiple versions of databases, since databases such as NCBI nr can consume 50 Gb or more disk space per version, with growth rates that parallel Moore's law. Results: Our end-to-end solution combines our own Kipper software package—a simple key-value large file versioning system—with BioMAJ (software for downloading sequence databases), and Galaxy (a web-based bioinformatics data processing platform). Available versions of databases can be recalled and used by command-line and Galaxy users. The Kipper data store format makes publishing curated FASTA databases convenient since in most cases it can store a range of versions into a file marginally larger than the size of the latest version. Availability and implementation: Kipper v1.0.0 and the Galaxy Versioned Data tool are written in Python and released as free and open source software available at https://github.com/Public-Health-Bioinformatics/kipper and https://github.com/Public-Health-Bioinformatics/versioned_data, respectively; detailed setup instructions can be found at https://github.com/Public-Health-Bioinformatics/versioned_data/blob/master/doc/setup.md. Contact: Damion.Dooley@Bccdc.Ca or William.Hsiao@Bccdc.Ca. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26656932
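
    The key-value idea behind this kind of versioning can be illustrated with a toy sketch: store each release as the set of sequence records added and removed relative to the previous one, so that any archived version can be rebuilt on demand. This is only a conceptual illustration and is not Kipper's actual file format, data model, or command-line interface.

        # Toy diff-based versioning of a sequence database (NOT Kipper's real format or API).

        def diff_versions(old, new):
            """old/new map sequence IDs to sequences; return what was added/changed and removed."""
            added = {key: seq for key, seq in new.items() if old.get(key) != seq}
            removed = [key for key in old if key not in new]
            return added, removed

        def apply_diff(base, added, removed):
            rebuilt = dict(base)
            for key in removed:
                rebuilt.pop(key, None)
            rebuilt.update(added)
            return rebuilt

        v1 = {"seqA": "ATGC", "seqB": "GGGT"}
        v2 = {"seqA": "ATGC", "seqC": "TTAA"}              # seqB dropped, seqC added
        added, removed = diff_versions(v1, v2)
        assert apply_diff(v1, added, removed) == v2        # any stored version can be recalled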

  15. Bioinformatic scaling of allosteric interactions in biomedical isozymes

    NASA Astrophysics Data System (ADS)

    Phillips, J. C.

    2016-09-01

    Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.

  16. Genomics and bioinformatics resources for translational science in Rosaceae.

    PubMed

    Jung, Sook; Main, Dorrie

    2014-01-01

    Recent technological advances in biology promise unprecedented opportunities for rapid and sustainable advancement of crop quality. Following this trend, the Rosaceae research community continues to generate large amounts of genomic, genetic and breeding data. These include annotated whole genome sequences, transcriptome and expression data, proteomic and metabolomic data, genotypic and phenotypic data, and genetic and physical maps. Analysis, storage, integration and dissemination of these data using bioinformatics tools and databases are essential to provide utility of the data for basic, translational and applied research. This review discusses the currently available genomics and bioinformatics resources for the Rosaceae family.

  17. Bioinformatics analyses of Shigella CRISPR structure and spacer classification.

    PubMed

    Wang, Pengfei; Zhang, Bing; Duan, Guangcai; Wang, Yingfang; Hong, Lijuan; Wang, Linlin; Guo, Xiangjiao; Xi, Yuanlin; Yang, Haiyan

    2016-03-01

    Clustered regularly interspaced short palindromic repeats (CRISPR) are inheritable genetic elements of a variety of archaea and bacteria and indicative of the bacterial ecological adaptation, conferring acquired immunity against invading foreign nucleic acids. Shigella is an important pathogen for anthroponosis. This study aimed to analyze the features of Shigella CRISPR structure and classify the spacers through a bioinformatics approach. Among 107 Shigella strains, 434 CRISPR structure loci were identified, with two to seven loci in different strains. CRISPR-Q1, CRISPR-Q4 and CRISPR-Q5 were widely distributed in Shigella strains. Comparison of the first and last repeats of CRISPR1, CRISPR2 and CRISPR3 revealed several base variants and different stem-loop structures. A total of 259 cas genes were found among these 107 Shigella strains. Cas gene deletions were discovered in 88 strains, and one strain did not contain any cas genes. Intact clusters of cas genes were found in 19 strains. From a comprehensive analysis of sequence signatures and BLAST and CRISPRTarget scores, the 708 spacers were classified into three subtypes: Type I, Type II and Type III. Type I spacers were those linked with one gene segment, Type II spacers were linked with two or more different gene segments, and Type III spacers remained undefined. This study examined the diversity of the CRISPR/cas system in Shigella strains and demonstrated the main features of CRISPR structure and spacer classification, providing critical information for elucidating the mechanisms of spacer formation and exploring the role the spacers play in the function of the CRISPR/cas system.
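
    The spacer typing rule described above can be expressed compactly: a spacer matching a single gene segment is Type I, one matching two or more different segments is Type II, and one with no confident match remains Type III. The sketch below applies that rule to a precomputed BLAST tabular hit file; the file layout (standard 12-column tabular output) and the bit-score threshold are assumptions of this example, not parameters reported in the study.

        # Sketch of the three-way spacer classification rule (illustrative thresholds).

        def classify_spacers(all_spacers, blast_tab, min_bitscore=40.0):
            hits = {spacer: set() for spacer in all_spacers}
            with open(blast_tab) as handle:
                for line in handle:
                    fields = line.rstrip("\n").split("\t")
                    query, subject, bitscore = fields[0], fields[1], float(fields[11])
                    if query in hits and bitscore >= min_bitscore:
                        hits[query].add(subject)

            def label(subjects):
                if len(subjects) == 1:
                    return "Type I"       # linked with one gene segment
                if len(subjects) >= 2:
                    return "Type II"      # linked with two or more different gene segments
                return "Type III"         # no confident match; undefined

            return {spacer: label(subjects) for spacer, subjects in hits.items()}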

  18. Endolithic phototrophs in built and natural stone.

    PubMed

    Gaylarde, Christine C; Gaylarde, Peter M; Neilan, Brett A

    2012-08-01

    Lichens, algae and cyanobacteria have been detected growing endolithically in natural rock and in stone buildings in various countries of Australasia, Europe and Latin America. Previously these organisms had mainly been described in natural carbonaceous rocks in aquatic environments, with some reports in siliceous rocks, principally from extremophilic regions. Using various culture and microscopy methods, we have detected endoliths in siliceous stone, both natural and cut, in humid temperate and subtropical climates. Such endolithic growth leads to degradation of the stone structure, not only by mechanical means, but also by metabolites liberated by the cells. Using in vitro culture, transmission, optical and fluorescence microscopy, and confocal laser scanning microscopy, both coccoid and filamentous cyanobacteria and algae, including Cyanidiales, have been identified growing endolithically in the facades of historic buildings built from limestone, sandstone, granite, basalt and soapstone, as well as in some natural rocks. Numerically, the most abundant are small, single-celled, colonial cyanobacteria. These small phototrophs are difficult to detect by standard microscope techniques and some of these species have not been previously reported within stone.

  19. Design and bioinformatics analysis of genome-wide CLIP experiments

    PubMed Central

    Wang, Tao; Xiao, Guanghua; Chu, Yongjun; Zhang, Michael Q.; Corey, David R.; Xie, Yang

    2015-01-01

    The past decades have witnessed a surge of discoveries revealing RNA regulation as a central player in cellular processes. RNAs are regulated by RNA-binding proteins (RBPs) at all post-transcriptional stages, including splicing, transportation, stabilization and translation. Defects in the functions of these RBPs underlie a broad spectrum of human pathologies. Systematic identification of RBP functional targets is among the key biomedical research questions and provides a new direction for drug discovery. The advent of cross-linking immunoprecipitation coupled with high-throughput sequencing (genome-wide CLIP) technology has recently enabled the investigation of genome-wide RBP–RNA binding at single base-pair resolution. This technology has evolved through the development of three distinct versions: HITS-CLIP, PAR-CLIP and iCLIP. Meanwhile, numerous bioinformatics pipelines for handling the genome-wide CLIP data have also been developed. In this review, we discuss the genome-wide CLIP technology and focus on bioinformatics analysis. Specifically, we compare the strengths and weaknesses, as well as the scopes, of various bioinformatics tools. To assist readers in choosing optimal procedures for their analysis, we also review experimental design and procedures that affect bioinformatics analyses. PMID:25958398

  20. Bioinformatics Education—Perspectives and Challenges out of Africa

    PubMed Central

    Adebiyi, Ezekiel F.; Alzohairy, Ahmed M.; Everett, Dean; Ghedira, Kais; Ghouila, Amel; Kumuthini, Judit; Mulder, Nicola J.; Panji, Sumir; Patterton, Hugh-G.

    2015-01-01

    The discipline of bioinformatics has developed rapidly since the complete sequencing of the first genomes in the 1990s. The development of many high-throughput techniques during the last decades has ensured that bioinformatics has grown into a discipline that overlaps with, and is required for, the modern practice of virtually every field in the life sciences. This has placed a scientific premium on the availability of skilled bioinformaticians, a qualification that is extremely scarce on the African continent. The reasons for this are numerous, although the absence of a skilled bioinformatician at academic institutions to initiate a training process and build sustained capacity seems to be a common African shortcoming. This dearth of bioinformatics expertise has had a knock-on effect on the establishment of many modern high-throughput projects at African institutes, including the comprehensive and systematic analysis of genomes from African populations, which are among the most genetically diverse anywhere on the planet. Recent funding initiatives from the National Institutes of Health and the Wellcome Trust are aimed at ameliorating this shortcoming. In this paper, we discuss the problems that have limited the establishment of the bioinformatics field in Africa, as well as propose specific actions that will help with the education and training of bioinformaticians on the continent. This is an absolute requirement in anticipation of a boom in high-throughput approaches to human health issues unique to data from African populations. PMID:24990350

  1. Technical phosphoproteomic and bioinformatic tools useful in cancer research

    PubMed Central

    2011-01-01

    Reversible protein phosphorylation is one of the most important forms of cellular regulation. Thus, phosphoproteomic analysis of protein phosphorylation in cells is a powerful tool to evaluate cell functional status. The importance of protein kinase-regulated signal transduction pathways in human cancer has led to the development of drugs that inhibit protein kinases at the apex or intermediary levels of these pathways. Phosphoproteomic analysis of these signalling pathways will provide important insights into the operation and connectivity of these pathways and facilitate identification of the best targets for cancer therapies. Enrichment of phosphorylated proteins or peptides from tissue or bodily fluid samples is required. The application of technologies such as phosphoenrichment and mass spectrometry (MS), coupled to bioinformatics tools, is crucial for the identification and quantification of protein phosphorylation sites and for advancing such relevant clinical research. A combination of different phosphopeptide enrichments, quantitative techniques and bioinformatic tools is necessary to achieve good phospho-regulation data and good structural analysis of protein studies. The current and most useful proteomics and bioinformatics techniques will be explained with research examples. Our aim in this article is to support cancer research by detailing useful proteomics and bioinformatic tools. PMID:21967744

  2. An International Bioinformatics Infrastructure to Underpin the Arabidopsis Community

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The future bioinformatics needs of the Arabidopsis community as well as those of other scientific communities that depend on Arabidopsis resources were discussed at a pair of recent meetings held by the Multinational Arabidopsis Steering Committee (MASC) and the North American Arabidopsis Steering C...

  3. Broad issues to consider for library involvement in bioinformatics*

    PubMed Central

    Geer, Renata C.

    2006-01-01

    Background: The information landscape in biological and medical research has grown far beyond literature to include a wide variety of databases generated by research fields such as molecular biology and genomics. The traditional role of libraries to collect, organize, and provide access to information can expand naturally to encompass these new data domains. Methods: This paper discusses the current and potential role of libraries in bioinformatics using empirical evidence and experience from eleven years of work in user services at the National Center for Biotechnology Information. Findings: Medical and science libraries over the last decade have begun to establish educational and support programs to address the challenges users face in the effective and efficient use of a plethora of molecular biology databases and retrieval and analysis tools. As more libraries begin to establish a role in this area, the issues they face include assessment of user needs and skills, identification of existing services, development of plans for new services, recruitment and training of specialized staff, and establishment of collaborations with bioinformatics centers at their institutions. Conclusions: Increasing library involvement in bioinformatics can help address information needs of a broad range of students, researchers, and clinicians and ultimately help realize the power of bioinformatics resources in making new biological discoveries. PMID:16888662

  4. Robust enzyme design: bioinformatic tools for improved protein stability.

    PubMed

    Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas

    2015-03-01

    The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatic-driven strategies that are used to predict structural changes that can be applied to wild type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots - key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation.

  5. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    PubMed

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-01

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693

  6. Intrageneric Primer Design: Bringing Bioinformatics Tools to the Class

    ERIC Educational Resources Information Center

    Lima, Andre O. S.; Garces, Sergio P. S.

    2006-01-01

    Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…

  7. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    ERIC Educational Resources Information Center

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  8. Bioinformatics for Undergraduates: Steps toward a Quantitative Bioscience Curriculum

    ERIC Educational Resources Information Center

    Chapman, Barbara S.; Christmann, James L.; Thatcher, Eileen F.

    2006-01-01

    We describe an innovative bioinformatics course developed under grants from the National Science Foundation and the California State University Program in Research and Education in Biotechnology for undergraduate biology students. The project has been part of a continuing effort to offer students classroom experiences focused on principles and…

  9. Technical phosphoproteomic and bioinformatic tools useful in cancer research.

    PubMed

    López, Elena; Wesselink, Jan-Jaap; López, Isabel; Mendieta, Jesús; Gómez-Puertas, Paulino; Muñoz, Sarbelio Rodríguez

    2011-01-01

    Reversible protein phosphorylation is one of the most important forms of cellular regulation. Thus, phosphoproteomic analysis of protein phosphorylation in cells is a powerful tool to evaluate cell functional status. The importance of protein kinase-regulated signal transduction pathways in human cancer has led to the development of drugs that inhibit protein kinases at the apex or intermediary levels of these pathways. Phosphoproteomic analysis of these signalling pathways will provide important insights into the operation and connectivity of these pathways and facilitate identification of the best targets for cancer therapies. Enrichment of phosphorylated proteins or peptides from tissue or bodily fluid samples is required. The application of technologies such as phosphoenrichment and mass spectrometry (MS), coupled to bioinformatics tools, is crucial for the identification and quantification of protein phosphorylation sites and for advancing such relevant clinical research. A combination of different phosphopeptide enrichments, quantitative techniques and bioinformatic tools is necessary to achieve good phospho-regulation data and good structural analysis of protein studies. The current and most useful proteomics and bioinformatics techniques will be explained with research examples. Our aim in this article is to support cancer research by detailing useful proteomics and bioinformatic tools. PMID:21967744

  10. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    EPA Science Inventory

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries

    G. Charles Ostermeier1, David J. Dix2 and Stephen A. Krawetz1.
    1Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  11. An evaluation of ontology exchange languages for bioinformatics.

    PubMed

    McEntire, R; Karp, P; Abernethy, N; Benton, D; Helt, G; DeJongh, M; Kent, R; Kosky, A; Lewis, S; Hodnett, D; Neumann, E; Olken, F; Pathak, D; Tarczy-Hornoch, P; Toldo, L; Topaloglou, T

    2000-01-01

    Ontologies are specifications of the concepts in a given field, and of the relationships among those concepts. The development of ontologies for molecular-biology information and the sharing of those ontologies within the bioinformatics community are central problems in bioinformatics. If the bioinformatics community is to share ontologies effectively, ontologies must be exchanged in a form that uses standardized syntax and semantics. This paper reports on an effort among the authors to evaluate alternative ontology-exchange languages, and to recommend one or more languages for use within the larger bioinformatics community. The study selected a set of candidate languages, and defined a set of capabilities that the ideal ontology-exchange language should satisfy. The study scored the languages according to the degree to which they satisfied each capability. In addition, the authors performed several ontology-exchange experiments with the two languages that received the highest scores: OML and Ontolingua. The result of those experiments, and the main conclusion of this study, was that the frame-based semantic model of Ontolingua is preferable to the conceptual graph model of OML, but that the XML-based syntax of OML is preferable to the Lisp-based syntax of Ontolingua. PMID:10977085

  12. OILing the way to machine understandable bioinformatics resources.

    PubMed

    Stevens, Robert; Goble, Carole; Horrocks, Ian; Bechhofer, Sean

    2002-06-01

    The complex questions and analyses posed by biologists, as well as the diverse data resources they develop, require the fusion of evidence from different, independently developed, and heterogeneous resources. The web, as an enabler for interoperability, has been an excellent mechanism for data publication and transportation. Successful exchange and integration of information, however, depends on a shared language for communication (a terminology) and a shared understanding of what the data means (an ontology). Without this kind of understanding, semantic heterogeneity remains a problem for both humans and machines. One means of dealing with heterogeneity in bioinformatics resources is through terminology founded upon an ontology. Bioinformatics resources tend to be rich in human readable and understandable annotation, with each resource using its own terminology. These resources are machine readable, but not machine understandable. Ontologies have a role in increasing this machine understanding, reducing the semantic heterogeneity between resources and thus promoting the flexible and reliable interoperation of bioinformatics resources. This paper describes a solution derived from the semantic web [a machine understandable world-wide web (WWW)], the ontology inference layer (OIL), as a solution for semantic bioinformatics resources. The nature of the heterogeneity problems are presented along with a description of how metadata from domain ontologies can be used to alleviate this problem. A companion paper in this issue gives an example of the development of a bio-ontology using OIL.

  13. Incorporation of Bioinformatics Exercises into the Undergraduate Biochemistry Curriculum

    ERIC Educational Resources Information Center

    Feig, Andrew L.; Jabri, Evelyn

    2002-01-01

    The field of bioinformatics is developing faster than most biochemistry textbooks can adapt. Supplementing the undergraduate biochemistry curriculum with data-mining exercises is an ideal way to expose the students to the common databases and tools that take advantage of this vast repository of biochemical information. An integrated collection of…

  14. Candidate genes for nicotine dependence via linkage, epistasis, and bioinformatics.

    PubMed

    Sullivan, Patrick F; Neale, Benjamin M; van den Oord, Edwin; Miles, Michael F; Neale, Michael C; Bulik, Cynthia M; Joyce, Peter R; Straub, Richard E; Kendler, Kenneth S

    2004-04-01

    Many smoking-related phenotypes are substantially heritable. One genome scan of nicotine dependence (ND) has been published and several others are in progress and should be completed in the next 5 years. The goal of this hypothesis-generating study was two-fold. First, we present further analyses of our genome scan data for ND published by Straub et al. [1999: Mol Psychiatry 4:129-144] (PMID: 10208445). Second, we used the method described by Cox et al. [1999: Nat Genet 21:213-215] (PMID: 9988276) to search for epistatic loci across the markers used in the genome scan. The overall results of the genome scan nearly reached the rigorous Lander and Kruglyak [1995: Nat Genet 11:241-247] criteria for "significant" linkage with the best findings on chromosomes 10 and 2. We then looked for correspondence between genes located in the 10 regions implicated in affected sibling pair (ASP) and epistatic linkage analyses with a list of genes suggested by microarray studies of experimental nicotine exposure and candidate genes from the literature. We found correspondence between linkage and microarray/candidate gene studies for genes involved with the mitogen-activated protein kinase (MAPK) signaling system, nuclear factor kappa B (NFKB) complex, neuropeptide Y (NPY) neurotransmission, a nicotinic receptor subunit (CHRNA2), the vesicular monoamine transporter (SLC18A2), genes in pathways implicated in human anxiety (HTR7, TDO2, and the endozepine-related protein precursor, DKFZP434A2417), and the mu 1-opioid receptor (OPRM1). Although the hypotheses resulting from these linkage and bioinformatic analyses are plausible and intriguing, their ultimate worth depends on replication in additional linkage samples and in future experimental studies. PMID:15048644

  15. Automatic Discovery and Inferencing of Complex Bioinformatics Web Interfaces

    SciTech Connect

    Ngu, A; Rocco, D; Critchlow, T; Buttler, D

    2003-12-22

    The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources--e.g. BLAST sequence homology search interfaces. However, the process for seeking the desired scientific information is still very tedious and frustrating. While there are several known servers on genomic data (e.g., GenBank, EMBL, NCBI) that are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change out-pace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious and low-level coding of source-specific wrappers is needed to assist scientists in accessing hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversation/interaction semantics which is inherent in the design of the Web data source. In this paper, we propose an automatic approach to classify Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source leads to increased accuracy in the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.

  16. Sequestering CO2 in the Built Environment

    NASA Astrophysics Data System (ADS)

    Constantz, B. R.

    2009-12-01

    Calera’s Carbonate Mineralization by Aqueous Precipitation (CMAP) technology with beneficial reuse has been called "game-changing" by Carl Pope, Director of the Sierra Club. Calera offers a solution to the scale of the carbon problem. By capturing carbon into the built environment through carbonate mineralization, Calera provides a sound and cost-effective alternative to Geologic Sequestration and Terrestrial Sequestration. The CMAP technology permanently converts carbon dioxide into a mineral form that can be stored above ground, or used as a building material. The process produces a suite of carbonate-containing minerals of various polymorphic forms. Calera product can be substituted into blends with ordinary Portland cements and used as aggregate to produce concrete with reduced carbon, carbon neutral, or carbon negative footprints. For each ton of product produced, approximately half a ton of carbon dioxide can be sequestered using the Calera process. Coal and natural gas are composed of predominantly isotopically light carbon, as the carbon in the fuel is plant-derived. Thus, power plant CO2 emissions have relatively low δ13C values. The carbon species throughout the CMAP process are identified through measuring the inorganic carbon content, δ13C values of the dissolved carbonate species, and the product carbonate minerals. Measuring δ13C allows for tracking the flue gas CO2 throughout the capture process. Initial analysis of the capture of propane flue gas (δ13C ˜ -25 ‰) with seawater (δ13C ˜ -10 ‰) and industrial brucite tailings from a retired magnesium oxide plant in Moss Landing, CA (δ13C ˜ -7 ‰ from residual calcite) produced carbonate mineral products with a δ13C value of ˜ -20 ‰. This isotopically light carbon, transformed from flue gas to stable carbonate minerals, can be transferred and tracked through the capture process, and finally to the built environment. CMAP provides an economical solution to global warming by producing
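
    The isotope bookkeeping behind this tracking can be summarized, in sketch form, as a two-end-member mass balance (the mixing fraction below is a back-of-the-envelope illustration, not a value reported in the study):

        \delta^{13}C_{product} \approx f\,\delta^{13}C_{flue\,gas} + (1 - f)\,\delta^{13}C_{other}
        \quad\Longrightarrow\quad
        f \approx \frac{\delta^{13}C_{product} - \delta^{13}C_{other}}{\delta^{13}C_{flue\,gas} - \delta^{13}C_{other}}

    Treating the seawater and tailings carbon together as a single end-member near -8.5 per mil, a product value near -20 per mil with flue gas near -25 per mil gives f of roughly 0.7; that is, on the order of two thirds to three quarters of the product carbon would derive from the captured flue gas under these illustrative assumptions.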

  17. Sources of airborne microorganisms in the built environment.

    PubMed

    Prussin, Aaron J; Marr, Linsey C

    2015-01-01

    Each day people are exposed to millions of bioaerosols, including whole microorganisms, which can have both beneficial and detrimental effects. The next chapter in understanding the airborne microbiome of the built environment is characterizing the various sources of airborne microorganisms and the relative contribution of each. We have identified the following eight major categories of sources of airborne bacteria, viruses, and fungi in the built environment: humans; pets; plants; plumbing systems; heating, ventilation, and air-conditioning systems; mold; dust resuspension; and the outdoor environment. Certain species are associated with certain sources, but the full potential of source characterization and source apportionment has not yet been realized. Ideally, future studies will quantify detailed emission rates of microorganisms from each source and will identify the relative contribution of each source to the indoor air microbiome. This information could then be used to probe fundamental relationships between specific sources and human health, to design interventions to improve building health and human health, or even to provide evidence for forensic investigations. PMID:26694197

  18. Sources of airborne microorganisms in the built environment.

    PubMed

    Prussin, Aaron J; Marr, Linsey C

    2015-12-22

    Each day people are exposed to millions of bioaerosols, including whole microorganisms, which can have both beneficial and detrimental effects. The next chapter in understanding the airborne microbiome of the built environment is characterizing the various sources of airborne microorganisms and the relative contribution of each. We have identified the following eight major categories of sources of airborne bacteria, viruses, and fungi in the built environment: humans; pets; plants; plumbing systems; heating, ventilation, and air-conditioning systems; mold; dust resuspension; and the outdoor environment. Certain species are associated with certain sources, but the full potential of source characterization and source apportionment has not yet been realized. Ideally, future studies will quantify detailed emission rates of microorganisms from each source and will identify the relative contribution of each source to the indoor air microbiome. This information could then be used to probe fundamental relationships between specific sources and human health, to design interventions to improve building health and human health, or even to provide evidence for forensic investigations.

  19. 46 CFR 67.97 - United States built.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 2 2011-10-01 2011-10-01 false United States built. 67.97 Section 67.97 Shipping COAST... DOCUMENTATION OF VESSELS Build Requirements for Vessel Documentation § 67.97 United States built. To be considered built in the United States a vessel must meet both of the following criteria: (a) All...

  20. 46 CFR 67.97 - United States built.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 2 2014-10-01 2014-10-01 false United States built. 67.97 Section 67.97 Shipping COAST... DOCUMENTATION OF VESSELS Build Requirements for Vessel Documentation § 67.97 United States built. To be considered built in the United States a vessel must meet both of the following criteria: (a) All...

  1. 46 CFR 67.97 - United States built.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 2 2013-10-01 2013-10-01 false United States built. 67.97 Section 67.97 Shipping COAST... DOCUMENTATION OF VESSELS Build Requirements for Vessel Documentation § 67.97 United States built. To be considered built in the United States a vessel must meet both of the following criteria: (a) All...

  2. 46 CFR 67.97 - United States built.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 2 2012-10-01 2012-10-01 false United States built. 67.97 Section 67.97 Shipping COAST... DOCUMENTATION OF VESSELS Build Requirements for Vessel Documentation § 67.97 United States built. To be considered built in the United States a vessel must meet both of the following criteria: (a) All...

  3. 46 CFR 67.97 - United States built.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 2 2010-10-01 2010-10-01 false United States built. 67.97 Section 67.97 Shipping COAST... DOCUMENTATION OF VESSELS Build Requirements for Vessel Documentation § 67.97 United States built. To be considered built in the United States a vessel must meet both of the following criteria: (a) All...

  4. 47 CFR 15.23 - Home-built devices.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Home-built devices. 15.23 Section 15.23 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES General § 15.23 Home-built... that the individual builder of home-built equipment may not possess the means to perform...

  5. 47 CFR 15.23 - Home-built devices.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Home-built devices. 15.23 Section 15.23 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL RADIO FREQUENCY DEVICES General § 15.23 Home-built... that the individual builder of home-built equipment may not possess the means to perform...

  6. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    ERIC Educational Resources Information Center

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  7. Introductory Bioinformatics Exercises Utilizing Hemoglobin and Chymotrypsin to Reinforce the Protein Sequence-Structure-Function Relationship

    ERIC Educational Resources Information Center

    Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany

    2007-01-01

    We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…

  8. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    ERIC Educational Resources Information Center

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  9. Design and Implementation of an Interdepartmental Bioinformatics Program across Life Science Curricula

    ERIC Educational Resources Information Center

    Miskowski, Jennifer A.; Howard, David R.; Abler, Michael L.; Grunwald, Sandra K.

    2007-01-01

    Over the past 10 years, there has been a technical revolution in the life sciences leading to the emergence of a new discipline called bioinformatics. In response, bioinformatics-related topics have been incorporated into various undergraduate courses along with the development of new courses solely focused on bioinformatics. This report describes…

  10. Vertical and Horizontal Integration of Bioinformatics Education: A Modular, Interdisciplinary Approach

    ERIC Educational Resources Information Center

    Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.

    2009-01-01

    Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…

  11. Applying Instructional Design Theories to Bioinformatics Education in Microarray Analysis and Primer Design Workshops

    ERIC Educational Resources Information Center

    Shachak, Aviv; Ophir, Ron; Rubin, Eitan

    2005-01-01

    The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…

  12. Molecular Rotors Built in Porous Materials.

    PubMed

    Comotti, Angiolina; Bracco, Silvia; Sozzani, Piero

    2016-09-20

    Molecules and materials can show dynamic structures in which the dominant mechanism is rotary motion. The single mobile elements are defined as "molecular rotors" and exhibit special properties (compared with their static counterparts), being able in perspective to greatly modulate the dielectric response and form the basis for molecular motors that are designed with the idea of making molecules perform a useful mechanical function. The construction of ordered rotary elements into a solid is a necessary feature for such design, because it enables the alignment of rotors and the fine-tuning of their steric and dipolar interactions. Crystal surfaces or bulk crystals are the most suitable to adapt rotors in 2D or 3D arrangements and engineer juxtaposition of the rotors in an ordered way. Nevertheless, it is only in recent times that materials showing porosity and remarkably low density have undergone tremendous development. The characteristics of large free volume combine well with the virtually unhindered motion of the molecular rotors built into their structure. Indeed, the molecular rotors are used as struts in porous covalent and supramolecular architectures, spanning both hybrid and fully organic materials. The modularity of the approach renders possible a variety of rotor geometrical arrangements in both robust frameworks stable up to 850 K and self-assembled molecular materials. A nanosecond (fast dynamics) motional regime can be achieved at temperatures lower than 240 K, enabling rotor arrays operating in the solid state even at low temperatures. Furthermore, in nanoporous materials, molecular rotors can interact with the diffusing chemical species, be they liquids, vapors, or gases. Through this chemical intervention, rotor speed can be modulated at will, enabling a new generation of rotor-containing materials sensitive to guests. In principle, an applied electric field can be the stimulus for chemical release from porous materials. The effort needed to

  13. Molecular Rotors Built in Porous Materials.

    PubMed

    Comotti, Angiolina; Bracco, Silvia; Sozzani, Piero

    2016-09-20

    Molecules and materials can show dynamic structures in which the dominant mechanism is rotary motion. The single mobile elements are defined as "molecular rotors" and exhibit special properties (compared with their static counterparts), being able in perspective to greatly modulate the dielectric response and form the basis for molecular motors that are designed with the idea of making molecules perform a useful mechanical function. The construction of ordered rotary elements into a solid is a necessary feature for such design, because it enables the alignment of rotors and the fine-tuning of their steric and dipolar interactions. Crystal surfaces or bulk crystals are the most suitable to adapt rotors in 2D or 3D arrangements and engineer juxtaposition of the rotors in an ordered way. Nevertheless, it is only in recent times that materials showing porosity and remarkably low density have undergone tremendous development. The characteristics of large free volume combine well with the virtually unhindered motion of the molecular rotors built into their structure. Indeed, the molecular rotors are used as struts in porous covalent and supramolecular architectures, spanning both hybrid and fully organic materials. The modularity of the approach renders possible a variety of rotor geometrical arrangements in both robust frameworks stable up to 850 K and self-assembled molecular materials. A nanosecond (fast dynamics) motional regime can be achieved at temperatures lower than 240 K, enabling rotor arrays operating in the solid state even at low temperatures. Furthermore, in nanoporous materials, molecular rotors can interact with the diffusing chemical species, be they liquids, vapors, or gases. Through this chemical intervention, rotor speed can be modulated at will, enabling a new generation of rotor-containing materials sensitive to guests. In principle, an applied electric field can be the stimulus for chemical release from porous materials. The effort needed to

  14. Potential Conservation of Circadian Clock Proteins in the phylum Nematoda as Revealed by Bioinformatic Searches

    PubMed Central

    Romanowski, Andrés; Garavaglia, Matías Javier; Goya, María Eugenia; Ghiringhelli, Pablo Daniel; Golombek, Diego Andrés

    2014-01-01

    Although several circadian rhythms have been described in C. elegans, its molecular clock remains elusive. In this work we employed a novel bioinformatic approach, applying probabilistic methodologies, to search for circadian clock proteins of several of the best studied circadian model organisms of different taxa (Mus musculus, Drosophila melanogaster, Neurospora crassa, Arabidopsis thaliana and Synechococcus elongatus) in the proteomes of C. elegans and other members of the phylum Nematoda. With this approach we found that the Nematoda contain proteins most related to the core and accessory proteins of the insect and mammalian clocks, which provide new insights into the nematode clock and the evolution of the circadian system. PMID:25396739
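
    One common way to run this kind of cross-taxon search is a profile-HMM scan of the target proteome; the sketch below drives HMMER's hmmsearch from Python and keeps hits below an E-value cutoff. The profile and proteome file names are placeholders, and this is only a generic illustration, not the probabilistic pipeline used by the authors.

        # Generic profile-HMM proteome scan (illustrative; not the authors' pipeline).
        import subprocess

        def hmm_candidates(profile_hmm, proteome_fasta, tbl_out="hits.tbl", max_evalue=1e-5):
            subprocess.run(
                ["hmmsearch", "--tblout", tbl_out, profile_hmm, proteome_fasta],
                check=True,
            )
            candidates = []
            with open(tbl_out) as handle:
                for line in handle:
                    if line.startswith("#"):
                        continue
                    fields = line.split()
                    target, evalue = fields[0], float(fields[4])   # full-sequence E-value column
                    if evalue <= max_evalue:
                        candidates.append((target, evalue))
            return candidates

        # e.g. hmm_candidates("PER_PAS_domain.hmm", "c_elegans_proteome.fasta")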

  15. Potential conservation of circadian clock proteins in the phylum Nematoda as revealed by bioinformatic searches.

    PubMed

    Romanowski, Andrés; Garavaglia, Matías Javier; Goya, María Eugenia; Ghiringhelli, Pablo Daniel; Golombek, Diego Andrés

    2014-01-01

    Although several circadian rhythms have been described in C. elegans, its molecular clock remains elusive. In this work we employed a novel bioinformatic approach, applying probabilistic methodologies, to search for circadian clock proteins of several of the best studied circadian model organisms of different taxa (Mus musculus, Drosophila melanogaster, Neurospora crassa, Arabidopsis thaliana and Synechococcus elongatus) in the proteomes of C. elegans and other members of the phylum Nematoda. With this approach we found that the Nematoda contain proteins most related to the core and accessory proteins of the insect and mammalian clocks, which provide new insights into the nematode clock and the evolution of the circadian system.

  16. Bioinformatics for precision medicine in oncology: principles and application to the SHIVA clinical trial

    PubMed Central

    Servant, Nicolas; Roméjon, Julien; Gestraud, Pierre; La Rosa, Philippe; Lucotte, Georges; Lair, Séverine; Bernard, Virginie; Zeitouni, Bruno; Coffin, Fanny; Jules-Clément, Gérôme; Yvon, Florent; Lermine, Alban; Poullet, Patrick; Liva, Stéphane; Pook, Stuart; Popova, Tatiana; Barette, Camille; Prud’homme, François; Dick, Jean-Gabriel; Kamal, Maud; Le Tourneau, Christophe; Barillot, Emmanuel; Hupé, Philippe

    2014-01-01

    Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data, and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates the data integration and tracks in real time the processing of individual samples. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After a rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setup and conduct of this trial are discussed as an illustration of PM application. PMID:24910641

  17. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
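
    The organisation described above, in which the system sensitivity is assembled by adding per-component contributions, can be illustrated with a small numerical sketch. The following Python fragment is only a toy illustration of that bookkeeping; it is not the authors' code, and the finite-difference scheme, component names, and performance measure are hypothetical stand-ins.

      # Toy illustration (not the authors' code): assemble the system design
      # sensitivity of a built-up structure by adding the contribution of each
      # component. Performance measure, component names, and step size are
      # hypothetical.
      def component_sensitivity(performance, design, component, h=1e-6):
          """Central-difference sensitivity with respect to one component's
          design variable, holding all other components fixed."""
          up = dict(design)
          up[component] += h
          dn = dict(design)
          dn[component] -= h
          return (performance(up) - performance(dn)) / (2.0 * h)

      def system_sensitivity(performance, design):
          """System-level sensitivities obtained component by component."""
          return {c: component_sensitivity(performance, design, c) for c in design}

      # Hypothetical compliance-like measure of a two-component structure.
      def compliance(d):
          return 1.0 / d["plate_thickness"] + 2.0 / d["beam_area"]

      print(system_sensitivity(compliance, {"plate_thickness": 0.01, "beam_area": 0.002}))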

  18. Biochip microsystem for bioinformatics recognition and analysis

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng (Inventor); Fang, Wai-Chi (Inventor)

    2011-01-01

    A system with applications in pattern recognition, or classification, of DNA assay samples. Because DNA reference and sample material in wells of an assay may be caused to fluoresce depending upon dye added to the material, the resulting light may be imaged onto an embodiment comprising an array of photodetectors and an adaptive neural network, with applications to DNA analysis. Other embodiments are described and claimed.

  19. The Roots of Bioinformatics in Theoretical Biology

    PubMed Central

    Hogeweg, Paulien

    2011-01-01

    From the late 1980s onward, the term “bioinformatics” mostly has been used to refer to computational methods for comparative analysis of genome data. However, the term was originally more widely defined as the study of informatic processes in biotic systems. In this essay, I will trace this early history (from a personal point of view) and I will argue that the original meaning of the term is re-emerging. PMID:21483479

  20. [Bioinformatics in Cancer Clinical Sequencing -- An Emerging Field of Cancer Personalized Medicine].

    PubMed

    Kato, Mamoru

    2016-04-01

    Thus far, bioinformatics has mostly been applied in basic science research. It was initially used to analyze protein sequences in unicellular organisms, aiding discoveries in basic biology. Following the completion of human genome sequencing, it has also facilitated numerous discoveries in basic medicine. Recently, several clinical applications of bioinformatics have been reported. Most relevantly, bioinformatics has been applied to clinical sequencing - an emerging field of personalized medicine, or precision medicine. In this review, I will introduce basic techniques of bioinformatics used in clinical sequencing, avoiding excessive technical details. I will also discuss future directions for data analysis using bioinformatics in the field of personalized medicine.

  1. Inter-laboratory study of human in vitro toxicogenomics-based tests as alternative methods for evaluating chemical carcinogenicity: a bioinformatics perspective.

    PubMed

    Herwig, R; Gmuender, H; Corvi, R; Bloch, K M; Brandenburg, A; Castell, J; Ceelen, L; Chesne, C; Doktorova, T Y; Jennen, D; Jennings, P; Limonciel, A; Lock, E A; McMorrow, T; Phrakonkham, P; Radford, R; Slattery, C; Stierum, R; Vilardell, M; Wittenberger, T; Yildirimman, R; Ryan, M; Rogiers, V; Kleinjans, J

    2016-09-01

    The assessment of the carcinogenic potential of chemicals with alternative, human-based in vitro systems has become a major goal of toxicogenomics. The central read-out of these assays is the transcriptome, and while many studies exist that explored the gene expression responses of such systems, reports on robustness and reproducibility, when testing them independently in different laboratories, are still uncommon. Furthermore, there is limited knowledge about variability induced by the data analysis protocols. We have conducted an inter-laboratory study for testing chemical carcinogenicity evaluating two human in vitro assays: hepatoma-derived cells and hTERT-immortalized renal proximal tubule epithelial cells, representing liver and kidney as major target organs. Cellular systems were initially challenged with thirty compounds, genome-wide gene expression was measured with microarrays, and hazard classifiers were built from this training set. Subsequently, each system was independently established in three different laboratories, and gene expression measurements were conducted using anonymized compounds. Data analysis was performed independently by two separate groups applying different protocols for the assessment of inter-laboratory reproducibility and for the prediction of carcinogenic hazard. As a result, both workflows came to very similar conclusions with respect to (1) identification of experimental outliers, (2) overall assessment of robustness and inter-laboratory reproducibility and (3) re-classification of the unknown compounds to the respective toxicity classes. In summary, the developed bioinformatics workflows deliver accurate measures for inter-laboratory comparison studies, and the study can be used as guidance for validation of future carcinogenicity assays in order to implement testing of human in vitro alternatives to animal testing.
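
    The study's train-then-reclassify design (build hazard classifiers on a reference compound set, then predict the toxicity class of anonymized compounds) can be sketched in a few lines of Python. This is a generic scikit-learn illustration on simulated data under assumed array shapes and labels, not the bioinformatics workflow actually used in the inter-laboratory study.

      # Generic sketch (not the study's pipeline): train a carcinogenicity hazard
      # classifier on gene-expression profiles of reference compounds, then
      # classify blinded compounds. Data are simulated; shapes and labels are
      # hypothetical (30 training compounds x 1000 genes, 10 blinded compounds).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X_train = rng.normal(size=(30, 1000))      # expression matrix, training compounds
      y_train = np.repeat([0, 1], 15)            # 0 = non-carcinogen, 1 = carcinogen (toy labels)
      X_blind = rng.normal(size=(10, 1000))      # anonymized compounds to re-classify

      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      print("Cross-validated accuracy:", cross_val_score(clf, X_train, y_train, cv=5).mean())
      clf.fit(X_train, y_train)
      print("Predicted hazard classes:", clf.predict(X_blind))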

  2. Rise and Demise of Bioinformatics? Promise and Progress

    PubMed Central

    Ouzounis, Christos A.

    2012-01-01

    The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself. PMID:22570600

  3. A survey on evolutionary algorithm based hybrid intelligence in bioinformatics.

    PubMed

    Li, Shan; Kang, Liying; Zhao, Xing-Ming

    2014-01-01

    With the rapid advances in genomics, proteomics, metabolomics, and other omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. Analyzing and interpreting these data with conventional intelligent techniques, for example support vector machines, has become a major challenge for bioinformaticians. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become increasingly popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used across fields because of the efficiency and robustness of EAs. In this review, we introduce applications of hybrid intelligent methods, especially those based on evolutionary algorithms, in bioinformatics, focusing on three common problems: feature selection, parameter estimation, and reconstruction of biological networks.

  4. Bioinformatics and Microarray Data Analysis on the Cloud.

    PubMed

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that requires large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime and anywhere access to resources and applications, and thus may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Nevertheless, cloud computing presents several issues regarding the security and privacy of data that are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data. PMID:25863787

  5. A review of estimation of distribution algorithms in bioinformatics

    PubMed Central

    Armañanzas, Rubén; Inza, Iñaki; Santana, Roberto; Saeys, Yvan; Flores, Jose Luis; Lozano, Jose Antonio; Peer, Yves Van de; Blanco, Rosa; Robles, Víctor; Bielza, Concha; Larrañaga, Pedro

    2008-01-01

    Evolutionary search algorithms have become an essential asset in the algorithmic toolbox for solving high-dimensional optimization problems across a broad range of bioinformatics applications. Genetic algorithms, the most well-known and representative evolutionary search technique, have been the subject of the major part of such applications. Estimation of distribution algorithms (EDAs) offer a novel evolutionary paradigm that constitutes a natural and attractive alternative to genetic algorithms. They make use of a probabilistic model, learnt from the promising solutions, to guide the search process. In this paper, we set out a basic taxonomy of EDA techniques, underlining the nature and complexity of the probabilistic model of each EDA variant. We review a set of innovative works that make use of EDA techniques to solve challenging bioinformatics problems, emphasizing the EDA paradigm's potential for further research in this domain. PMID:18822112
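
    The defining EDA loop mentioned above (select the promising solutions, learn a probabilistic model from them, sample the next population from that model) is easiest to see in the univariate case. Below is a minimal UMDA-style sketch in Python on a toy binary objective; the fitness function is a placeholder that could, for instance, score a feature-selection mask in a real bioinformatics application.

      # Minimal univariate EDA (UMDA-style) on a toy binary objective: learn an
      # independent Bernoulli model from the best solutions, then sample the next
      # population from it. The fitness function is a placeholder.
      import numpy as np

      def fitness(bits):
          return bits.sum()                          # toy objective: maximize the number of 1s

      def umda(n_bits=20, pop_size=50, n_select=15, n_gens=30, seed=0):
          rng = np.random.default_rng(seed)
          p = np.full(n_bits, 0.5)                   # probability of a 1 at each position
          for _ in range(n_gens):
              pop = rng.random((pop_size, n_bits)) < p          # sample candidates
              scores = np.array([fitness(ind) for ind in pop])
              elite = pop[np.argsort(scores)[-n_select:]]       # promising solutions
              p = elite.mean(axis=0).clip(0.05, 0.95)           # re-estimate the model
          return p

      print(umda().round(2))                         # probabilities drift towards 1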

  6. Bioinformatics and Microarray Data Analysis on the Cloud.

    PubMed

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that requires large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime and anywhere access to resources and applications, and thus may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Nevertheless, cloud computing presents several issues regarding the security and privacy of data that are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data.

  7. Meeting Review: 2002 O'Reilly Bioinformatics Technology Conference

    PubMed Central

    2002-01-01

    At the end of January I travelled to the States to speak at and attend the first O’Reilly Bioinformatics Technology Conference [14]. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences. Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O’Reilly himself, Tim O’Reilly. There were presentations, tutorials, debates, quizzes and even a ‘jam session’ for musical bioinformaticists. PMID:18628852

  8. Bioinformatics tools for small genomes, such as hepatitis B virus.

    PubMed

    Bell, Trevor G; Kramvis, Anna

    2015-02-01

    DNA sequence analysis is undertaken in many biological research laboratories. The workflow consists of several steps involving the bioinformatic processing of biological data. We have developed a suite of web-based online bioinformatic tools to assist with processing, analysis and curation of DNA sequence data. Most of these tools are genome-agnostic, with two tools specifically designed for hepatitis B virus sequence data. Tools in the suite are able to process sequence data from Sanger sequencing, ultra-deep amplicon resequencing (pyrosequencing) and chromatograph (trace files), as appropriate. The tools are available online at no cost and are aimed at researchers without specialist technical computer knowledge. The tools can be accessed at http://hvdr.bioinf.wits.ac.za/SmallGenomeTools, and the source code is available online at https://github.com/DrTrevorBell/SmallGenomeTools. PMID:25690798

  9. Personalized medicine: challenges and opportunities for translational bioinformatics

    PubMed Central

    Overby, Casey Lynnette; Tarczy-Hornoch, Peter

    2013-01-01

    Personalized medicine can be defined broadly as a model of healthcare that is predictive, personalized, preventive and participatory. Two US President’s Council of Advisors on Science and Technology reports illustrate challenges in personalized medicine (in a 2008 report) and in use of health information technology (in a 2010 report). Translational bioinformatics is a field that can help address these challenges and is defined by the American Medical Informatics Association as “the development of storage, analytic and interpretive methods to optimize the transformation of increasing voluminous biomedical data into proactive, predictive, preventative and participatory health.” This article discusses barriers to implementing genomics applications and current progress toward overcoming barriers, describes lessons learned from early experiences of institutions engaged in personalized medicine and provides example areas for translational bioinformatics research inquiry. PMID:24039624

  10. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    PubMed Central

    Li, Shan; Zhao, Xing-Ming

    2014-01-01

    With the rapid advances in genomics, proteomics, metabolomics, and other omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. Analyzing and interpreting these data with conventional intelligent techniques, for example support vector machines, has become a major challenge for bioinformaticians. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become increasingly popular due to their robustness and efficiency. In particular, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used across fields because of the efficiency and robustness of EAs. In this review, we introduce applications of hybrid intelligent methods, especially those based on evolutionary algorithms, in bioinformatics, focusing on three common problems: feature selection, parameter estimation, and reconstruction of biological networks. PMID:24729969

  11. Integration of bioinformatics into an undergraduate biology curriculum and the impact on development of mathematical skills.

    PubMed

    Wightman, Bruce; Hark, Amy T

    2012-01-01

    The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this study, we deliberately integrated bioinformatics instruction at multiple course levels into an existing biology curriculum. Students in an introductory biology course, intermediate lab courses, and advanced project-oriented courses all participated in new course components designed to sequentially introduce bioinformatics skills and knowledge, as well as computational approaches that are common to many bioinformatics applications. In each course, bioinformatics learning was embedded in an existing disciplinary instructional sequence, as opposed to having a single course where all bioinformatics learning occurs. We designed direct and indirect assessment tools to follow student progress through the course sequence. Our data show significant gains in both student confidence and ability in bioinformatics during individual courses and as course level increases. Despite evidence of substantial student learning in both bioinformatics and mathematics, students were skeptical about the link between learning bioinformatics and learning mathematics. While our approach resulted in substantial learning gains, student "buy-in" and engagement might be better in longer project-based activities that demand application of skills to research problems. Nevertheless, in situations where a concentrated focus on project-oriented bioinformatics is not possible or desirable, our approach of integrating multiple smaller components into an existing curriculum provides an alternative.

  12. Squid – a simple bioinformatics grid

    PubMed Central

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-01-01

    Background BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Results Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Conclusion Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-and-play" installation containing a pre-configured example. PMID:16078998
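
    The near-linear speed-up reported above comes from splitting a large query set across workers and collecting the per-worker results. The sketch below illustrates that master/worker pattern on a single machine with local processes; it is not Squid's implementation, and it assumes a locally installed BLAST+ "blastn" binary, a pre-formatted database called "mydb", and a hypothetical "queries.fa" input file.

      # Single-machine master/worker sketch of the idea behind Squid's speed-up
      # (not Squid's code): split a multi-FASTA query file into chunks and run
      # each chunk through BLAST in its own worker process. Assumes a local
      # BLAST+ "blastn" binary, a formatted database "mydb", and a hypothetical
      # "queries.fa" input.
      import subprocess
      from concurrent.futures import ProcessPoolExecutor

      def split_fasta(path, n_chunks):
          """Distribute the records of a multi-FASTA file round-robin over chunk files."""
          buckets = [[] for _ in range(n_chunks)]
          current = -1
          with open(path) as fh:
              for line in fh:
                  if line.startswith(">"):
                      current = (current + 1) % n_chunks
                  buckets[current].append(line)
          paths = []
          for i, chunk in enumerate(buckets):
              if not chunk:
                  continue
              chunk_path = f"chunk_{i}.fa"
              with open(chunk_path, "w") as out:
                  out.writelines(chunk)
              paths.append(chunk_path)
          return paths

      def run_blast(chunk_path):
          out_path = chunk_path + ".blast.tsv"
          subprocess.run(["blastn", "-query", chunk_path, "-db", "mydb",
                          "-outfmt", "6", "-out", out_path], check=True)
          return out_path

      if __name__ == "__main__":
          chunks = split_fasta("queries.fa", n_chunks=4)
          with ProcessPoolExecutor(max_workers=4) as pool:
              print("Per-chunk result files:", list(pool.map(run_blast, chunks)))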

  13. Observing Quantum Monodromy: An Energy-Momentum Map Built from Experimentally Determined Level Energies Obtained from the ν7 Far-Infrared Band System of NCNCS

    NASA Astrophysics Data System (ADS)

    Tokaryk, Dennis W.; Ross, Stephen Cary; Winnewisser, Brenda P.; Winnewisser, Manfred; De Lucia, Frank C.; Billinghurst, Brant E.

    2016-06-01

    The concept of Quantum Monodromy (QM) provides a fresh insight into the structure of rovibrational levels in those flexible molecules for which a bending mode can carry the molecule through the linear configuration. To confirm the existence of QM in a molecule required the fruits of several strands of development: the formulation of the abstract mathematical concept of monodromy, including the exploration of its relevance to systems described by classical mechanics and its manifestation in quantum molecular applications; the development of the required spectroscopic technology and computer-aided assignment; and the development of a theoretical model to apply in fitting to the observed data. We present a timeline for each of these strands, converging in our initial confirmation of QM in NCNCS from pure rotational data alone. In that work a Generalised SemiRigid Bender (GSRB) Hamiltonian was fitted to the experimental rotational structure. Rovibrational energies calculated from the fitted GSRB parameters allowed us to construct an "Energy-Momentum" map and confirm the presence of QM in NCNCS. In further experimental work at the Canadian Light Source Synchrotron we have identified a network of transitions directly connecting the relevant energy levels and thereby have produced a refined Energy-Momentum map for NCNCS from experimental measurements alone. This map extends from the ground vibrational level to well above the potential energy barrier, beautifully illustrating the characteristic signature of QM in a system uncomplicated by interaction with other vibrational modes. B. P. Winnewisser et al., Phys. Rev. Lett. 95, 243002 (2005)

  14. Bioinformatics of proteases in the MEROPS database.

    PubMed

    Barrett, Alan J

    2004-05-01

    Proteolytic enzymes represent approximately 2% of the total number of proteins present in all types of organisms. Many of these enzymes are of medical importance, and those that are of potential interest as drug targets can be divided into the endogenous enzymes encoded in the human genome, and the exogenous proteases encoded in the genomes of disease-causing organisms. There are also naturally occurring inhibitors of proteases, some of which have pharmaceutical relevance. The MEROPS database provides a rich source of information on proteases and their inhibitors. Storage and retrieval of this information is facilitated by the use of a hierarchical classification system (which was pioneered by the compilers of the database) in which homologous proteases and their inhibitors are divided into clans and families. PMID:15216937

  15. [A review on the bioinformatics pipelines for metagenomic research].

    PubMed

    Ye, Dan-Dan; Fan, Meng-Meng; Guan, Qiong; Chen, Hong-Ju; Ma, Zhan-Shan

    2012-12-01

    Metagenome, a term first dubbed by Handelsman in 1998 as "the genomes of the total microbiota found in nature", refers to sequence data directly sampled from the environment (which may be any habitat in which microbes live, such as the guts of humans and animals, milk, soil, lakes, glaciers, and oceans). Metagenomic technologies originated from environmental microbiology studies and their wide application has been greatly facilitated by next-generation high throughput sequencing technologies. Like genomics studies, the bottle neck of metagenomic research is how to effectively and efficiently analyze the gigantic amount of metagenomic sequence data using the bioinformatics pipelines to obtain meaningful biological insights. In this article, we briefly review the state-of-the-art bioinformatics software tools in metagenomic research. Due to the differences between the metagenomic data obtained from whole genome sequencing (i.e., shotgun metagenomics) and amplicon sequencing (i.e., 16S-rRNA and gene-targeted metagenomics) methods, there are significant differences between the corresponding bioinformatics tools for these data; accordingly, we review the computational pipelines separately for these two types of data. PMID:23266976

  16. GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training

    PubMed Central

    Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.

    2015-01-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076

  17. Best practices in bioinformatics training for life scientists.

    PubMed

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-09-01

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists. PMID:23803301

  18. Bioinformatics: Current practice and future challenges for life science education.

    PubMed

    Hack, Catherine; Kendall, Gary

    2005-03-01

    It is widely predicted that the application of high-throughput technologies to the quantification and identification of biological molecules will cause a paradigm shift in the life sciences. However, if the biosciences are to evolve from a predominantly descriptive discipline to an information science, practitioners will require enhanced skills in mathematics, computing, and statistical analysis. Universities have responded to the widely perceived skills gap primarily by developing masters programs in bioinformatics, resulting in a rapid expansion in the provision of postgraduate bioinformatics education. There is, however, a clear need to improve the quantitative and analytical skills of life science undergraduates. This article reviews the response of academia in the United Kingdom and proposes the learning outcomes that graduates should achieve to cope with the new biology. While the analysis discussed here uses the development of bioinformatics education in the United Kingdom as an illustrative example, it is hoped that the issues raised will resonate with all those involved in curriculum development in the life sciences.

  19. Best practices in bioinformatics training for life scientists.

    PubMed

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-09-01

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  20. Best practices in bioinformatics training for life scientists

    PubMed Central

    Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D.; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L.; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C.; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K.

    2013-01-01

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists. PMID:23803301

  1. Data capture in bioinformatics: requirements and experiences with Pedro

    PubMed Central

    Jameson, Daniel; Garwood, Kevin; Garwood, Chris; Booth, Tim; Alper, Pinar; Oliver, Stephen G; Paton, Norman W

    2008-01-01

    Background The systematic capture of appropriately annotated experimental data is a prerequisite for most bioinformatics analyses. Data capture is required not only for submission of data to public repositories, but also to underpin integrated analysis, archiving, and sharing – both within laboratories and in collaborative projects. The widespread requirement to capture data means that data capture and annotation are taking place at many sites, but the small scale of the literature on tools, techniques and experiences suggests that there is work to be done to identify good practice and reduce duplication of effort. Results This paper reports on experience gained in the deployment of the Pedro data capture tool in a range of representative bioinformatics applications. The paper makes explicit the requirements that have recurred when capturing data in different contexts, indicates how these requirements are addressed in Pedro, and describes case studies that illustrate where the requirements have arisen in practice. Conclusion Data capture is a fundamental activity for bioinformatics; all biological data resources build on some form of data capture activity, and many require a blend of import, analysis and annotation. Recurring requirements in data capture suggest that model-driven architectures can be used to construct data capture infrastructures that can be rapidly configured to meet the needs of individual use cases. We have described how one such model-driven infrastructure, namely Pedro, has been deployed in representative case studies, and discussed the extent to which the model-driven approach has been effective in practice. PMID:18402673

  2. Bioinformatics of the Paracoccidioides brasiliensis EST Project.

    PubMed

    Brígido, Marcelo M; Walter, Maria Emília M T; Oliveira, Adilton G; Inoue, Marcus K; Anjos, Daniel S; Sandes, Edans F O; Gondim, João J; Carvalho, Maria José de A; Almeida, Nalvo F; Felipe, Maria Sueli Soares

    2005-06-30

    Paracoccidioides brasiliensis is the etiological agent of paracoccidioidomycosis, an endemic mycosis of Latin America. This fungus presents a dimorphic character; it grows as a mycelium at room temperature, but it is isolated as yeast from infected individuals. It is believed that the transition from mycelium to yeast is important for the infective process. The Functional and Differential Genome of Paracoccidioides brasiliensis Project (PbGenome Project) was developed to study the infection process by analyzing expressed sequence tags (ESTs) isolated from both mycelial and yeast forms. The PbGenome Project was executed by a consortium that included 70 researchers (professors and students) from two sequencing laboratories of the midwest region of Brazil; this project produced 25,741 ESTs, 19,718 of which were of sufficient quality to be analyzed. We describe the computational procedures used to receive, process, and analyze these ESTs and to assist with their functional annotation; we also detail the services that were used for sequence data exploration. Various programs were compared for filtering and grouping the sequences, and they were adapted to a user-friendly interface. This system made the analysis of the differential transcriptome of P. brasiliensis possible.

  3. BioZone: Exploiting Source-Capability Information for Integrated Access to Multiple Bioinformatics Data Sources

    SciTech Connect

    Liu, L; Buttler, D; Paques, H; Pu, C; Critchlow

    2002-01-28

    Modern bioinformatics data sources are widely used by molecular biologists for homology searching and new drug discovery. User-friendly and yet responsive access is one of the most desirable properties for integrated access to the rapidly growing, heterogeneous, and distributed collection of data sources. The increasing volume and diversity of digital information related to bioinformatics (such as genomes, protein sequences, and protein structures) have led to a growing problem that conventional data management systems do not have, namely finding which information sources out of many candidate choices are the most relevant and most accessible to answer a given user query. We refer to this problem as the query routing problem. In this paper we introduce the notion and issues of query routing, and present a practical solution for designing a scalable query routing system based on multi-level progressive pruning strategies. The key idea is to create and maintain source-capability profiles independently, and to provide algorithms that can dynamically discover relevant information sources for a given query through the smart use of source profiles. Compared to the keyword-based indexing techniques adopted in most search engines and software, our approach offers fine-grained interest matching and is thus more powerful and effective for handling queries with complex conditions.
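
    The idea of routing a query by progressively pruning candidate sources against independently maintained source-capability profiles can be illustrated with a toy example. The profiles, attributes, and source names below are hypothetical, and the code is not BioZone's implementation.

      # Toy sketch of query routing with source-capability profiles (not
      # BioZone's implementation): candidate sources are pruned first by topic
      # and then by the attributes they can answer. All profiles are hypothetical.
      SOURCE_PROFILES = {
          "ProteinSeqSite": {"topics": {"protein"}, "attributes": {"sequence", "function"}},
          "StructureSite":  {"topics": {"protein"}, "attributes": {"structure"}},
          "GenomeSite":     {"topics": {"genome"},  "attributes": {"sequence", "annotation"}},
      }

      def route(query, profiles=SOURCE_PROFILES):
          # Level 1: keep only sources that cover the query topic.
          candidates = [s for s, p in profiles.items() if query["topic"] in p["topics"]]
          # Level 2: keep only sources able to answer every requested attribute.
          return [s for s in candidates
                  if set(query["attributes"]) <= profiles[s]["attributes"]]

      print(route({"topic": "protein", "attributes": ["sequence", "function"]}))
      # -> ['ProteinSeqSite']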

  4. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.

    PubMed

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young

    2016-03-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are getting more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA. PMID:27103887
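
    The access-control pattern described above follows the standard OAuth 2.0 flow: a client first obtains an access token from an authorization server and then presents it as a bearer token with each request to the protected genomic resource. The Python sketch below shows a generic client-credentials exchange using the "requests" library; the endpoint URLs, client credentials, and resource path are hypothetical and do not describe IABio's actual deployment.

      # Generic OAuth 2.0 client-credentials sketch with the "requests" library.
      # Endpoint URLs, client ID/secret, and resource path are hypothetical.
      import requests

      TOKEN_URL = "https://iabio.example.org/oauth/token"        # hypothetical
      API_URL = "https://iabio.example.org/api/genotypes/42"     # hypothetical

      # Step 1: exchange client credentials for an access token.
      token_resp = requests.post(
          TOKEN_URL,
          data={"grant_type": "client_credentials"},
          auth=("my-client-id", "my-client-secret"),             # hypothetical credentials
          timeout=10,
      )
      access_token = token_resp.json()["access_token"]

      # Step 2: present the bearer token with each request to the protected resource.
      record = requests.get(API_URL,
                            headers={"Authorization": f"Bearer {access_token}"},
                            timeout=10)
      print(record.status_code)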

  5. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.

    PubMed

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young

    2016-03-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are getting more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA.

  6. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database

    PubMed Central

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin

    2016-01-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are getting more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA. PMID:27103887

  7. 5. Conveyors 'C' through 'K' (built 1940) south of Station, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. Conveyors 'C' through 'K' (built 1940) south of Station, looking northeast from hurricane barrier. - Manchester Street Generating Station, Conveyors, 460 Eddy Street, Providence, Providence County, RI

  8. APOLLO 13: The Spirit that Built America

    NASA Technical Reports Server (NTRS)

    1974-01-01

    APOLLO 13: Nixon commends the crew of APOLLO 13. From the film documentary 'APOLLO 13: Houston, We've Got a Problem', part of a documentary series on the APOLLO missions made in the early '70s and narrated by Burgess Meredith. APOLLO 13: Third manned lunar landing attempt with James A. Lovell, Jr., John L. Swigert, Jr., and Fred W. Haise, Jr. Pressure lost in SM oxygen system; mission aborted; LM used for life support. Mission duration: 142 hrs 54 min 41 sec.

  9. BioSig: An imaging bioinformatic system for studying phenomics

    SciTech Connect

    Parvin, Bahram; Yang, Qing; Fontenay, Gerald; Barcellos-Hoff, Mary Helen

    2002-07-01

    Organisms express their genomes in a cell-specific manner, resulting in a variety of cellular phenotypes or phenomes. Mapping cell phenomes under a variety of experimental conditions is necessary in order to understand the responses of organisms to stimuli. Representing such data requires an integrated view of experimental and informatic protocols. BioSig provides the foundation for cataloging cellular responses as a function of specific conditioning, treatment, staining, etc., for either in vivo or in vitro studies. A data model has been developed to capture a wide variety of experimental conditions and map them to image collections and their computed high-level representations. Samples are imaged with light microscopy and each image is represented with an attributed graph. The graph representation contains information about cellular morphology, protein localization, and organization of the cells in the corresponding tissue or cultured colony. The informatics architecture is distributed and enables database content to be shared among multiple researchers.
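
    The attributed-graph representation described above can be mimicked with a general-purpose graph library: nodes carry per-cell morphology and protein-localization attributes, edges record spatial adjacency within the tissue or colony, and graph-level attributes hold the experimental conditions. The following sketch uses networkx with hypothetical attribute names and values; it illustrates the idea, not BioSig's actual data model.

      # Illustration of an attributed-graph representation of an imaged colony
      # (not BioSig's data model). Attribute names and values are hypothetical.
      import networkx as nx

      colony = nx.Graph(treatment="sham-irradiated", stain="anti-E-cadherin")   # experiment metadata
      colony.add_node("cell_1", area_um2=312.0, eccentricity=0.41, marker="membrane")
      colony.add_node("cell_2", area_um2=287.5, eccentricity=0.63, marker="cytoplasm")
      colony.add_node("cell_3", area_um2=340.2, eccentricity=0.38, marker="membrane")
      colony.add_edge("cell_1", "cell_2")      # spatial adjacency between cells
      colony.add_edge("cell_2", "cell_3")

      # Example downstream query: fraction of cells with membrane-localized marker.
      membrane = [n for n, d in colony.nodes(data=True) if d["marker"] == "membrane"]
      print(len(membrane) / colony.number_of_nodes())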

  10. Built-in-test by signature inspection (bitsi)

    DOEpatents

    Bergeson, Gary C.; Morneau, Richard A.

    1991-01-01

    A system and method for fault detection for electronic circuits. A stimulus generator sends a signal to the input of the circuit under test. Signature inspection logic compares the resultant signal from test nodes on the circuit to an expected signal. If the signals do not match, the signature inspection logic sends a signal to the control logic for indication of fault detection in the circuit. A data input multiplexer between the test nodes of the circuit under test and the signature inspection logic can provide for identification of the specific node at fault by the signature inspection logic. Control logic responsive to the signature inspection logic conveys information about fault detection for use in determining the condition of the circuit. When used in conjunction with a system test controller, the built-in test by signature inspection system and method can be used to poll a plurality of circuits automatically and continuously for faults and to record the results of such polling in the system test controller.
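
    The core of the method is comparing a compressed "signature" of each test node's response against the signature expected for a known-good circuit under the same stimulus. A software analogue of that comparison might look like the sketch below; CRC-32 stands in for the signature-compression step, node names and responses are hypothetical, and the actual invention is implemented as hardware logic.

      # Software analogue of built-in test by signature inspection (the patent
      # describes hardware logic): compress each node's response into a CRC-32
      # signature and compare it with the signature of a known-good unit.
      # Node names and responses are hypothetical.
      import zlib

      def signature(response):
          return zlib.crc32(response)

      # Signatures captured once from a known-good circuit under the same stimulus.
      EXPECTED = {"node_A": signature(b"known-good response A"),
                  "node_B": signature(b"known-good response B")}

      def inspect(responses):
          """Return the test nodes whose signatures do not match the expected ones."""
          return [node for node, resp in responses.items()
                  if signature(resp) != EXPECTED[node]]

      # Node B returns a corrupted response, so it is flagged as faulty.
      print(inspect({"node_A": b"known-good response A", "node_B": b"glitched"}))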

  11. YPED: An Integrated Bioinformatics Suite and Database for Mass Spectrometry-based Proteomics Research

    PubMed Central

    Colangelo, Christopher M.; Shifman, Mark; Cheung, Kei-Hoi; Stone, Kathryn L.; Carriero, Nicholas J.; Gulcicek, Erol E.; Lam, TuKiet T.; Wu, Terence; Bjornson, Robert D.; Bruce, Can; Nairn, Angus C.; Rinehart, Jesse; Miller, Perry L.; Williams, Kenneth R.

    2015-01-01

    We report a significantly enhanced bioinformatics suite and database for proteomics research called Yale Protein Expression Database (YPED) that is used by investigators at more than 300 institutions worldwide. YPED meets the data management, archival, and analysis needs of high-throughput mass spectrometry-based proteomics research, ranging from a single laboratory, to a group of laboratories within and beyond an institution, to the entire proteomics community. The current version is a significant improvement over the first version in that it contains new modules for liquid chromatography–tandem mass spectrometry (LC–MS/MS) database search results, label-based and label-free quantitative proteomic analysis, and several scoring outputs for phosphopeptide site localization. In addition, we have added both peptide and protein comparative analysis tools to enable pairwise analysis of distinct peptides/proteins in each sample and of overlapping peptides/proteins between all samples in multiple datasets. We have also implemented a targeted proteomics module for automated multiple reaction monitoring (MRM)/selective reaction monitoring (SRM) assay development. We have linked YPED’s database search results and both label-based and label-free fold-change analysis to the Skyline Panorama repository for online spectra visualization. In addition, we have built enhanced functionality to curate peptide identifications into an MS/MS peptide spectral library for all of our protein database search identification results. PMID:25712262

  12. Transformative Learning: Innovating Sustainability Education in Built Environment

    ERIC Educational Resources Information Center

    Iyer-Raniga, Usha; Andamon, Mary Myla

    2016-01-01

    Purpose: This paper aims to evaluate how transformative learning is key to innovating sustainability education in the built environment in the region's universities, in addition to reporting on the research project undertaken to integrate sustainability thinking and practice into engineering/built environment curricula in Asia-Pacific…

  13. 2. Occident Terminal Elevator, annex on left built 1930, workhouse, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. Occident Terminal Elevator, annex on left built 1930, workhouse, train shed and annex on right built 1925. Occident Elevator Co., Division of Russell-Miller Milling Co., N.D. - Occident Terminal Elevator & Storage Annex, South side of second slip, north from outer end of Rice's Point, east of Garfield Avenue, Duluth, St. Louis County, MN

  14. Managing, Analysing, and Integrating Big Data in Medical Bioinformatics: Open Problems and Future Perspectives

    PubMed Central

    Merelli, Ivan; Pérez-Sánchez, Horacio; Gesing, Sandra; D'Agostino, Daniele

    2014-01-01

    The explosion of data both in biomedical research and in healthcare systems demands urgent solutions. In particular, research in the omics sciences is moving from a hypothesis-driven to a data-driven approach. Healthcare, in addition, increasingly demands tighter integration with biomedical data in order to promote personalized medicine and to provide better treatments. Efficient analysis and interpretation of Big Data opens new avenues to explore molecular biology, new questions to ask about physiological and pathological states, and new ways to answer these open issues. Such analyses lead to better understanding of diseases and development of better and personalized diagnostics and therapeutics. However, such progress depends directly on the availability of new solutions to deal with this huge amount of information. New paradigms are needed to store and access data, for their annotation and integration, and finally for inferring knowledge and making it available to researchers. Bioinformatics can be viewed as the "glue" for all these processes. A clear awareness of present high performance computing (HPC) solutions in bioinformatics, Big Data analysis paradigms for computational biology, and the issues that are still open in the biomedical and healthcare fields represents the starting point to win this challenge. PMID:25254202

  15. Opportunities and challenges provided by cloud repositories for bioinformatics-enabled drug discovery.

    PubMed

    Dalpé, Gratien; Joly, Yann

    2014-09-01

    Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services.

  16. GITIRBio: A Semantic and Distributed Service Oriented-Architecture for Bioinformatics Pipeline.

    PubMed

    Castillo, Luis F; López-Gartner, Germán; Isaza, Gustavo A; Sánchez, Mariana; Arango, Jeferson; Agudelo-Valencia, Daniel; Castaño, Sergio

    2015-05-20

    The need to process large quantities of data generated from genomic sequencing has resulted in a difficult task for life scientists who are not familiar with the use of command-line operations or developments in high performance computing and parallelization. This knowledge gap, along with unfamiliarity with necessary processes, can hinder the execution of data processing tasks. Furthermore, many of the commonly used bioinformatics tools for the scientific community are presented as isolated, unrelated entities that do not provide an integrated, guided, and assisted interaction with the scheduling facilities of computational resources or with distribution, processing, and mapping with runtime analysis. This paper presents a first approximation of a Web Services platform-based architecture (GITIRBio) that acts as a distributed front-end system for autonomous and assisted processing of parallel bioinformatics pipelines; the system has been validated using multiple sequences. Additionally, this platform allows integration with semantic repositories of genes for search annotations. GITIRBio is available at: http://c-head.ucaldas.edu.co:8080/gitirbio.

  17. Opportunities and challenges provided by cloud repositories for bioinformatics-enabled drug discovery.

    PubMed

    Dalpé, Gratien; Joly, Yann

    2014-09-01

    Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. PMID:25195583

  18. An Abstract Description Approach to the Discovery and Classification of Bioinformatics Web Sources

    SciTech Connect

    Rocco, D; Critchlow, T J

    2003-05-01

    The World Wide Web provides an incredible resource to genomics researchers in the form of dynamic data sources--e.g. BLAST sequence homology search interfaces. The growth rate of these sources outpaces the speed at which they can be manually classified, meaning that the available data is not being utilized to its full potential. Existing research has not addressed the problems of automatically locating, classifying, and integrating classes of bioinformatics data sources. This paper presents an overview of a system for finding classes of bioinformatics data sources and integrating them behind a unified interface. We examine an approach to classifying these sources automatically that relies on an abstract description format: the service class description. This format allows a domain expert to describe the important features of an entire class of services without tying that description to any particular Web source. We present the features of this description format in the context of BLAST sources to show how the service class description relates to Web sources that are being described. We then show how a service class description can be used to classify an arbitrary Web source to determine if that source is an instance of the described service. To validate the effectiveness of this approach, we have constructed a prototype that can correctly classify approximately two-thirds of the BLAST sources we tested. We then examine these results, consider the factors that affect correct automatic classification, and discuss future work.
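
    To make the idea of a service class description concrete, the toy sketch below encodes a "BLAST-like source" as a small Python structure and applies a naive matcher to a candidate page. The field names and matching heuristic are invented for illustration; the paper's actual description format and classification algorithm are considerably richer.

      # Illustrative sketch only: a toy "service class description" and matcher.
      import re

      BLAST_SERVICE_CLASS = {
          "name": "blast",
          # Form inputs we expect a BLAST interface to expose.
          "required_inputs": ["sequence", "database", "program"],
          # Keywords expected somewhere in the page text.
          "keywords": [r"\bBLAST\b", r"\bE-?value\b", r"\balignment\b"],
      }

      def matches_service_class(page_text, form_fields, desc=BLAST_SERVICE_CLASS):
          """Return True if a candidate web source looks like an instance of the class."""
          has_inputs = all(any(req in f.lower() for f in form_fields)
                           for req in desc["required_inputs"])
          has_keywords = all(re.search(kw, page_text, re.IGNORECASE)
                             for kw in desc["keywords"])
          return has_inputs and has_keywords

      # A page advertising BLAST, E-values and alignments with a matching form
      # would be classified as an instance of the BLAST service class.
      print(matches_service_class("Run BLAST, report E-value and alignment",
                                  ["query_sequence", "database", "program"]))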

  19. Documenting the emergence of bio-ontologies: or, why researching bioinformatics requires HPSSB.

    PubMed

    Leonelli, Sabina

    2010-01-01

    This paper reflects on the analytic challenges emerging from the study of bioinformatic tools recently created to store and disseminate biological data, such as databases, repositories, and bio-ontologies. I focus my discussion on the Gene Ontology, a term that defines three entities at once: a classification system facilitating the distribution and use of genomic data as evidence towards new insights; an expert community specialised in the curation of those data; and a scientific institution promoting the use of this tool among experimental biologists. These three dimensions of the Gene Ontology can be clearly distinguished analytically, but are tightly intertwined in practice. I suggest that this is true of all bioinformatic tools: they need to be understood simultaneously as epistemic, social, and institutional entities, since they shape the knowledge extracted from data and at the same time regulate the organisation, development, and communication of research. This viewpoint has one important implication for the methodologies used to study these tools; that is, the need to integrate historical, philosophical, and sociological approaches. I illustrate this claim through examples of misunderstandings that may result from a narrowly disciplinary study of the Gene Ontology, as I experienced them in my own research.

  20. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    The coming deluge of genome data presents significant challenges in storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  1. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    The coming deluge of genome data presents significant challenges in storing and processing large-scale genome data, providing easy access to biomedical analysis tools, and enabling efficient data sharing and retrieval. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach.
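
    For readers who want a feel for driving Galaxy programmatically, the sketch below uses bioblend, the community Python client for Galaxy's REST API, to upload a file and invoke a workflow. It illustrates the kind of automation the platform builds on but does not reproduce the paper's Globus Transfer, Globus Provision or HTCondor integration; the server URL, API key, file name and workflow name are placeholders.

      # Minimal bioblend sketch: upload a dataset and invoke a named workflow.
      from bioblend.galaxy import GalaxyInstance

      gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

      # Create a history and upload a FASTQ file into it.
      history = gi.histories.create_history(name="ngs-run")
      upload = gi.tools.upload_file("reads.fastq", history["id"])
      dataset_id = upload["outputs"][0]["id"]

      # Find a previously imported workflow by name and invoke it on the upload.
      workflow = next(w for w in gi.workflows.get_workflows()
                      if w["name"] == "variant-calling")
      invocation = gi.workflows.invoke_workflow(
          workflow["id"],
          inputs={"0": {"src": "hda", "id": dataset_id}},
          history_id=history["id"],
      )
      print("invocation id:", invocation["id"])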

  2. Microbase2.0: a generic framework for computationally intensive bioinformatics workflows in the cloud.

    PubMed

    Flanagan, Keith; Nakjang, Sirintra; Hallinan, Jennifer; Harwood, Colin; Hirt, Robert P; Pocock, Matthew R; Wipat, Anil

    2012-01-01

    As bioinformatics datasets grow ever larger, and analyses become increasingly complex, there is a need for data handling infrastructures to keep pace with developing technology. One solution is to apply Grid and Cloud technologies to address the computational requirements of analysing high-throughput datasets. We present an approach for writing new applications, or wrapping existing ones, and a reference implementation of a framework, Microbase2.0, for executing those applications using Grid and Cloud technologies. We used Microbase2.0 to develop an automated Cloud-based bioinformatics workflow executing simultaneously on two different Amazon EC2 data centres and the Newcastle University Condor Grid. Several CPU years' worth of computational work was performed by this system in less than two months. The workflow produced a detailed dataset characterising the cellular localisation of 3,021,490 proteins from 867 taxa, including bacteria, archaea and unicellular eukaryotes. Microbase2.0 is freely available from http://www.microbase.org.uk/. PMID:23001322
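
    The general pattern of wrapping an existing command-line tool as an independent, parallelisable task can be sketched as follows; here Python's concurrent.futures stands in for the Grid/Cloud executors, and the "localiser" tool, its flags and the directory layout are hypothetical rather than part of Microbase2.0's actual API.

      # Sketch: wrap a (hypothetical) command-line predictor as an independent task
      # and fan it out over many input files.
      import subprocess
      from concurrent.futures import ProcessPoolExecutor
      from pathlib import Path

      def predict_localisation(fasta_path: str) -> str:
          """Run a hypothetical localisation predictor on one proteome file."""
          out_path = fasta_path + ".loc.tsv"
          subprocess.run(["localiser", "--in", fasta_path, "--out", out_path],
                         check=True)
          return out_path

      if __name__ == "__main__":
          proteomes = sorted(str(p) for p in Path("proteomes").glob("*.fasta"))
          # Each proteome is an independent unit of work, so it can be farmed out
          # to however many workers (local cores, Condor slots, EC2 nodes) exist.
          with ProcessPoolExecutor(max_workers=8) as pool:
              for result in pool.map(predict_localisation, proteomes):
                  print("finished:", result)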

  3. Subcarrier multiplexing system with built-in dispersion reduction

    SciTech Connect

    Sargis, P.D.; Haigh, R.E.; McCammon, K.G.

    1995-09-08

    Dispersion is effectively reduced in a 1550-nm subcarrier-multiplexed fiber link by using optical pre-filtering at the receiver. Recent experimental results demonstrate transmission of two 2.5 Gbit/s data channels over 220 km of ordinary single-mode fiber.
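
    As a rough, hedged check of why dispersion mitigation is needed in such a link, the snippet below estimates the chromatic-dispersion pulse spread over 220 km of standard single-mode fiber at 1550 nm, assuming a typical dispersion parameter of about 17 ps/(nm km) and a 0.1 nm effective spectral width, and compares it with the 400 ps bit period of a 2.5 Gbit/s channel. The assumed parameter values are not taken from the paper.

      # Back-of-envelope dispersion estimate with assumed typical parameters.
      D = 17.0            # ps / (nm * km), assumed typical value for SMF at 1550 nm
      L = 220.0           # km, fiber length from the abstract
      delta_lambda = 0.1  # nm, assumed effective spectral width

      spread_ps = D * L * delta_lambda      # total pulse spread in ps
      bit_period_ps = 1e12 / 2.5e9          # 400 ps per bit at 2.5 Gbit/s

      print(f"dispersion spread ~ {spread_ps:.0f} ps vs bit period {bit_period_ps:.0f} ps")
      # A spread comparable to the bit period is why some form of dispersion
      # mitigation (here, optical pre-filtering at the receiver) is needed.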

  4. Applications of Support Vector Machines In Chemo And Bioinformatics

    NASA Astrophysics Data System (ADS)

    Jayaraman, V. K.; Sundararajan, V.

    2010-10-01

    Conventional linear and nonlinear tools for classification, regression and data-driven modeling are rapidly being replaced by newer techniques and tools based on artificial intelligence and machine learning. While linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward-network-based classification algorithms formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work, recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.
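
    As a self-contained illustration of the kind of SVM classification the abstract refers to, the snippet below trains an RBF-kernel support vector classifier with scikit-learn on synthetic descriptor data; the data and parameters are invented, not drawn from the work described.

      # Minimal SVM classification example on synthetic "descriptor" data.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      rng = np.random.default_rng(0)
      # 200 samples x 10 features, with a nonlinear decision rule for the labels.
      X = rng.normal(size=(200, 10))
      y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      # RBF kernel handles the nonlinearity the abstract refers to.
      clf = SVC(kernel="rbf", C=1.0, gamma="scale")
      clf.fit(X_train, y_train)
      print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))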

  5. The Austronesian Basic Vocabulary Database: From Bioinformatics to Lexomics

    PubMed Central

    Greenhill, Simon J.; Blust, Robert; Gray, Russell D.

    2008-01-01

    Phylogenetic methods have revolutionised evolutionary biology and have recently been applied to studies of linguistic and cultural evolution. However, the basic comparative data on the languages of the world required for these analyses is often widely dispersed in hard to obtain sources. Here we outline how our Austronesian Basic Vocabulary Database (ABVD) helps remedy this situation by collating wordlists from over 500 languages into one web-accessible database. We describe the technology underlying the ABVD and discuss the benefits that an evolutionary bioinformatic approach can provide. These include facilitating computational comparative linguistic research, answering questions about human prehistory, enabling syntheses with genetic data, and safe-guarding fragile linguistic information. PMID:19204825

  6. SOAP-based services provided by the European Bioinformatics Institute

    PubMed Central

    Pillai, S.; Silventoinen, V.; Kallio, K.; Senger, M.; Sobhany, S.; Tate, J.; Velankar, S.; Golovin, A.; Henrick, K.; Rice, P.; Stoehr, P.; Lopez, R.

    2005-01-01

    SOAP (Simple Object Access Protocol)-based Web Services technology has gained much attention as an open standard enabling interoperability among applications across heterogeneous architectures and different networks. The European Bioinformatics Institute (EBI) is using this technology to provide robust data retrieval and data analysis mechanisms to the scientific community and to enhance utilization of the biological resources it already provides [N. Harte, V. Silventoinen, E. Quevillon, S. Robinson, K. Kallio, X. Fustero, P. Patel, P. Jokinen and R. Lopez (2004) Nucleic Acids Res., 32, 3–9]. These services are available free to all users. PMID:15980463
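
    A minimal sketch of consuming a SOAP service from Python with the zeep client library is shown below. The WSDL URL and the fetchData operation signature are placeholders modelled loosely on the EBI dbfetch SOAP interface of that period; consult current EBI documentation for live endpoints.

      # Generic SOAP client sketch; WSDL URL and operation name are assumptions.
      from zeep import Client

      WSDL = "https://www.ebi.ac.uk/example/WSDbfetch?wsdl"  # placeholder WSDL

      client = Client(WSDL)
      # Assumed operation: fetch one UniProtKB entry in FASTA format.
      fasta = client.service.fetchData("uniprotkb:P12345", "fasta", "raw")
      print(fasta)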

  7. Bioinformatics pipeline for functional identification and characterization of proteins

    NASA Astrophysics Data System (ADS)

    Skarzyńska, Agnieszka; Pawełkowicz, Magdalena; Krzywkowski, Tomasz; Świerkula, Katarzyna; Pląder, Wojciech; Przybecki, Zbigniew

    2015-09-01

    New sequencing methods, collectively referred to as Next Generation Sequencing, make it possible to obtain vast amounts of data in a short time. These data require structural and functional annotation. Functional identification and characterization of predicted proteins can be done by in silico approaches, thanks to the numerous computational tools available nowadays. However, the results of protein function prediction need to be confirmed, either by running different programs and comparing their results or by experimental validation. Here we present a bioinformatics pipeline for structural and functional annotation of proteins.
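
    One common building block of such an annotation pipeline, a similarity search of a predicted protein followed by inspection of the top hits, can be sketched with Biopython as below; this is a generic illustration under assumed parameters, not the specific pipeline presented in the paper.

      # Remote BLASTP of a toy protein against NCBI nr, then print the best hits.
      from Bio.Blast import NCBIWWW, NCBIXML

      protein = "MSTNPKPQRKTKRNTNRRPQDVKFPGG"   # toy query sequence

      # Submit the query to NCBI (network call; can take minutes).
      handle = NCBIWWW.qblast("blastp", "nr", protein, hitlist_size=5)
      record = NCBIXML.read(handle)

      for alignment in record.alignments:
          best_hsp = alignment.hsps[0]
          print(f"{alignment.title[:60]}  E={best_hsp.expect:.2e}")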

  8. The Patentability of Biomolecules – Does Online Bioinformatics Compromise Novelty?

    PubMed Central

    2002-01-01

    Researchers are becoming increasingly concerned that the confidentiality of their novel biomolecule sequences is being jeopardised, particularly when these sequences are either submitted to sequence databases or uploaded as query terms onto internet-based bioinformatic software suites. Researchers' fears stem from the fact that the act of uploading their sequences constitutes a novelty-destroying prior disclosure or publication, and that this may subsequently preclude valid patent protection for the sequences. This article addresses the key issues involved in the analysis of biomolecules, highlights potential risks taken by many researchers with regard to patent protection, and suggests possible ways in which these risks may be mitigated. PMID:18628841

  9. Biophysics and bioinformatics of transcription regulation in bacteria and bacteriophages

    NASA Astrophysics Data System (ADS)

    Djordjevic, Marko

    2005-11-01

    Due to rapid accumulation of biological data, bioinformatics has become a very important branch of biological research. In this thesis, we develop novel bioinformatic approaches and aid the design of biological experiments by using ideas and methods from statistical physics. Identification of transcription factor binding sites within the regulatory segments of genomic DNA is an important step towards understanding the regulatory circuits that control expression of genes. We propose a novel, biophysics-based algorithm for the supervised detection of transcription factor (TF) binding sites. The method classifies potential binding sites by explicitly estimating the sequence-specific binding energy and the chemical potential of a given TF. In contrast with the widely used information-theory-based weight matrix method, our approach correctly incorporates saturation in the transcription factor/DNA binding probability. This results in a significant reduction in the number of expected false positives, and in the explicit appearance, and determination, of a binding threshold. The new method was used to identify likely genomic binding sites for Escherichia coli TFs, and to examine the relationship between TF binding specificity and degree of pleiotropy (number of regulatory targets). We next address how parameters of protein-DNA interactions can be obtained from data on protein binding to random oligos under controlled conditions (SELEX experiment data). We show that 'robust' generation of an appropriate data set is achieved by a suitable modification of the standard SELEX procedure, and propose a novel bioinformatic algorithm for analysis of such data. Finally, we use quantitative data analysis, bioinformatic methods and kinetic modeling to analyze gene expression strategies of bacterial viruses. We study bacteriophage Xp10, which infects the rice pathogen Xanthomonas oryzae. Xp10 is an unusual bacteriophage, which has morphology and genome organization that most closely
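
    The saturating binding probability the abstract refers to can be illustrated with a short sketch: a site's energy is scored additively from an energy matrix and converted to an occupancy via a Fermi-Dirac-like form p = 1/(1 + exp((E - mu)/kT)), which saturates for strong sites instead of growing without bound. The matrix values and chemical potential below are invented for illustration and are not the thesis's fitted parameters.

      # Toy additive energy matrix and saturating binding probability.
      import math

      # Energy contribution (in kT units) of each base at each site position.
      ENERGY_MATRIX = [
          {"A": 0.0, "C": 1.2, "G": 1.5, "T": 0.8},
          {"A": 1.0, "C": 0.0, "G": 1.3, "T": 1.1},
          {"A": 0.9, "C": 1.4, "G": 0.0, "T": 1.2},
          {"A": 1.1, "C": 0.7, "G": 1.6, "T": 0.0},
      ]
      MU = 2.0   # chemical potential (assumed); sets the binding threshold

      def binding_probability(site: str, mu: float = MU) -> float:
          energy = sum(col[base] for col, base in zip(ENERGY_MATRIX, site))
          return 1.0 / (1.0 + math.exp(energy - mu))   # energies already in kT units

      for site in ("ACGT", "AAAA", "GGGG"):
          print(site, round(binding_probability(site), 3))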

  10. The Austronesian Basic Vocabulary Database: from bioinformatics to lexomics.

    PubMed

    Greenhill, Simon J; Blust, Robert; Gray, Russell D

    2008-01-01

    Phylogenetic methods have revolutionised evolutionary biology and have recently been applied to studies of linguistic and cultural evolution. However, the basic comparative data on the languages of the world required for these analyses is often widely dispersed in hard to obtain sources. Here we outline how our Austronesian Basic Vocabulary Database (ABVD) helps remedy this situation by collating wordlists from over 500 languages into one web-accessible database. We describe the technology underlying the ABVD and discuss the benefits that an evolutionary bioinformatic approach can provide. These include facilitating computational comparative linguistic research, answering questions about human prehistory, enabling syntheses with genetic data, and safe-guarding fragile linguistic information.

  11. Technosciences in Academia: Rethinking a Conceptual Framework for Bioinformatics Undergraduate Curricula

    NASA Astrophysics Data System (ADS)

    Symeonidis, Iphigenia Sofia

    This paper aims to elucidate guiding concepts for the design of powerful undergraduate bioinformatics degrees which will lead to a conceptual framework for the curriculum. "Powerful" here should be understood as having truly bioinformatics objectives rather than enriching the existing computer science or life science degrees on which bioinformatics degrees are often based. As such, the conceptual framework will be one which aims to demonstrate intellectual honesty in regards to the field of bioinformatics. A synthesis/conceptual analysis approach was followed, as elaborated by Hurd (1983). The approach takes into account the following: bioinformatics educational needs and goals as expressed by different authorities, five undergraduate bioinformatics degree case studies, educational implications of bioinformatics as a technoscience, and approaches to curriculum design promoting interdisciplinarity and integration. Given these considerations, guiding concepts emerged and a conceptual framework was elaborated. The practice of bioinformatics was given a closer look, which led to defining tool-integration skills and tool-thinking capacity as crucial areas of the bioinformatics activities spectrum. It was argued, finally, that a process-based curriculum as a variation of a concept-based curriculum (where the concepts are processes) might be more conducive to the teaching of bioinformatics, given a foundational first year of integrated science education as envisioned by Bialek and Botstein (2004). Furthermore, the curriculum design needs to define new avenues of communication and learning which bypass the traditional disciplinary barriers of academic settings, as undertaken by Tador and Tidmor (2005) for graduate studies.

  12. Bioinformatics visualization and integration with open standards: the Bluejay genomic browser.

    PubMed

    Turinsky, Andrei L; Ah-Seng, Andrew C; Gordon, Paul M K; Stromer, Julie N; Taschuk, Morgan L; Xu, Emily W; Sensen, Christoph W

    2005-01-01

    We have created a new Java-based integrated computational environment for the exploration of genomic data, called Bluejay. The system is capable of using almost any XML file related to genomic data. Non-XML data sources can be accessed via a proxy server. Bluejay has several features, which are new to Bioinformatics, including an unlimited semantic zoom capability, coupled with Scalable Vector Graphics (SVG) outputs; an implementation of the XLink standard, which features access to MAGPIE Genecards as well as any BioMOBY service accessible over the Internet; and the integration of gene chip analysis tools with the functional assignments. The system can be used as a signed web applet, Web Start, and a local stand-alone application, with or without connection to the Internet. It is available free of charge and as open source via http://bluejay.ucalgary.ca. PMID:15972014
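
    A toy sketch of the XML-to-SVG idea behind such a browser: parse gene features from a small XML document and emit scaled SVG rectangles. The XML schema and drawing style here are invented; Bluejay consumes far richer genomic XML and produces full semantic-zoom SVG.

      # Parse toy gene features from XML and write a simple SVG track.
      import xml.etree.ElementTree as ET

      XML = """<genome length="5000">
        <gene name="geneA" start="200"  end="1400"/>
        <gene name="geneB" start="1800" end="3100"/>
      </genome>"""

      root = ET.fromstring(XML)
      scale = 800 / int(root.get("length"))      # map base pairs to an 800 px track

      parts = ['<svg xmlns="http://www.w3.org/2000/svg" width="800" height="60">']
      for gene in root.findall("gene"):
          x = int(gene.get("start")) * scale
          w = (int(gene.get("end")) - int(gene.get("start"))) * scale
          parts.append(f'<rect x="{x:.1f}" y="20" width="{w:.1f}" height="20" fill="steelblue"/>')
          parts.append(f'<text x="{x:.1f}" y="15" font-size="10">{gene.get("name")}</text>')
      parts.append("</svg>")

      with open("genes.svg", "w") as fh:
          fh.write("\n".join(parts))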

  13. FY02 CBNP Annual Report Input: Bioinformatics Support for CBNP Research and Deployments

    SciTech Connect

    Slezak, T; Wolinsky, M

    2002-10-31

    The events of FY01 dynamically reprogrammed the objectives of the CBNP bioinformatics support team to meet rapidly changing Homeland Defense needs and requests from other agencies for assistance: use computational techniques to determine potential unique DNA signature candidates for microbial and viral pathogens of interest to CBNP researchers and to collaborating partner agencies such as the Centers for Disease Control and Prevention (CDC), U.S. Department of Agriculture (USDA), Department of Defense (DOD), and Food and Drug Administration (FDA); develop effective electronic screening measures for DNA signatures to reduce the cost and time of wet-bench screening; build a comprehensive system for tracking the development and testing of DNA signatures; build a chain-of-custody sample tracking system for field deployment of the DNA signatures as part of the BASIS project; and provide computational tools for use by CBNP Biological Foundations researchers.
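
    The core of the signature-candidate idea can be sketched as a simple set operation: keep the k-mers present in a target genome but absent from all background genomes. Real signature pipelines add specificity and sensitivity screening, melting-temperature and primer-design constraints; the sequences and k value below are toy values.

      # Toy "unique k-mer" signature candidate search.
      def kmers(seq: str, k: int) -> set[str]:
          return {seq[i:i + k] for i in range(len(seq) - k + 1)}

      def signature_candidates(target: str, backgrounds: list[str], k: int = 8) -> set[str]:
          background_kmers = set()
          for genome in backgrounds:
              background_kmers |= kmers(genome, k)
          return kmers(target, k) - background_kmers

      target = "ACGTACGTTTGACCGTAGGCTAACGT"
      backgrounds = ["ACGTACGTTTGACC", "GTAGGCTTTACGTA"]
      print(sorted(signature_candidates(target, backgrounds)))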

  14. Built environment and elderly population health: a comprehensive literature review.

    PubMed

    Garin, Noe; Olaya, Beatriz; Miret, Marta; Ayuso-Mateos, Jose Luis; Power, Michael; Bucciarelli, Paola; Haro, Josep Maria

    2014-01-01

    Global population aging over recent years has been linked to poorer health outcomes and higher healthcare expenditure. Policies focusing on healthy aging are currently being developed but a complete understanding of health determinants is needed to guide these efforts. The built environment and other external factors have been added to the International Classification of Functioning as important determinants of health and disability. Although the relationship between the built environment and health has been widely examined in working age adults, research focusing on elderly people is relatively recent. In this review, we provide a comprehensive synthesis of the evidence on the built environment and health in the elderly.

  15. Introduction of electrodehydrators with built-in jet mixers

    SciTech Connect

    Gershuni, S.S.; Baimbetov, A.M.; Idrisova, T.S.; Makhov, A.F.

    1985-09-01

    This paper describes an effective technique for crude oil desalting: recirculation of water within the electrodehydrator by means of built-in jet mixers. Vertical electrodehydrators with built-in jet mixers have been tested and approved at the Novo-Ufa refinery. The design and operation of the vessel are described. Results from analyses of the oil during the test period are summarized. Retrofitting of electrodehydrators with built-in jet mixers provided increased capacity; the consumption of water and electric power in desalting was cut in half, while oil loss in the electric desalting units was reduced substantially.

  16. Built Environment and Elderly Population Health: A Comprehensive Literature Review

    PubMed Central

    Garin, Noe; Olaya, Beatriz; Miret, Marta; Ayuso-Mateos, Jose Luis; Power, Michael; Bucciarelli, Paola; Haro, Josep Maria

    2014-01-01

    Global population aging over recent years has been linked to poorer health outcomes and higher healthcare expenditure. Policies focusing on healthy aging are currently being developed but a complete understanding of health determinants is needed to guide these efforts. The built environment and other external factors have been added to the International Classification of Functioning as important determinants of health and disability. Although the relationship between the built environment and health has been widely examined in working age adults, research focusing on elderly people is relatively recent. In this review, we provide a comprehensive synthesis of the evidence on the built environment and health in the elderly. PMID:25356084

  17. Web services at the European Bioinformatics Institute-2009

    PubMed Central

    Mcwilliam, Hamish; Valentin, Franck; Goujon, Mickael; Li, Weizhong; Narayanasamy, Menaka; Martin, Jenny; Miyar, Teresa; Lopez, Rodrigo

    2009-01-01

    The European Bioinformatics Institute (EMBL-EBI) has been providing access to mainstream databases and tools in bioinformatics since 1997. In addition to the traditional web form based interfaces, APIs exist for core data resources such as EMBL-Bank, Ensembl, UniProt, InterPro, PDB and ArrayExpress. These APIs are based on Web Services (SOAP/REST) interfaces that allow users to systematically access databases and analytical tools. From the user's point of view, these Web Services provide the same functionality as the browser-based forms. However, using the APIs frees the user from web-page constraints and is ideal for the analysis of large batches of data, performing text-mining tasks and the casual or systematic evaluation of mathematical models in regulatory networks. Furthermore, these services are widespread and easy to use, requiring no prior knowledge of the technology and no more than basic experience in programming. In the following we wish to inform of new and updated services as well as briefly describe planned developments to be made available during the course of 2009–2010. PMID:19435877
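
    In the REST style mentioned above, a single database record can be fetched with a few lines of Python; the dbfetch URL pattern below reflects the EBI service as commonly documented at the time and should be verified against current EBI documentation before use.

      # REST-style record retrieval with the requests library.
      import requests

      URL = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"
      params = {"db": "uniprotkb", "id": "P12345", "format": "fasta", "style": "raw"}

      resp = requests.get(URL, params=params, timeout=30)
      resp.raise_for_status()
      print(resp.text)     # FASTA record for UniProtKB entry P12345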

  18. MOWServ: a web client for integration of bioinformatic resources

    PubMed Central

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

    The productivity of any scientist is affected by the cumbersome, tedious and time-consuming tasks required to make heterogeneous web services compatible enough to be useful in research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. Service discovery has been greatly enhanced by the Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of these have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  19. The MPI Bioinformatics Toolkit for protein sequence analysis

    PubMed Central

    Biegert, Andreas; Mayer, Christian; Remmert, Michael; Söding, Johannes; Lupas, Andrei N.

    2006-01-01

    The MPI Bioinformatics Toolkit is an interactive web service which offers access to a great variety of public and in-house bioinformatics tools. They are grouped into different sections that support sequence searches, multiple alignment, secondary and tertiary structure prediction and classification. Several public tools are offered in customized versions that extend their functionality. For example, PSI-BLAST can be run against regularly updated standard databases, customized user databases or selectable sets of genomes. Another tool, Quick2D, integrates the results of various secondary structure, transmembrane and disorder prediction programs into one view. The Toolkit provides a friendly and intuitive user interface with an online help facility. As a key feature, various tools are interconnected so that the results of one tool can be forwarded to other tools. One could run PSI-BLAST, parse out a multiple alignment of selected hits and send the results to a cluster analysis tool. The Toolkit framework and the tools developed in-house will be packaged and freely available under the GNU Lesser General Public Licence (LGPL). The Toolkit can be accessed online. PMID:16845021

  20. MOWServ: a web client for integration of bioinformatic resources.

    PubMed

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J; Claros, M Gonzalo; Trelles, Oswaldo

    2010-07-01

    The productivity of any scientist is affected by the cumbersome, tedious and time-consuming tasks required to make heterogeneous web services compatible enough to be useful in research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. Service discovery has been greatly enhanced by the Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of these have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/.