Science.gov

Sample records for bioinformatics system built

  1. ebTrack: an environmental bioinformatics system built upon ArrayTrack™

    PubMed Central

    Chen, Minjun; Martin, Jackson; Fang, Hong; Isukapalli, Sastry; Georgopoulos, Panos G; Welsh, William J; Tong, Weida

    2009-01-01

ebTrack is being developed as an integrated bioinformatics system for environmental research and analysis; it addresses the integration, curation, management, first-level analysis, and interpretation of environmental and toxicological data from diverse sources. It is based on enhancements to the US FDA-developed ArrayTrack™ system, through additional analysis modules for gene expression data as well as through incorporation of, and linkages to, modules for the analysis of proteomic and metabonomic datasets that include tandem mass spectra. ebTrack uses a client-server architecture with the free and open-source PostgreSQL as its database engine, and Java tools for the user interface, analysis, visualization, and web-based deployment. Several predictive tools that are critical for environmental health research are currently supported in ebTrack, including Significance Analysis of Microarray (SAM). Furthermore, new tools are under continuous integration, and interfaces to environmental health risk analysis tools are being developed in order to make ebTrack widely usable. These health risk analysis tools include the Modeling ENvironment for TOtal Risk studies (MENTOR) for source-to-dose exposure modeling and the DOse Response Information ANalysis system (DORIAN) for health outcome modeling. The design of ebTrack is presented in detail, and the steps involved in its use are summarized through an illustrative application. PMID:19278561
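The SAM method mentioned above ranks genes by a modified t-statistic in which a small positive constant is added to the denominator so that genes with tiny variance do not dominate the ranking. A minimal sketch of that idea follows; the data, the fixed `s0`, and the pooled-error formula are illustrative simplifications (real SAM estimates `s0` from the data and assesses significance by permutation):

```python
from statistics import mean, stdev

def sam_statistic(group1, group2, s0=0.1):
    """SAM-style modified t-statistic: the small constant s0 is added
    to the pooled standard error. Illustrative only; s0=0.1 is an
    arbitrary choice, not the SAM estimator."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    # Pooled variance across the two groups (classical two-sample form)
    sp2 = (((n1 - 1) * stdev(group1) ** 2 + (n2 - 1) * stdev(group2) ** 2)
           / (n1 + n2 - 2))
    se = (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return (m1 - m2) / (se + s0)

# Example: log-expression of one gene in treated vs. control samples
treated = [5.1, 5.4, 5.0, 5.3]
control = [4.0, 4.2, 3.9, 4.1]
d = sam_statistic(treated, control)
```

In a real analysis this statistic would be computed per gene and compared against its permutation-based null distribution.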

  2. Taking Bioinformatics to Systems Medicine.

    PubMed

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.

  3. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

Bioinformatic analyses are becoming increasingly complex, owing both to the growing number of steps required to process the data and to the proliferation of methods available for each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and are therefore difficult to use for exploratory analyses that require systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY), which includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons over the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next-generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu.
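The backwards-chaining idea behind a system like BETSY can be illustrated with a toy planner: rules state which tool produces which data type from which inputs, and the engine chains backwards from a goal data type to data already on hand, emitting tool invocations in dependency order. All tool and data-type names below are invented; this is not BETSY's actual rule format:

```python
# Toy backward-chaining workflow planner. Each rule maps a produced
# data type to (tool, required input types); names are hypothetical.
RULES = {
    "aligned_reads": ("aligner", ["raw_reads", "genome_index"]),
    "genome_index":  ("indexer", ["reference_genome"]),
    "gene_counts":   ("counter", ["aligned_reads", "annotation"]),
    "de_gene_list":  ("de_test", ["gene_counts"]),
}
AVAILABLE = {"raw_reads", "reference_genome", "annotation"}  # starting data

def plan(goal, steps=None):
    """Chain backwards from the goal to available data, collecting
    tool invocations in dependency order."""
    if steps is None:
        steps = []
    if goal in AVAILABLE:
        return steps
    tool, inputs = RULES[goal]      # KeyError means the goal is unreachable
    for inp in inputs:
        plan(inp, steps)
    if tool not in steps:
        steps.append(tool)
    return steps

workflow = plan("de_gene_list")
```

Here `plan("de_gene_list")` yields the tools in executable order: the indexer and aligner run before counting, which runs before the differential-expression test.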

  4. Bioinformatics and systems biology research update from the 15th International Conference on Bioinformatics (InCoB2016).

    PubMed

    Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba

    2016-12-22

    The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.

  5. Systems Biology: The Next Frontier for Bioinformatics

    PubMed Central

    Likić, Vladimir A.; McConville, Malcolm J.; Lithgow, Trevor; Bacic, Antony

    2010-01-01

    Biochemical systems biology augments more traditional disciplines, such as genomics, biochemistry and molecular biology, by championing (i) mathematical and computational modeling; (ii) the application of traditional engineering practices in the analysis of biochemical systems; and, increasingly in the past decade, (iii) the use of near-comprehensive data sets derived from 'omics platform technologies, in particular technologies "downstream" of genome sequencing, including transcriptomics, proteomics and metabolomics. Future progress in understanding biological principles will increasingly depend on the development of temporal and spatial analytical techniques that provide high-resolution data for systems analyses. To date, the most successful strategies have involved (a) quantitative measurements of cellular components at the mRNA, protein and metabolite levels, as well as of in vivo metabolic reaction rates; (b) the development of mathematical models that integrate biochemical knowledge with the information generated by high-throughput experiments; and (c) applications to microbial organisms. The inevitable role bioinformatics plays in modern systems biology makes the mathematical and computational sciences an equal partner to analytical and experimental biology. Furthermore, mathematical and computational models are expected to become increasingly prevalent representations of our knowledge about specific biochemical systems. PMID:21331364
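Strategy (b) above, mathematical models integrated with quantitative measurements, can be illustrated by the simplest such model: mass-action ODEs for one gene's mRNA and protein, integrated with forward Euler. The rate constants below are invented for illustration, not taken from any dataset:

```python
# Two-state ODE model: dm/dt = k_m - d_m*m (transcription, mRNA decay)
#                      dp/dt = k_p*m - d_p*p (translation, protein decay)
# Integrated with forward Euler; all parameters are hypothetical.
def simulate(k_m=2.0, d_m=0.5, k_p=1.0, d_p=0.1, dt=0.01, t_end=100.0):
    m, p, t = 0.0, 0.0, 0.0
    while t < t_end:
        dm = k_m - d_m * m
        dp = k_p * m - d_p * p
        m += dm * dt
        p += dp * dt
        t += dt
    return m, p

m_final, p_final = simulate()
# Analytically, the steady state is m* = k_m/d_m = 4 and p* = k_p*m*/d_p = 40,
# which the simulation approaches by t = 100.
```

Real systems-biology models couple many such equations and fit the rate constants to the omics measurements described above.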

  6. The Online Bioinformatics Resources Collection at the University of Pittsburgh Health Sciences Library System--a one-stop gateway to online bioinformatics databases and software tools.

    PubMed

    Chen, Yi-Bu; Chattopadhyay, Ansuman; Bergen, Phillip; Gadd, Cynthia; Tannery, Nancy

    2007-01-01

    To bridge the gap between the rising information needs of biological and medical researchers and the rapidly growing number of online bioinformatics resources, we have created the Online Bioinformatics Resources Collection (OBRC) at the Health Sciences Library System (HSLS) at the University of Pittsburgh. The OBRC, containing 1542 major online bioinformatics databases and software tools, was constructed using the HSLS content management system built on the Zope Web application server. To enhance the output of search results, we further implemented the Vivísimo Clustering Engine, which automatically organizes the search results into categories created dynamically based on the textual information of the retrieved records. As the largest online collection of its kind and the only one with advanced search results clustering, OBRC is aimed at becoming a one-stop guided information gateway to the major bioinformatics databases and software tools on the Web. OBRC is available at the University of Pittsburgh's HSLS Web site (http://www.hsls.pitt.edu/guides/genetics/obrc).

  7. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    PubMed Central

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  8. Bioinformatics training: selecting an appropriate learning content management system--an example from the European Bioinformatics Institute.

    PubMed

    Wright, Victoria Ann; Vaughan, Brendan W; Laurent, Thomas; Lopez, Rodrigo; Brooksbank, Cath; Schneider, Maria Victoria

    2010-11-01

    Today's molecular life scientists are well educated in the emerging experimental tools of their trade, but when it comes to training on the myriad of resources and tools for dealing with biological data, a less ideal situation emerges. Often bioinformatics users receive no formal training on how to make the most of the bioinformatics resources and tools available in the public domain. The European Bioinformatics Institute, which is part of the European Molecular Biology Laboratory (EMBL-EBI), holds the world's most comprehensive collection of molecular data, and training the research community to exploit this information is embedded in the EBI's mission. We have evaluated eLearning, in parallel with face-to-face courses, as a means of training users of our data resources and tools. We anticipate that eLearning will become an increasingly important vehicle for delivering training to our growing user base, so we have undertaken an extensive review of Learning Content Management Systems (LCMSs). Here, we describe the process that we used, which considered the requirements of trainees, trainers and systems administrators, as well as taking into account our organizational values and needs. This review describes the literature survey, user discussions and scripted platform testing that we performed to narrow down our choice of platform from 36 to a single platform. We hope that it will serve as guidance for others who are seeking to incorporate eLearning into their bioinformatics training programmes.

  9. Using Attributes of Natural Systems to Plan the Built Environment

    EPA Science Inventory

    The concept of 'protection' is possible only before something is lost; however, development of the built environment to meet human needs also compromises the environmental systems that sustain human life. Because maintaining an environment that is able to sustain human life requi...

  10. Systems biology and bioinformatics in aging research: a workshop report.

    PubMed

    Fuellen, Georg; Dengjel, Jörn; Hoeflich, Andreas; Hoeijemakers, Jan; Kestler, Hans A; Kowald, Axel; Priebe, Steffen; Rebholz-Schuhmann, Dietrich; Schmeck, Bernd; Schmitz, Ulf; Stolzing, Alexandra; Sühnel, Jürgen; Wuttke, Daniel; Vera, Julio

    2012-12-01

    In an "aging society," health span extension is most important. As in 2010, the talks in this series of meetings in Rostock-Warnemünde demonstrated that aging is a highly complex process, in which computational work is most useful for gaining insights and for finding interventions that counter aging and prevent or counteract aging-related diseases. The specific topics of this year's meeting, entitled "RoSyBA: Rostock Symposium on Systems Biology and Bioinformatics in Ageing Research," were primarily related to "Cancer and Aging," with a focus on work funded by the German Federal Ministry of Education and Research (BMBF). The next meeting in the series, scheduled for September 20-21, 2013, will focus on the use of ontologies for computational research into aging, stem cells, and cancer. Promoting knowledge formalization is also at the core of the set of proposed action items concluding this report.

  11. Transformers: Shape-Changing Space Systems Built with Robotic Textiles

    NASA Technical Reports Server (NTRS)

    Stoica, Adrian

    2013-01-01

    Prior approaches to transformer-like robots had only very limited success. They suffered from a lack of reliability, an inability to integrate large surfaces, and only modest changes in overall shape. Robots can now be built from two-dimensional (2D) layers of robotic fabric. These transformers, a new kind of robotic space system, are dramatically different from current systems in at least two ways. First, the entire transformer is built from a single, thin sheet: a flexible layer of robotic fabric (ro-fabric), or robotic textile (ro-textile). Second, the ro-textile layer folds to a small volume and self-unfolds to adapt shape and function to mission phases.

  12. Graphics processing units in bioinformatics, computational biology and systems biology.

    PubMed

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

    Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), thereby limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.

  13. Stroke of GENEous: A Tool for Teaching Bioinformatics to Information Systems Majors

    ERIC Educational Resources Information Center

    Tikekar, Rahul

    2006-01-01

    A tool for teaching bioinformatics concepts to information systems majors is described. Biological data are available from numerous sources and a good knowledge of biology is needed to understand much of these data. As the subject of bioinformatics gains popularity among computer and information science course offerings, it will become essential…

  14. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis

    PubMed Central

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475

  16. Systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

    PubMed

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-10-11

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

  18. Bioinformatics for transporter pharmacogenomics and systems biology: data integration and modeling with UML.

    PubMed

    Yan, Qing

    2010-01-01

    Bioinformatics is rational study at an abstract level, and it can influence the way we understand biomedical facts and the way we apply biomedical knowledge. Bioinformatics faces challenges in finding the relationships between genetic structures and functions, analyzing genotype-phenotype associations, and understanding gene-environment interactions at the systems level. One of the most important issues in bioinformatics is data integration. The data integration methods introduced here can be used to organize and integrate both public and in-house data. Given the volume and complexity of the data, computational decision support is essential for integrative transporter studies in pharmacogenomics, nutrigenomics, epigenetics, and systems biology. For the development of such a decision support system, object-oriented (OO) models can be constructed using the Unified Modeling Language (UML). A methodology is developed to build biomedical models at different system levels and to construct corresponding UML diagrams, including use case diagrams, class diagrams, and sequence diagrams. Through OO modeling with UML, the problems of transporter pharmacogenomics and systems biology can be approached from different angles with a more complete view, which may greatly enhance efforts in effective drug discovery and development. Bioinformatics resources for membrane transporters, along with general bioinformatics databases and tools that are frequently used in transporter studies, are also collected here. An informatics decision support system based on the models presented here is available at http://www.pharmtao.com/transporter. The methodology developed here can also be used for other biomedical fields.
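As an illustration of the OO modeling described above, a fragment of a hypothetical UML class diagram for transporters, a Transporter class associated with its encoding Gene and its Substrates, might translate to code as follows. The class and attribute names are invented for this sketch; the P-glycoprotein/ABCB1 example data are standard textbook facts:

```python
# Python rendering of a small, hypothetical UML class diagram:
# Transporter --(1..1)--> Gene, Transporter --(0..*)--> Substrate.
from dataclasses import dataclass, field

@dataclass
class Gene:
    symbol: str
    chromosome: str

@dataclass
class Substrate:
    name: str

@dataclass
class Transporter:
    name: str
    gene: Gene                                   # one-to-one association
    substrates: list = field(default_factory=list)  # one-to-many association

    def transports(self, substrate_name: str) -> bool:
        return any(s.name == substrate_name for s in self.substrates)

pgp = Transporter(
    name="P-glycoprotein",
    gene=Gene(symbol="ABCB1", chromosome="7"),
    substrates=[Substrate("digoxin"), Substrate("loperamide")],
)
```

In UML terms, each dataclass corresponds to a class box and each typed attribute to an association edge; sequence diagrams would then describe interactions such as the `transports` query.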

  19. [From bioinformatics to systems biology: an account of the 12th International Conference on Intelligent Systems for Molecular Biology].

    PubMed

    Ivakhno, S S

    2004-01-01

    This paper reviews the 12th International Conference on Intelligent Systems for Molecular Biology/Third European Conference on Computational Biology 2004, held in Glasgow, UK, from July 31 to August 4. A number of talks, papers and software demos from the conference in bioinformatics, genomics, proteomics, transcriptomics and systems biology are described. Recent applications of liquid chromatography-tandem mass spectrometry, comparative genomics and DNA microarrays are presented, along with a discussion of bioinformatics curricula in higher education.

  20. IFPA meeting 2016 workshop report I: Genomic communication, bioinformatics, trophoblast biology and transport systems.

    PubMed

    Albrecht, Christiane; Baker, Julie C; Blundell, Cassidy; Chavez, Shawn L; Carbone, Lucia; Chamley, Larry; Hannibal, Roberta L; Illsley, Nick; Kurre, Peter; Laurent, Louise C; McKenzie, Charles; Morales-Prieto, Diana; Pantham, Priyadarshini; Paquette, Alison; Powell, Katie; Price, Nathan; Rao, Balaji M; Sadovsky, Yoel; Salomon, Carlos; Tuteja, Geetu; Wilson, Samantha; O'Tierney-Ginn, P F

    2017-01-11

    Workshops are an important part of the IFPA annual meeting as they allow for discussion of specialized topics. At IFPA meeting 2016 there were twelve themed workshops, four of which are summarized in this report. These workshops covered innovative technologies applied to new and traditional areas of placental research: 1) genomic communication; 2) bioinformatics; 3) trophoblast biology and pathology; 4) placental transport systems.

  21. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    PubMed Central

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase, which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data, presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. This review gives a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of studies that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  22. Improving data workflow systems with cloud services and use of open data for bioinformatics research.

    PubMed

    Karim, Md Rezaul; Michel, Audrey; Zappa, Achille; Baranov, Pavel; Sahay, Ratnesh; Rebholz-Schuhmann, Dietrich

    2017-04-16

    Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As the standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement to DWFS would reduce overhead costs and accelerate the progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which then imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we will analyze the existing DWFS with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research with particular consideration to using cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFS for the bioinformatics research community. © The Author 2017. Published by Oxford University Press.
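The core DWFS idea described above, combining components for data access and analytics into a shareable pipeline, can be reduced to a few lines of code; production systems add provenance tracking, scheduling, and cloud execution on top of this. The step names and data below are invented for illustration:

```python
# Toy data-workflow composition: steps are plain functions chained into
# a pipeline. Real DWFSs wrap each step with provenance and scheduling.
def make_pipeline(*steps):
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

# Hypothetical steps operating on a list of sequencing "reads" (strings)
def trim(reads):       # keep at most the first 50 bases of each read
    return [r[:50] for r in reads]

def dedup(reads):      # drop duplicate reads, return in sorted order
    return sorted(set(reads))

def uppercase(reads):  # normalize base letters to upper case
    return [r.upper() for r in reads]

pipeline = make_pipeline(trim, dedup, uppercase)
result = pipeline(["acgtacgt", "acgtacgt", "ttggccaa"])
```

Sharing the pipeline then amounts to sharing the ordered list of components, which is essentially what a DWFS serializes, executes near the data, and annotates with semantic metadata.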

  23. STRUCTURELAB: a heterogeneous bioinformatics system for RNA structure analysis.

    PubMed

    Shapiro, B A; Kasprzak, W

    1996-08-01

    STRUCTURELAB is a computational system that has been developed to permit the use of a broad array of approaches for the analysis of the structure of RNA. The goal of the development is to provide a large set of tools that can be well integrated with experimental biology to aid in the process of the determination of the underlying structure of RNA sequences. The approach taken views the structure determination problem as one of dealing with a database of many computationally generated structures and provides the capability to analyze this data set from different perspectives. Many algorithms are integrated into one system that also utilizes a heterogeneous computing approach permitting the use of several computer architectures to help solve the posed problems. These different computational platforms make it relatively easy to incorporate currently existing programs as well as newly developed algorithms and to best match these algorithms to the appropriate hardware. The system has been written in Common Lisp running on SUN or SGI Unix workstations, and it utilizes a network of participating machines defined in reconfigurable tables. A window-based interface makes this heterogeneous environment as transparent to the user as possible.

  4. Advances in Omics and Bioinformatics Tools for Systems Analyses of Plant Functions

    PubMed Central

    Mochida, Keiichi; Shinozaki, Kazuo

    2011-01-01

    Omics and bioinformatics are essential to understanding the molecular systems that underlie various plant functions. Recent game-changing sequencing technologies have revitalized sequencing approaches in genomics and have produced opportunities for various emerging analytical applications. Driven by technological advances, several new omics layers such as the interactome, epigenome and hormonome have emerged. Furthermore, in several plant species, the development of omics resources has progressed to address particular biological properties of individual species. Integration of knowledge from omics-based research is an emerging issue as researchers seek to identify significance, gain biological insights and promote translational research. From these perspectives, we provide this review of the emerging aspects of plant systems research based on omics and bioinformatics analyses together with their associated resources and technological advances. PMID:22156726

  5. Building a Built-in Evaluation System: A Case in Point.

    ERIC Educational Resources Information Center

    Bhola, H. S.

    This paper describes a system of built-in, or internal, evaluation used within the Botswana National Literacy Program (NLP). Launched in 1981 and targeted toward illiterate adults and youth aged 10 years and older, the program aims at eradicating illiteracy in Botswana by 1986. The built-in evaluation was implemented in 1983, using program…

  6. Early Warning System: a juridical notion to be built

    NASA Astrophysics Data System (ADS)

    Lucarelli, A.

    2007-12-01

    Early warning systems (EWS) are becoming effective tools for real-time mitigation of the harmful effects arising from widely different hazards, ranging from famine to financial crises, malicious attacks, industrial accidents, natural catastrophes, etc. Early warning of natural catastrophic events allows the implementation of both alert systems and real-time prevention actions for the safety of people and goods exposed to the risk. However, the effective implementation of early warning methods is hindered by the lack of a specific juridical framework. From a juridical point of view, in fact, EWS and, in general, all prevention activities need careful regulation, mainly with regard to responsibility and possible compensation for damage caused by the implemented actions. A preventive alarm, in fact, actively affects infrastructures in charge of public services, which in turn will suffer suspensions or interruptions because of the early warning actions. Hence the need for accurate normative references concerning the types of structures or infrastructures affected by readiness activities; the progressive order of suspension of public services; the duration of these suspensions; the bodies or administrations competent to take such decisions; the actors responsible for the consequences of false, missed or delayed alarms; the mechanisms of compensation for damage; the insurance systems; etc. In the European Union, EWS are often cited as preventive methods of risk mitigation. Nevertheless, a juridical notion of EWS of general use is not available. In fact, EW is a concept that finds application in many different fields, each of which requires specific adaptations, and may concern subjects for which the European Union does not have exclusive competence, as it may be the responsibility of the member states to provide the necessary regulations. As for the juridical arrangement of the EWS, this must be

  7. A patient workflow management system built on guidelines.

    PubMed Central

    Dazzi, L.; Fassino, C.; Saracco, R.; Quaglini, S.; Stefanelli, M.

    1997-01-01

    To provide high-quality, shared, and distributed medical care, clinical and organizational issues need to be integrated. This work describes a methodology for developing a Patient Workflow Management System, based on a detailed model of both the medical work process and the organizational structure. We assume that the medical work process is represented through clinical practice guidelines, and that an ontological description of the organization is available. Thus, we developed tools 1) to acquire the medical knowledge contained in a guideline, 2) to translate the derived formalized guideline into a computational formalism, namely a Petri net, and 3) to maintain different representation levels. The high-level representation guarantees that the patient workflow follows the guideline prescriptions, while the low-level representation takes into account the characteristics of the specific organization and allows allocating resources for managing a specific patient in daily practice. PMID:9357606
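
The abstract above describes compiling guideline knowledge into a Petri net whose token flow enforces the prescribed care path. As a hedged illustration only (the places, transitions, and toy care path below are invented, not taken from the paper), a marked Petri net with a simple firing rule can be sketched as:

```python
# Minimal Petri net execution sketch. Places hold tokens; a transition
# is enabled when every input place holds a token, and firing it moves
# tokens from inputs to outputs.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A toy guideline fragment: a patient awaiting triage is examined, then admitted.
net = PetriNet({"awaiting_triage": 1})
net.add_transition("examine", ["awaiting_triage"], ["examined"])
net.add_transition("admit", ["examined"], ["admitted"])
net.fire("examine")
net.fire("admit")
print(net.marking["admitted"])   # 1 token: patient admitted
```

The firing rule is what gives the high-level representation its guarantee: a step that the guideline does not enable simply cannot fire.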

  8. A practical, bioinformatic workflow system for large data sets generated by next-generation sequencing.

    PubMed

    Cantacessi, Cinzia; Jex, Aaron R; Hall, Ross S; Young, Neil D; Campbell, Bronwyn E; Joachim, Anja; Nolan, Matthew J; Abubucker, Sahar; Sternberg, Paul W; Ranganathan, Shoba; Mitreva, Makedonka; Gasser, Robin B

    2010-09-01

    Transcriptomics (at the level of single cells, tissues and/or whole organisms) underpins many fields of biomedical science, from understanding basic cellular function in model organisms, to the elucidation of the biological events that govern the development and progression of human diseases, and the exploration of the mechanisms of survival, drug resistance and virulence of pathogens. Next-generation sequencing (NGS) technologies are contributing to a massive expansion of transcriptomics in all fields and are reducing the cost, time and performance barriers presented by conventional approaches. However, bioinformatic tools for the analysis of the sequence data sets produced by these technologies can be daunting to researchers with limited or no expertise in bioinformatics. Here, we constructed a semi-automated bioinformatic workflow system and critically evaluated it for the analysis and annotation of large-scale sequence data sets generated by NGS. We demonstrated its utility for the exploration of differences in the transcriptomes among various stages and both sexes of an economically important parasitic worm (Oesophagostomum dentatum) as well as the prediction and prioritization of essential molecules (including GTPases, protein kinases and phosphatases) as novel drug target candidates. This workflow system provides a practical tool for the assembly, annotation and analysis of NGS data sets, including for researchers with limited bioinformatic expertise. The custom-written Perl, Python and Unix shell scripts used can be readily modified or adapted to suit many different applications. This system is now utilized routinely for the analysis of data sets from pathogens of major socio-economic importance and can, in principle, be applied to transcriptomics data sets from any organism.
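
The semi-automated workflow idea above, stages chained so that each consumes the previous stage's output, with scripts easy to modify, can be sketched in miniature. The stage names and their toy logic here are hypothetical stand-ins for real assembly and annotation tools:

```python
# A minimal step-chained workflow sketch: each stage is a plain function,
# and the runner threads the data through them in order. Swapping or
# adding a stage means editing the `steps` list, mirroring how the
# custom scripts in the paper can be modified for new applications.

def assemble(reads):
    # Toy "assembly": concatenate reads end to end (real assemblers
    # resolve overlaps; this is illustration only).
    return "".join(reads)

def annotate(contig):
    # Toy "annotation": report every ATG as a putative start codon.
    return [i for i in range(len(contig) - 2) if contig[i:i + 3] == "ATG"]

def run_pipeline(data, steps):
    for step in steps:
        data = step(data)   # each stage consumes the previous output
    return data

starts = run_pipeline(["GGATG", "CCATGA"], [assemble, annotate])
print(starts)   # [2, 7]
```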

  9. Role of remote sensing, geographical information system (GIS) and bioinformatics in kala-azar epidemiology

    PubMed Central

    Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep

    2011-01-01

    Visceral leishmaniasis or kala-azar is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction also has to be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and at the field level. Computational approaches like remote sensing, geographical information system (GIS) and bioinformatics are the key resources for the detection and distribution of vectors, patterns, ecological and environmental factors and genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been more appropriately utilized in determining the causes of visceral leishmaniasis and in designing strategies for preventing the disease from spreading from one region to another. PMID:23554714

  10. Quantitative Analysis of the Trends Exhibited by the Three Interdisciplinary Biological Sciences: Biophysics, Bioinformatics, and Systems Biology.

    PubMed

    Kang, Jonghoon; Park, Seyeon; Venkat, Aarya; Gopinath, Adarsh

    2015-12-01

    New interdisciplinary biological sciences like bioinformatics, biophysics, and systems biology have become increasingly relevant in modern science. Many papers have suggested the importance of adding these subjects, particularly bioinformatics, to an undergraduate curriculum; however, most of their assertions have relied on qualitative arguments. In this paper, we will show our metadata analysis of a scientific literature database (PubMed) that quantitatively describes the importance of the subjects of bioinformatics, systems biology, and biophysics as compared with a well-established interdisciplinary subject, biochemistry. Specifically, we found that the development of each subject assessed by its publication volume was well described by a set of simple nonlinear equations, allowing us to characterize them quantitatively. Bioinformatics, which had the highest ratio of publications produced, was predicted to grow between 77% and 93% by 2025 according to the model. Due to the large number of publications produced in bioinformatics, which nearly matches the number published in biochemistry, it can be inferred that bioinformatics is almost equal in significance to biochemistry. Based on our analysis, we suggest that bioinformatics be added to the standard biology undergraduate curriculum. Adding this course to an undergraduate curriculum will better prepare students for future research in biology.
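
The nonlinear publication-growth modeling described above can be illustrated with a logistic (saturating) curve, one common choice of simple nonlinear model. All parameters below are invented for illustration and are not fitted to PubMed data, so the projected percentage differs from the paper's 77-93% figure:

```python
# Hedged sketch: cumulative publication volume modeled as logistic growth,
# then projected forward to estimate percent growth over a decade.
import math

def logistic(t, K, r, t0):
    """Cumulative publications at year t: capacity K, rate r, midpoint t0."""
    return K / (1 + math.exp(-r * (t - t0)))

K, r, t0 = 200_000, 0.25, 2012        # hypothetical parameters
p2015 = logistic(2015, K, r, t0)
p2025 = logistic(2025, K, r, t0)
growth_pct = 100 * (p2025 - p2015) / p2015
print(f"projected growth 2015->2025: {growth_pct:.0f}%")   # 42%
```

With real PubMed counts, the three parameters would be fitted (e.g. by least squares) rather than assumed, which is the step that lets the paper put numeric bounds on future growth.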

  11. BOD: a customizable bioinformatics on demand system accommodating multiple steps and parallel tasks.

    PubMed

    Qiao, Li-An; Zhu, Jing; Liu, Qingyan; Zhu, Tao; Song, Chi; Lin, Wei; Wei, Guozhu; Mu, Lisen; Tao, Jiang; Zhao, Nanming; Yang, Guangwen; Liu, Xiangjun

    2004-01-01

    The integration of bioinformatics resources worldwide is one of the major concerns of the biological community. We herein established the BOD (Bioinformatics on demand) system to use Grid computing technology to set up a virtual workbench via a web-based platform, to assist researchers performing customized comprehensive bioinformatics work. Users will be able to submit entire search queries and computation requests, e.g. from DNA assembly to gene prediction and finally protein folding, from their own office using the BOD end-user web interface. The BOD web portal parses the user's job requests into steps, each of which may contain multiple tasks in parallel. The BOD task scheduler takes an entire task, or splits it into multiple subtasks, and dispatches the task or subtasks proportionally to computation node(s) associated with the BOD portal server. A node may further split and distribute an assigned task to its sub-nodes using a similar strategy. In the end, the BOD portal server receives and collates all results and returns them to the user. BOD uses a pipeline model to describe the user's submitted data and stores the job requests/status/results in a relational database. In addition, an XML criterion is established to capture task computation program details.
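
The proportional dispatch strategy described above, splitting a task into subtasks sized to each node's capacity, can be sketched as follows. The node names and capacities are hypothetical, and the BOD scheduler's actual policy is not reproduced here:

```python
# Sketch of proportional task splitting: `units` work items are divided
# among compute nodes in proportion to their (relative) capacity, with
# integer remainders handed to the highest-capacity nodes first.

def split_proportionally(units, capacities):
    """Return {node: share}; shares sum to `units`."""
    total = sum(capacities.values())
    shares = {n: units * c // total for n, c in capacities.items()}
    leftover = units - sum(shares.values())
    for n in sorted(capacities, key=capacities.get, reverse=True):
        if leftover == 0:
            break
        shares[n] += 1
        leftover -= 1
    return shares

nodes = {"node-a": 8, "node-b": 4, "node-c": 4}   # relative capacities
print(split_proportionally(1000, nodes))
# {'node-a': 500, 'node-b': 250, 'node-c': 250}
```

A node receiving a share could recursively apply the same function to its sub-nodes, matching the hierarchical dispatch the abstract describes.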

  12. A criticality-based framework for task composition in multi-agent bioinformatics integration systems.

    PubMed

    Karasavvas, Konstantinos A; Baldock, Richard; Burger, Albert

    2005-07-15

    During task composition, such as can be found in distributed query processing, workflow systems and AI planning, decisions have to be made by the system and possibly by users with respect to how a given problem should be solved. Although there is often more than one correct way of solving a given problem, these multiple solutions do not necessarily lead to the same result. Some researchers are addressing this problem by providing data provenance information. Others use expert advice encoded in a supporting knowledge-base. In this paper, we propose an approach that assesses the importance of such decisions with respect to the overall result. We present a way of measuring decision criticality and describe its potential use. A multi-agent bioinformatics integration system is used as the basis of a framework that facilitates such functionality. We propose an agent architecture, and a concrete bioinformatics example (prototype) is used to show how certain decisions may not be critical in the context of more complex tasks.
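
The paper's own criticality measure is not reproduced in the abstract; as a hedged illustration of the idea only, a decision can be scored as critical to the extent that its alternative choices lead to divergent final results:

```python
# Toy criticality score: run (or simulate) each alternative choice for a
# decision, collect the final results, and measure disagreement. 0 means
# all alternatives agree (the decision is not critical); values near 1
# mean the choice strongly influences the overall result.

def criticality(outcomes):
    """Fraction of alternative outcomes that differ from the modal one."""
    if not outcomes:
        return 0.0
    modal = max(set(outcomes), key=outcomes.count)
    return 1 - outcomes.count(modal) / len(outcomes)

# Three alternative annotation tools all report the same top hit:
print(criticality(["geneX", "geneX", "geneX"]))   # 0.0 -> not critical
# The alternatives disagree, so this decision deserves user attention:
print(criticality(["geneX", "geneY", "geneZ"]))
```

Under such a measure, a system could silently auto-resolve low-criticality decisions and escalate only the high-criticality ones to the user, which is the behaviour the abstract motivates.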

  13. Edge Bioinformatics

    SciTech Connect

    Lo, Chien-Chi

    2015-08-03

    Edge Bioinformatics is a developmental bioinformatics and data management platform which seeks to supply laboratories with bioinformatics pipelines for analyzing data associated with common sample use cases. Edge Bioinformatics enables sequencing as a solution in forward-deployed situations where human resources, space, bandwidth, and time are limited. The Edge bioinformatics pipeline was designed around the following use cases and is specific to Illumina sequencing reads: 1. Assay performance adjudication (PCR): analysis of an existing PCR assay in a genomic context, and automated design of a new assay to resolve conflicting results; 2. Clinical presentation with extreme symptoms: characterization of a known pathogen or co-infection with a. a novel emerging disease outbreak or b. environmental surveillance.

  14. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration

    PubMed Central

    Lobo, Daniel; Feldman, Erica B.; Shah, Michelle; Malone, Taylor J.

    2014-01-01

    Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental effort, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large-scale morphogenesis that match published data in the limb regeneration field. Major barriers preventing an algorithmic approach are the lack of formal descriptions for experimental regenerative information and of a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user-friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration. PMID:25729585

  15. A bioinformatics expert system linking functional data to anatomical outcomes in limb regeneration.

    PubMed

    Lobo, Daniel; Feldman, Erica B; Shah, Michelle; Malone, Taylor J; Levin, Michael

    2014-04-01

    Amphibians and molting arthropods have the remarkable capacity to regenerate amputated limbs, as described by an extensive literature of experimental cuts, amputations, grafts, and molecular techniques. Despite a rich history of experimental efforts, no comprehensive mechanistic model exists that can account for the pattern regulation observed in these experiments. While bioinformatics algorithms have revolutionized the study of signaling pathways, no such tools have heretofore been available to assist scientists in formulating testable models of large-scale morphogenesis that match published data in the limb regeneration field. Major barriers preventing an algorithmic approach are the lack of formal descriptions for experimental regenerative information and a repository to centralize storage and mining of functional data on limb regeneration. Establishing a new bioinformatics of shape would significantly accelerate the discovery of key insights into the mechanisms that implement complex regeneration. Here, we describe a novel mathematical ontology for limb regeneration to unambiguously encode phenotype, manipulation, and experiment data. Based on this formalism, we present the first centralized formal database of published limb regeneration experiments together with a user-friendly expert system tool to facilitate its access and mining. These resources are freely available for the community and will assist both human biologists and artificial intelligence systems to discover testable, mechanistic models of limb regeneration.

  16. Ergatis: a web interface and scalable software system for bioinformatics workflows

    PubMed Central

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634

  17. Specifying, Installing and Maintaining Built-Up and Modified Bitumen Roofing Systems.

    ERIC Educational Resources Information Center

    Hobson, Joseph W.

    2000-01-01

    Examines built-up, modified bitumen, and hybrid combinations of the two roofing systems and offers advice on how to assure high-quality performance and durability when using them. Included is a glossary of commercial roofing terms and asphalt roofing resources to aid in making decisions on roofing systems and product selection. (GR)

  19. Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology.

    PubMed

    Fu, Wenjiang J; Stromberg, Arnold J; Viele, Kert; Carroll, Raymond J; Wu, Guoyao

    2010-07-01

    Over the past 2 decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (Type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine growth retardation). (c) 2010 Elsevier Inc. All rights reserved.
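
Two of the terms the abstract lists, power analysis and sample size calculation, can be made concrete with the standard normal-approximation formula for a two-sample comparison, n = 2(z_alpha/2 + z_beta)^2 / d^2 per group, where d is the standardized effect size. This is a textbook formula, not code from the article, and the exact t-based answer is slightly larger:

```python
# Per-group sample size for a two-sample mean comparison, using the
# normal-approximation formula with a two-sided Type I error `alpha`
# and target power (1 - Type II error).
import math
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)            # power quantile
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# Medium effect size (d = 0.5), 5% two-sided Type I error, 80% power:
print(sample_size_per_group(0.5))   # 63 subjects or animals per group
```

Running the calculation before an experiment, rather than after, is precisely the planning step the authors recommend to nutritionists.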

  20. Statistics and bioinformatics in nutritional sciences: analysis of complex data in the era of systems biology⋆

    PubMed Central

    Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao

    2009-01-01

    Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation). PMID:20233650

  1. BioWMS: a web-based Workflow Management System for bioinformatics.

    PubMed

    Bartocci, Ezio; Corradini, Flavio; Merelli, Emanuela; Scortichini, Lorenzo

    2007-03-08

    An in-silico experiment can be naturally specified as a workflow of activities implementing, in a standardized environment, the process of data and control analysis. A workflow has the advantage of being reproducible, traceable and compositional by reusing other workflows. To support the daily work of a bioscientist, several Workflow Management Systems (WMSs) have been proposed in bioinformatics. Generally, these systems centralize workflow enactment and do not exploit standard process definition languages to describe workflows in a reusable way. While almost all WMSs require heavy stand-alone applications to specify new workflows, only a few provide a web-based process definition tool. We have developed BioWMS, a Workflow Management System that supports, through a web-based interface, the definition, execution and results management of an in-silico experiment. BioWMS has been implemented over an agent-based middleware. It dynamically generates, from a user workflow specification, a domain-specific, agent-based workflow engine. Our approach exploits the proactiveness and mobility of agent-based technology to embed the application domain features inside agent behaviour. Agents are workflow executors, and the resulting workflow engine is a multiagent system: a distributed, concurrent system, typically open, flexible, and adaptive. A demo is available at http://litbio.unicam.it:8080/biowms. BioWMS, supported by the Hermes mobile computing middleware, guarantees the flexibility, scalability and fault tolerance required for workflow enactment over distributed and heterogeneous environments. BioWMS is funded by the FIRB project LITBIO (Laboratory for Interdisciplinary Technologies in Bioinformatics).

  2. Green genes: bioinformatics and systems-biology innovations drive algal biotechnology.

    PubMed

    Reijnders, Maarten J M F; van Heck, Ruben G A; Lam, Carolyn M C; Scaife, Mark A; dos Santos, Vitor A P Martins; Smith, Alison G; Schaap, Peter J

    2014-12-01

    Many species of microalgae produce hydrocarbons, polysaccharides, and other valuable products in significant amounts. However, large-scale production of algal products is not yet competitive against non-renewable alternatives from fossil fuel. Metabolic engineering approaches will help to improve productivity, but the exact metabolic pathways and the identities of the majority of the genes involved remain unknown. Recent advances in bioinformatics and systems-biology modeling coupled with increasing numbers of algal genome-sequencing projects are providing the means to address this. A multidisciplinary integration of methods will provide synergy for a systems-level understanding of microalgae, and thereby accelerate the improvement of industrially valuable strains. In this review we highlight recent advances and challenges to microalgal research and discuss future potential.

  3. Built-in active sensing diagnostic system for civil infrastructure systems

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Chang, Fu-Kuo

    2001-07-01

    A reliable, robust monitoring system can improve the maintenance of and provide safety protection for civil structures and therefore prolong their service lives. A built-in, active sensing diagnostic technique for civil structures has been under investigation. In this technique, piezoelectric materials are used as sensors/actuators to receive and generate signals. The transducers are embedded in reinforced concrete (RC) beams and are designed to detect damage, particularly debonding damage between the reinforcing bars and concrete. This paper presents preliminary results from a feasibility study of the technology. Laboratory experiments performed on RC beams, with piezo-electric sensors and actuators mounted on reinforced steel bars, have clearly demonstrated that the proposed technique could detect debonding damage. Analytical work, using a special purpose finite-element software, PZFlex, was also conducted to interpret the relationship between the measured data and actual debonding damage. Effectiveness of the proposed technique for detecting debonding damage in civil structures has been demonstrated.

  4. Built But Not Used, Needed But Not Built: Ground System Guidance Based On Cassini-Huygens Experience

    NASA Technical Reports Server (NTRS)

    Larsen, Barbara S.

    2006-01-01

    These reflections share insight gleaned from Cassini-Huygens experience in supporting uplink operations tasks with software. Of particular interest are developed applications that were not widely adopted and tasks for which the appropriate application was not planned. After several years of operations, tasks are better understood providing a clearer picture of the mapping of requirements to applications. The impact on system design of the changing user profile due to distributed operations and greater participation of scientists in operations is also explored. Suggestions are made for improving the architecture, requirements, and design of future systems for uplink operations.

  6. A systems approach to resilience in the built environment: the case of Cuba.

    PubMed

    Lizarralde, Gonzalo; Valladares, Arturo; Olivera, Andres; Bornstein, Lisa; Gould, Kevin; Barenstein, Jennifer Duyne

    2015-01-01

    Through its capacity to evoke systemic adaptation before and after disasters, resilience has become a seductive theory in disaster management. Several studies have linked the concept with systems theory; however, they have been mostly based on theoretical models with limited empirical support. The study of the Cuban model of resilience sheds light on the variables that create systemic resilience in the built environment and its relations with the social and natural environments. Cuba is vulnerable to many types of hazard, yet the country's disaster management benefits from institutional, health and education systems that develop social capital, knowledge and other assets that support the construction industry and housing development, systematic urban and regional planning, effective alerts, and evacuation plans. The Cuban political context is specific, but the study can nonetheless contribute to systemic improvements to the resilience of built environments in other contexts. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.

  7. Towards a career in bioinformatics.

    PubMed

    Ranganathan, Shoba

    2009-12-03

The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, founded in 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 9-11, 2009 at Biopolis, Singapore. InCoB has actively engaged researchers from the life sciences, systems biology, and clinical communities to facilitate greater synergy between these groups. To encourage bioinformatics students and new researchers, tutorials and a student symposium, the Singapore Symposium on Computational Biology (SYMBIO), were organized, along with the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and the Clinical Bioinformatics (CBAS) Symposium. However, to many students and young researchers, pursuing a career in a multi-disciplinary area such as bioinformatics poses a Himalayan challenge. A collection of tips is presented here to provide signposts on the road to a career in bioinformatics. An overview of the application of bioinformatics to traditional and emerging areas, published in this supplement, is also presented to suggest possible future avenues of bioinformatics investigation. A case study on the application of e-learning tools in an undergraduate bioinformatics curriculum provides information on how to impart targeted education and sustain bioinformatics in the Asia-Pacific region. The next InCoB is scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010.

  8. Towards a career in bioinformatics

    PubMed Central

    2009-01-01

The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, founded in 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 9-11, 2009 at Biopolis, Singapore. InCoB has actively engaged researchers from the life sciences, systems biology, and clinical communities to facilitate greater synergy between these groups. To encourage bioinformatics students and new researchers, tutorials and a student symposium, the Singapore Symposium on Computational Biology (SYMBIO), were organized, along with the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and the Clinical Bioinformatics (CBAS) Symposium. However, to many students and young researchers, pursuing a career in a multi-disciplinary area such as bioinformatics poses a Himalayan challenge. A collection of tips is presented here to provide signposts on the road to a career in bioinformatics. An overview of the application of bioinformatics to traditional and emerging areas, published in this supplement, is also presented to suggest possible future avenues of bioinformatics investigation. A case study on the application of e-learning tools in an undergraduate bioinformatics curriculum provides information on how to impart targeted education and sustain bioinformatics in the Asia-Pacific region. The next InCoB is scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. PMID:19958508

  9. Development of Built-in System for Constructing a Students Experimental Equipment of Control Engineering

    NASA Astrophysics Data System (ADS)

    Morisaki, Tetsuya; Oda, Kazuhiro; Morino, Kazuhiro; Jiang, Zhongwei

This paper presents a built-in system used to construct an experimental tool for learning control engineering in a student experiment. The proposed system is easy to install in a commercial product that has actuators. We also developed a software application to control the product modified with the proposed system. Its functions include editing, compiling, and executing a control algorithm written in the C language, displaying control process parameters in real time as a graph, and simulating the behavior of the experimental equipment. Using the system, we not only made an experimental tool from a radio-controlled helicopter but also built curriculum guidelines for the student experiment. These were carried out at Tokuyama College of Technology, and the result was evaluated from a questionnaire completed by students.

  10. Analyses of Brucella Pathogenesis, Host Immunity, and Vaccine Targets using Systems Biology and Bioinformatics

    PubMed Central

    He, Yongqun

    2011-01-01

Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using cutting-edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild-type Brucella strains and mutants under different laboratory conditions. High-throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high-throughput serum antibody screening technology. The Vaxign reverse vaccinology tool has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of the community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning. PMID:22919594

  11. Analyses of Brucella pathogenesis, host immunity, and vaccine targets using systems biology and bioinformatics.

    PubMed

    He, Yongqun

    2012-01-01

Brucella is a Gram-negative, facultative intracellular bacterium that causes zoonotic brucellosis in humans and various animals. Out of 10 classified Brucella species, B. melitensis, B. abortus, B. suis, and B. canis are pathogenic to humans. In the past decade, the mechanisms of Brucella pathogenesis and host immunity have been extensively investigated using cutting-edge systems biology and bioinformatics approaches. This article provides a comprehensive review of the applications of Omics (including genomics, transcriptomics, and proteomics) and bioinformatics technologies for the analysis of Brucella pathogenesis, host immune responses, and vaccine targets. Based on more than 30 sequenced Brucella genomes, comparative genomics is able to identify gene variations among Brucella strains that help to explain host specificity and virulence differences among Brucella species. Diverse transcriptomics and proteomics gene expression studies have been conducted to analyze gene expression profiles of wild-type Brucella strains and mutants under different laboratory conditions. High-throughput Omics analyses of host responses to infections with virulent or attenuated Brucella strains have focused on responses by mouse and cattle macrophages, bovine trophoblastic cells, mouse and boar splenocytes, and ram buffy coat. Differential serum responses in humans and rams to Brucella infections have been analyzed using high-throughput serum antibody screening technology. The Vaxign reverse vaccinology tool has been used to predict many Brucella vaccine targets. More than 180 Brucella virulence factors and their gene interaction networks have been identified using advanced literature mining methods. The recent development of the community-based Vaccine Ontology and Brucellosis Ontology provides an efficient way for Brucella data integration, exchange, and computer-assisted automated reasoning.

  12. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity

    PubMed Central

    van den Berg, Magdalena M.H.E.; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N.M.; van Mechelen, Willem; van den Berg, Agnes E.

    2015-01-01

This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following and preceding acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications of greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, and there were also no indications of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in the restorative effects of viewing green space. PMID:26694426

  13. Autonomic Nervous System Responses to Viewing Green and Built Settings: Differentiating Between Sympathetic and Parasympathetic Activity.

    PubMed

    van den Berg, Magdalena M H E; Maas, Jolanda; Muller, Rianne; Braun, Anoek; Kaandorp, Wendy; van Lien, René; van Poppel, Mireille N M; van Mechelen, Willem; van den Berg, Agnes E

    2015-12-14

This laboratory study explored buffering and recovery effects of viewing urban green and built spaces on autonomic nervous system activity. Forty-six students viewed photos of green and built spaces immediately following and preceding acute stress induction. Simultaneously recorded electrocardiogram and impedance cardiogram signals were used to derive respiratory sinus arrhythmia (RSA) and pre-ejection period (PEP), indicators of parasympathetic and sympathetic activity, respectively. The findings provide support for greater recovery after viewing green scenes, as marked by a stronger increase in RSA as a marker of parasympathetic activity. There were no indications of greater recovery after viewing green scenes in PEP as a marker of sympathetic activity, and there were also no indications of greater buffering effects of green space in either RSA or PEP. Overall, our findings are consistent with a predominant role of the parasympathetic nervous system in the restorative effects of viewing green space.

  14. Tank Monitoring and Document control System (TMACS) As Built Software Design Document

    SciTech Connect

    GLASSCOCK, J.A.

    2000-01-27

This document describes the software design for the Tank Monitor and Control System (TMACS). This document captures the existing as-built design of TMACS as of November 1999. It will be used as a reference document by the system maintainers who will be maintaining and modifying the TMACS functions as necessary. The heart of the TMACS system is the "point-processing" functionality, where a sample value is received from the field sensors and the value is analyzed, logged, or alarmed as required. This Software Design Document focuses on the point-processing functions.
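The point-processing cycle described above (a sample value is received, then analyzed, logged, or alarmed) follows a generic pattern that can be sketched as below; the configuration fields, threshold names, and point names are illustrative assumptions, not taken from the TMACS design document:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class PointConfig:
    # Illustrative per-point configuration; the actual TMACS schema is not shown here.
    name: str
    low_alarm: float
    high_alarm: float

def process_point(cfg: PointConfig, value: float) -> str:
    """Analyze one sampled value, log it, and return the alarm state."""
    if value < cfg.low_alarm:
        state = "LOW_ALARM"
    elif value > cfg.high_alarm:
        state = "HIGH_ALARM"
    else:
        state = "NORMAL"
    logging.info("%s=%s state=%s", cfg.name, value, state)
    return state
```

A monitoring loop would call `process_point` once per incoming sensor sample, dispatching alarms on any non-normal state.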

  15. A fast job scheduling system for a wide range of bioinformatic applications.

    PubMed

    Boccia, Angelo; Busiello, Gianluca; Milanesi, Luciano; Paolella, Giovanni

    2007-06-01

Bioinformatic tools are often used by researchers through interactive Web interfaces, resulting in a strong demand for computational resources. The tools are of different kinds and range from simple, quick tasks to complex analyses requiring minutes to hours of processing time, and often longer than that. Batteries of computational nodes, such as those found in parallel clusters, provide a platform of choice for this application, especially when a relatively large number of concurrent requests is expected. Here, we describe a scheduling architecture operating at the application level, able to distribute jobs over a large number of hierarchically organized nodes. While coexisting peacefully with low-level scheduling software, the system takes advantage of tools commonly used in Web applications, such as SQL servers, to produce low latency and performance that compares well with, and often surpasses, that of more traditional, dedicated schedulers. The system provides the basic functionality necessary for node selection, task execution, and service management and monitoring, and may combine loosely linked computational resources, such as those located in geographically distinct sites.
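The idea of using an SQL server as the core of a low-latency job queue can be sketched minimally as follows. SQLite is used here only to keep the example self-contained; the published system presumably uses a networked SQL server and a richer schema, and all table and function names are invented:

```python
import sqlite3

def make_queue(conn):
    # One row per submitted job; state transitions queued -> running.
    conn.execute("CREATE TABLE IF NOT EXISTS jobs ("
                 "id INTEGER PRIMARY KEY, cmd TEXT, state TEXT DEFAULT 'queued')")

def submit(conn, cmd):
    conn.execute("INSERT INTO jobs (cmd) VALUES (?)", (cmd,))

def claim_next(conn, worker):
    """Atomically claim the oldest queued job for `worker`, or None if empty."""
    with conn:  # wrap select + update in one transaction
        row = conn.execute(
            "SELECT id, cmd FROM jobs WHERE state='queued' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE jobs SET state=? WHERE id=?",
                     ("running:" + worker, row[0]))
        return row
```

Workers poll `claim_next` and execute the returned command; because claiming is transactional, two workers cannot take the same job.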

  16. KDE Bioscience: platform for bioinformatics analysis workflows.

    PubMed

    Lu, Qiang; Hao, Pei; Curcin, Vasa; He, Weizhong; Li, Yuan-Yuan; Luo, Qing-Ming; Guo, Yi-Ke; Li, Yi-Xue

    2006-08-01

Bioinformatics is a dynamic research area in which a large number of algorithms and programs have been developed rapidly and independently, without much consideration so far of the need for standardization. The lack of such common standards, combined with unfriendly interfaces, makes it difficult for biologists to learn how to use these tools and to translate data formats from one to another. Consequently, the construction of an integrative bioinformatics platform to facilitate biologists' research is an urgent and challenging task. KDE Bioscience is a Java-based software platform that collects a variety of bioinformatics tools and provides a workflow mechanism to integrate them. Nucleotide and protein sequences from local flat files, web sites, and relational databases can be entered, annotated, and aligned. Several in-house or third-party viewers are built in to provide visualization of annotations or alignments. KDE Bioscience can also be deployed in client-server mode, where simultaneous execution of the same workflow is supported for multiple users. Moreover, workflows can be published as web pages that can be executed from a web browser. The power of KDE Bioscience comes from the integrated algorithms and data sources. With its generic workflow mechanism, other novel calculations and simulations can be integrated to augment the current sequence analysis functions. Because of this flexible and extensible architecture, KDE Bioscience makes an ideal integrated informatics environment for future bioinformatics or systems biology research.

  17. Agile parallel bioinformatics workflow management using Pwrake.

    PubMed

    Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro

    2011-09-08

    In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error.Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. 
Furthermore, readability and maintainability of rakefiles
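The dependency-driven workflow style that Rake and Pwrake provide can be mimicked with a toy task runner. Plain Python is used here rather than Rake's Ruby DSL, and all task names are invented for illustration, not taken from the paper:

```python
# Toy dependency-driven task runner in the spirit of Rake/Pwrake.
def run(tasks, target, done=None):
    """Run `target` after its dependencies, executing each task exactly once."""
    done = set() if done is None else done
    if target in done:
        return
    deps, action = tasks[target]
    for dep in deps:
        run(tasks, dep, done)
    action()
    done.add(target)

trace = []
tasks = {
    # name: (dependencies, action) -- parameters live inside the actions,
    # so they can be tuned without touching the graph definition, mirroring
    # the separation of workflow-definition and parameter-adjustment phases.
    "align":  ((), lambda: trace.append("align")),
    "call":   (("align",), lambda: trace.append("call")),
    "report": (("call", "align"), lambda: trace.append("report")),
}
run(tasks, "report")  # executes align, call, report in dependency order
```

Pwrake extends this idea by dispatching independent tasks to remote nodes in parallel instead of running them sequentially.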

  18. Agile parallel bioinformatics workflow management using Pwrake

    PubMed Central

    2011-01-01

    Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. 
Furthermore, readability

  19. The study of calibration and epipolar geometry for the stereo vision system built by fisheye lenses

    NASA Astrophysics Data System (ADS)

    Zhang, Baofeng; Lu, Chunfang; Röning, Juha; Feng, Weijia

    2015-01-01

A fish-eye lens is a kind of short-focal-length (f = 6~16 mm) camera lens. Its field of view (FOV) approaches or even exceeds 180×180 degrees. Many studies show that a multiple-view geometry system built with fish-eye lenses obtains a larger stereo field than a traditional stereo vision system based on a pair of perspective projection images. Since a fish-eye camera usually has a wider-than-hemispherical FOV, most image processing approaches based on the pinhole camera model for conventional stereo vision systems are not suitable for this category of stereo vision built with fish-eye lenses. This paper focuses on the calibration and epipolar rectification method for a novel machine vision system composed of four fish-eye lenses, called the Special Stereo Vision System (SSVS). The characteristic of SSVS is that it can produce 3D coordinate information for the whole global observation space and simultaneously acquire a blind-area-free 360°×360° panoramic image using a single piece of vision equipment with one static shot. Parameter calibration and epipolar rectification are the basis for SSVS to realize 3D reconstruction and panoramic image generation.
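One reason pinhole-model processing fails for such lenses can be seen in the common equidistant fisheye model (an assumption for illustration; the paper does not state which projection model SSVS uses), where the image radius grows linearly with the incidence angle, r = f·θ, so directions at and beyond 90° still project to a finite radius:

```python
import math

def pixel_to_ray(u, v, cx, cy, f):
    """Back-project a pixel to a unit viewing ray under the r = f * theta model."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    theta = r / f                 # angle from the optical axis
    phi = math.atan2(dy, dx)      # azimuth around the axis
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

# The principal point maps to the optical axis itself.
ray = pixel_to_ray(320, 240, 320, 240, 150)
```

A pinhole model (r = f·tan θ) would instead send θ = 90° to infinite radius, which is why fisheye-specific calibration and rectification are required.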

  20. Factory Built-in Type Simplified OCT System for Industrial Application

    NASA Astrophysics Data System (ADS)

    Shiina, Tatsuo; Miyazaki, Satoshi; Honda, Toshio

A factory built-in type simplified optical coherence tomography (OCT) system was developed for industrial use. The system was designed to inspect laser-welded resin. As a first approach, the current simplified OCT system for plant measurement was applied to validate an industrial sample: plastic resin. The industrial-use OCT was then designed in response to the results. The measurement speed and range of the developed OCT system were 50 scan/s and 5 mm, respectively. The low coherence length of 18.9 μm could clearly distinguish the gap between two laser-welded resins. The system is compact and low in price, and has the flexibility of epi-optics.

  1. Expert systems built by the Expert: An evaluation of OPS5

    NASA Technical Reports Server (NTRS)

    Jackson, Robert

    1987-01-01

    Two expert systems were written in OPS5 by the expert, a Ph.D. astronomer with no prior experience in artificial intelligence or expert systems, without the use of a knowledge engineer. The first system was built from scratch and uses 146 rules to check for duplication of scientific information within a pool of prospective observations. The second system was grafted onto another expert system and uses 149 additional rules to estimate the spacecraft and ground resources consumed by a set of prospective observations. The small vocabulary, the IF this occurs THEN do that logical structure of OPS5, and the ability to follow program execution allowed the expert to design and implement these systems with only the data structures and rules of another OPS5 system as an example. The modularity of the rules in OPS5 allowed the second system to modify the rulebase of the system onto which it was grafted without changing the code or the operation of that system. These experiences show that experts are able to develop their own expert systems due to the ease of programming and code reusability in OPS5.
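The "IF this occurs THEN do that" structure described above can be approximated with a minimal forward-chaining loop. This is an illustration only: OPS5 itself uses the Rete match algorithm over attribute-value working-memory elements, and the rule and fact names below are invented:

```python
def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions match, until no rule adds facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            # Fire when all condition facts hold and the conclusion is new.
            if condition <= facts and not conclusion <= facts:
                facts |= conclusion
                changed = True
    return facts

# Invented rules loosely modeled on the duplication-checking use case.
rules = [
    ({"duplicate_target"}, {"flag_observation"}),
    ({"flag_observation", "high_priority"}, {"notify_scheduler"}),
]
result = forward_chain({"duplicate_target", "high_priority"}, rules)
```

The modularity the author exploited comes from exactly this property: adding a rule to the list changes behavior without editing any existing rule.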

  2. Bioinformatics Analysis of Protein Phosphorylation in Plant Systems Biology Using P3DB.

    PubMed

    Yao, Qiuming; Xu, Dong

    2017-01-01

    Protein phosphorylation is one of the most pervasive protein post-translational modification events in plant cells. It is involved in many plant biological processes, such as plant growth, organ development, and plant immunology, by regulating or switching signaling and metabolic pathways. High-throughput experimental methods like mass spectrometry can easily characterize hundreds to thousands of phosphorylation events in a single experiment. With the increasing volume of the data sets, Plant Protein Phosphorylation DataBase (P3DB, http://p3db.org ) provides a comprehensive, systematic, and interactive online platform to deposit, query, analyze, and visualize these phosphorylation events in many plant species. It stores the protein phosphorylation sites in the context of identified mass spectra, phosphopeptides, and phosphoproteins contributed from various plant proteome studies. In addition, P3DB associates these plant phosphorylation sites to protein physicochemical information in the protein charts and tertiary structures, while various protein annotations from hierarchical kinase phosphatase families, protein domains, and gene ontology are also added into the database. P3DB not only provides rich information, but also interconnects and provides visualization of the data in networks, in systems biology context. Currently, P3DB includes the KiC (Kinase Client) assay network, the protein-protein interaction network, the kinase-substrate network, the phosphatase-substrate network, and the protein domain co-occurrence network. All of these are available to query for and visualize existing phosphorylation events. Although P3DB only hosts experimentally identified phosphorylation data, it provides a plant phosphorylation prediction model for any unknown queries on the fly. P3DB is an entry point to the plant phosphorylation community to deposit and visualize any customized data sets within this systems biology framework. Nowadays, P3DB has become one of the major

  3. Impact of an in-built monitoring system on family planning performance in rural Bangladesh.

    PubMed

    Kabir, Humayun; Gazi, Rukhsana; Ashraf, Ali; Saha, Nirod Chandra

    2007-06-07

During 1982-1992, the Maternal and Child Health Family Planning (MCH-FP) Extension Project (Rural) of the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B), in partnership with the Ministry of Health and Family Welfare (MoHFW) of the Government of Bangladesh (GoB), implemented a series of interventions in Sirajganj Sadar sub-district of Sirajganj district. These interventions were aimed at improving planning mechanisms and reviewing problem-solving processes to build an effective monitoring system for the interventions at the local level of the overall MoHFW system. The interventions included the development and testing of innovative solutions in service delivery, provision of door-step injectables, and strengthening of the management information system (MIS). The impact of an in-built monitoring system on overall performance was assessed during the period from June 1995 to December 1996, after the withdrawal of the interventions in 1992. The results of the assessment showed that Family Welfare Assistants (FWAs) increased household visits within the last two months, and there was a higher use of service-delivery points even after the withdrawal of the interventions. The results of the cluster surveys, conducted in 1996, showed that the selected indicators of health and family-planning services were higher than those reported by the Bangladesh Demographic and Health Survey (BDHS) 1996-1997. During June 1995-December 1996, the contraceptive prevalence rate (CPR) increased by 13 percentage points (i.e. from 40% to 53%). Compared to the national CPR (49%), this increase was statistically significant (p < 0.05). In-built monitoring systems, including an effective MIS, accompanied by rapid assessments and review of performance by programme managers, have the potential to improve family planning performance in low-performing areas.

  4. Teaching bioinformatics to engineers.

    PubMed

    Mihalas, George I; Tudor, Anca; Paralescu, Sorin; Andor, Minodora; Stoicu-Tivadar, Lacramioara

    2014-01-01

The paper describes our methodology and experience in establishing the content of the bioinformatics course introduced to the school of "Information Systems in Healthcare" (SIIS) at the master's level. The syllabi of both the lectures and the laboratory works are presented and discussed.

  5. The Online Bioinformatics Resources Collection at the University of Pittsburgh Health Sciences Library System—a one-stop gateway to online bioinformatics databases and software tools

    PubMed Central

    Chen, Yi-Bu; Chattopadhyay, Ansuman; Bergen, Phillip; Gadd, Cynthia; Tannery, Nancy

    2007-01-01

    To bridge the gap between the rising information needs of biological and medical researchers and the rapidly growing number of online bioinformatics resources, we have created the Online Bioinformatics Resources Collection (OBRC) at the Health Sciences Library System (HSLS) at the University of Pittsburgh. The OBRC, containing 1542 major online bioinformatics databases and software tools, was constructed using the HSLS content management system built on the Zope® Web application server. To enhance the output of search results, we further implemented the Vivísimo Clustering Engine®, which automatically organizes the search results into categories created dynamically based on the textual information of the retrieved records. As the largest online collection of its kind and the only one with advanced search results clustering, OBRC is aimed at becoming a one-stop guided information gateway to the major bioinformatics databases and software tools on the Web. OBRC is available at the University of Pittsburgh's HSLS Web site (). PMID:17108360

  6. The Gaggle: An open-source software system for integrating bioinformatics software and data sources

    PubMed Central

    Shannon, Paul T; Reiss, David J; Bonneau, Richard; Baliga, Nitin S

    2006-01-01

Background Systems biologists work with many kinds of data, from many different sources, using a variety of software tools. Each of these tools typically excels at one type of analysis, such as of microarrays, of metabolic networks, and of predicted protein structure. A crucial challenge is to combine the capabilities of these (and other forthcoming) data resources and tools to create a data exploration and analysis environment that does justice to the variety and complexity of systems biology data sets. A solution to this problem should recognize that data types, formats, and software in this high-throughput age of biology are constantly changing. Results In this paper we describe the Gaggle, a simple, open-source Java software environment that helps to solve the problem of software and database integration. Guided by the classic software engineering strategy of separation of concerns and a policy of semantic flexibility, it integrates existing popular programs and web resources into a user-friendly, easily extended environment. We demonstrate that four simple data types (names, matrices, networks, and associative arrays) are sufficient to bring together diverse databases and software. We highlight some capabilities of the Gaggle with an exploration of Helicobacter pylori pathogenesis genes, in which we identify a putative ricin-like protein, a discovery made possible by simultaneous data exploration using a wide range of publicly available data and a variety of popular bioinformatics software tools. Conclusion We have integrated diverse databases (for example, KEGG, BioCyc, String) and software (Cytoscape, DataMatrixViewer, R statistical environment, and TIGR Microarray Expression Viewer). 
Through this loose coupling of diverse software and databases the Gaggle enables simultaneous exploration of experimental data (mRNA and protein abundance, protein-protein and protein-DNA interactions), functional associations (operon, chromosomal proximity, phylogenetic pattern
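The claim that four simple data types suffice for integration can be sketched as a minimal broadcast hub. The class, method, and gene names below are invented for illustration and do not match the real Gaggle API:

```python
# Minimal "broadcast hub" in the spirit of the Gaggle: tools register
# handlers and exchange only a few simple, shared data types.
class Hub:
    SIMPLE_TYPES = {"names": list, "matrix": list, "network": dict, "assoc": dict}

    def __init__(self):
        self.listeners = []

    def register(self, handler):
        self.listeners.append(handler)

    def broadcast(self, kind, payload):
        expected = self.SIMPLE_TYPES.get(kind)
        if expected is None or not isinstance(payload, expected):
            raise TypeError(f"unsupported broadcast: {kind}")
        for handler in self.listeners:
            handler(kind, payload)

hub = Hub()
received = []
hub.register(lambda kind, data: received.append((kind, data)))
hub.broadcast("names", ["geneA", "geneB"])
```

Restricting the exchanged payloads to a tiny shared vocabulary is what lets independently developed tools stay loosely coupled: each tool only needs to understand the four types, never another tool's internals.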

  7. Initial clinical testing of a multi-spectral imaging system built on a smartphone platform

    NASA Astrophysics Data System (ADS)

    Mink, Jonah W.; Wexler, Shraga; Bolton, Frank J.; Hummel, Charles; Kahn, Bruce S.; Levitz, David

    2016-03-01

Multi-spectral imaging systems are often expensive and bulky. An innovative multi-spectral imaging system was fitted onto a mobile colposcope, an imaging system built around a smartphone, in order to image the uterine cervix from outside the body. The multi-spectral mobile colposcope (MSMC) acquires images at different wavelengths. This paper presents the clinical testing of MSMC imaging (technical validation of the MSMC system is described elsewhere [1]). Patients who were referred to colposcopy following an abnormal screening test (Pap or HPV DNA test) according to the standard of care were enrolled. Multi-spectral image sets of the cervix were acquired, consisting of images from the various wavelengths. Image acquisition took 1-2 sec. Areas suspected for dysplasia under white-light imaging were biopsied, according to the standard of care. Biopsied sites were recorded on a clock-face map of the cervix. Following the procedure, MSMC data was processed from the biopsied sites. To date, the initial histopathological results are still outstanding. Qualitatively, structures in the cervical images were sharper at lower wavelengths than at higher wavelengths. Patients tolerated imaging well. The results suggest MSMC holds promise for cervical imaging.

  8. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by the technique of tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea and space systems.
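A back-of-envelope calculation shows what tiling implies for panel counts. The panel resolution below (1600 x 1200, a common commercial UXGA panel of that era) is an illustrative assumption, not a figure from the paper.

```python
import math

# How many commercial panels are needed to tile a display of a target
# resolution, given the per-panel resolution (assumed values).
def panels_needed(target_mpx, panel_w=1600, panel_h=1200):
    panel_mpx = panel_w * panel_h / 1e6  # megapixels per panel (1.92 Mpx here)
    return math.ceil(target_mpx / panel_mpx)

low = panels_needed(10)    # low end of the 10-100 Mpx range
high = panels_needed(100)  # high end of the range
```

With these assumed panels, the 10-100 Mpx range spans roughly half a dozen to several dozen tiles, which matches the scale of systems the review describes.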

  9. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments

    SciTech Connect

    Hachtel, A. J.; Gillette, M. C.; Clements, E. R.; Zhong, S.; Weeks, M. R.; Bali, S.

    2016-05-15

    A novel home-built system for imaging cold atom samples is presented using a readily available astronomy camera which has the requisite sensitivity but no timing-control. We integrate the camera with LabVIEW achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in terms of system jitter and readout time over previously reported home-built systems. Our system rivals current commercial “black box” systems in performance and user-friendliness.

  10. An Undergraduate-Built Prototype Altitude Determination System (PADS) for High Altitude Research Balloons.

    NASA Astrophysics Data System (ADS)

    Verner, E.; Bruhweiler, F. C.; Abot, J.; Casarotto, V.; Dichoso, J.; Doody, E.; Esteves, F.; Morsch Filho, E.; Gonteski, D.; Lamos, M.; Leo, A.; Mulder, N.; Matubara, F.; Schramm, P.; Silva, R.; Quisberth, J.; Uritsky, G.; Kogut, A.; Lowe, L.; Mirel, P.; Lazear, J.

    2014-12-01

    In this project a multi-disciplinary undergraduate team from CUA, comprising majors in Physics, Mechanical Engineering, Electrical Engineering, and Biology, designs, builds, tests, flies, and analyzes the data from a prototype attitude determination system (PADS). The goal of the experiment is to determine whether an inexpensive attitude determination system could be built for high altitude research balloons using MEMS gyros. PADS is a NASA-funded project, built by students with the cooperation of CUA faculty, Verner, Bruhweiler, and Abot, along with the contributed expertise of researchers and engineers at NASA/GSFC, Kogut, Lowe, Mirel, and Lazear. The project was initiated through a course taught in CUA's School of Engineering, which was followed by a devoted effort by students during the summer of 2014. The experiment uses 18 MEMS gyros, similar to those used in many smartphones, to produce an averaged positional error signal that can be compared with the motion of the fixed optical system as recorded through a string of optical images of stellar fields stored on a hard drive flown with the experiment. The optical system, camera microprocessor, and hard drive are enclosed in a pressure vessel, which maintains approximately atmospheric pressure throughout the balloon flight. The experiment uses multiple microprocessors to control the camera exposures, record gyro data, and provide thermal control. CUA students also participated in NASA-led design reviews. Four students traveled to NASA's Columbia Scientific Balloon Facility in Palestine, Texas to integrate PADS into a large balloon gondola containing other experiments, before it was shipped and then launched in mid-August at Ft. Sumner, New Mexico. The payload is to fly at a float altitude of 40,000-45,000 m, and the flight is to last approximately 15 hours. The payload is to return to earth by parachute and the retrieved data are to be analyzed by CUA undergraduates. 
A description of the instrument is presented
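The statistical motivation for flying 18 gyros rather than one can be illustrated with a quick simulation: averaging N independent sensors reduces the noise standard deviation by roughly 1/sqrt(N). The rate and noise values below are illustrative assumptions, not PADS specifications.

```python
import numpy as np

# Averaging N independent gyro signals: the noise of the mean shrinks by
# about 1/sqrt(N) relative to a single gyro (assumed noise parameters).
rng = np.random.default_rng(0)
n_gyros, n_samples = 18, 100_000
true_rate = 0.5   # deg/s, illustrative
noise_sd = 0.1    # per-gyro noise standard deviation, illustrative

readings = true_rate + noise_sd * rng.standard_normal((n_gyros, n_samples))
averaged = readings.mean(axis=0)   # the 18-gyro averaged signal

single_sd = readings[0].std()      # ~0.1, the single-gyro noise level
avg_sd = averaged.std()            # ~0.1 / sqrt(18), about 4x smaller
```

This is the core trade the experiment tests: whether many cheap smartphone-grade MEMS gyros, averaged, can approach the noise performance of a more expensive unit.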

  11. Measurement of airflow and pressure characteristics of a fan built in a car ventilation system

    NASA Astrophysics Data System (ADS)

    Pokorný, Jan; Poláček, Filip; Fojtlín, Miloš; Fišer, Jan; Jícha, Miroslav

    2016-03-01

    The aim of this study was to identify a set of operating points of a fan built into the ventilation system of our test car. These operating points are given by the fan pressure characteristics and are defined by the pressure drop of the HVAC system (air ducts and vents) and the volumetric flow rate of ventilation air. To cover a wide range of pressure-drop situations, four vent-flap setups were examined: (1) all vents opened, (2) only central vents closed, (3) only central vents opened and (4) all vents closed. To cover different volumetric flows, each case was measured for at least four different fan speeds defined by the fan voltage. It was observed that the pressure difference of the fan is proportional to the fan voltage and strongly depends on the throttling of the air distribution system by the settings of the vent flaps. In the case of our test car we identified correlations between the volumetric flow rate of ventilation air, the fan pressure difference and the fan voltage. These correlations will facilitate and reduce the time costs of subsequent experiments with this test car.
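Building such a correlation amounts to fitting the measured fan pressure difference against fan voltage for each vent setup. The sketch below fits the reported linear relation to synthetic data; the slope, intercept, and noise level are assumed, not values from the study.

```python
import numpy as np

# Least-squares fit of fan pressure difference vs. fan voltage for one vent
# setup (synthetic measurements with assumed coefficients).
rng = np.random.default_rng(1)
voltage = np.array([4.0, 6.0, 8.0, 10.0, 12.0])    # fan voltage, V
true_slope, true_intercept = 25.0, -10.0           # Pa/V and Pa, illustrative
pressure = (true_slope * voltage + true_intercept
            + rng.normal(0, 1.0, voltage.size))    # measured dp with noise, Pa

slope, intercept = np.polyfit(voltage, pressure, 1)  # fitted fan-curve line
```

Repeating the fit for each of the four vent-flap cases yields the family of operating-point correlations the abstract describes.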

  12. Diagnostic biomarkers for renal cell carcinoma: selection using novel bioinformatics systems for microarray data analysis

    PubMed Central

    Osunkoya, Adeboye O; Yin-Goen, Qiqin; Phan, John H; Moffitt, Richard A; Stokes, Todd H; Wang, May D; Young, Andrew N

    2009-01-01

    Summary The differential diagnosis of clear cell, papillary and chromophobe renal cell carcinoma is clinically important, because these tumor subtypes are associated with different pathobiology and clinical behavior. For cases in which histopathology is equivocal, immunohistochemistry and quantitative RT-PCR can assist in the differential diagnosis by measuring expression of subtype-specific biomarkers. Several renal tumor biomarkers have been discovered in expression microarray studies. However, due to heterogeneity of gene and protein expression, additional biomarkers are needed for reliable diagnostic classification. We developed novel bioinformatics systems to identify candidate renal tumor biomarkers from the microarray profiles of 45 clear cell, 16 papillary and 10 chromophobe renal cell carcinomas; the microarray data were derived from two independent published studies. The ArrayWiki biocomputing system merged the microarray datasets into a single file, so gene expression could be analyzed from a larger number of tumors. The caCORRECT system removed non-random sources of error from the microarray data, and the omniBioMarker system analyzed data with several gene-ranking algorithms, in order to identify algorithms effective at recognizing previously described renal tumor biomarkers. We predicted these algorithms would also be effective at identifying unknown biomarkers that could be verified by independent methods. We selected six novel candidate biomarkers from the omniBioMarker analysis, and verified their differential expression in formalin-fixed paraffin-embedded tissues by quantitative RT-PCR and immunohistochemistry. The candidate biomarkers were carbonic anhydrase IX, ceruloplasmin, schwannomin-interacting protein 1, E74-like factor 3, cytochrome c oxidase subunit 5a and acetyl-CoA acetyltransferase 1. Quantitative RT-PCR was performed on 17 clear cell, 13 papillary and 7 chromophobe renal cell carcinomas. Carbonic anhydrase IX and ceruloplasmin were
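One simple member of the family of gene-ranking algorithms a system like omniBioMarker compares is a two-sample t-statistic: genes are ranked by how strongly their expression separates two tumor subtypes. The data below are synthetic and the implanted signal for CA9 (carbonic anhydrase IX, a known clear cell marker) is contrived for illustration.

```python
import numpy as np

# Rank genes by absolute two-sample t-statistic between two subtypes
# (synthetic expression data; sample counts mirror the abstract's 45 vs 16).
rng = np.random.default_rng(2)
genes = ["CA9", "CP", "SCHIP1", "ELF3", "COX5A", "ACAT1"]
clear_cell = rng.normal(0.0, 1.0, (len(genes), 45))  # 45 clear cell profiles
papillary = rng.normal(0.0, 1.0, (len(genes), 16))   # 16 papillary profiles
clear_cell[0] += 3.0  # implant a strong CA9 signal in the clear cell group

def t_stat(a, b):
    # Welch-style t-statistic for two independent samples
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

scores = [abs(t_stat(clear_cell[i], papillary[i])) for i in range(len(genes))]
ranked = [g for _, g in sorted(zip(scores, genes), reverse=True)]
```

In practice a system like omniBioMarker would run several such rankers (fold change, SAM, etc.) and keep the ones that recover known biomarkers near the top of the list.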

  13. Pise: software for building bioinformatics webs.

    PubMed

    Gilbert, Don

    2002-12-01

    Pise is interface construction software for bioinformatics applications that run by command-line operations. It creates common, easy-to-use interfaces to these applications for the Web, or other uses. It is adaptable to new bioinformatics tools, and offers program chaining, Unix system batch and other controls, making it an attractive method for building and using your own bioinformatics web services.
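Pise's core idea, turning a declarative description of a command-line tool into both a web form and an executable command line, can be sketched briefly. The spec format below is a hypothetical simplification, not Pise's actual XML schema, and the blastp flags are only illustrative.

```python
# From one declarative tool spec, generate (a) an HTML form for the web
# interface and (b) the command line to run (hypothetical spec format).

def build_form(spec):
    fields = [f'<input name="{p["name"]}" value="{p.get("default", "")}">'
              for p in spec["params"]]
    return f'<form action="/run/{spec["tool"]}">' + "".join(fields) + "</form>"

def build_command(spec, values):
    args = " ".join(f'{p["flag"]} {values[p["name"]]}' for p in spec["params"])
    return f'{spec["tool"]} {args}'

spec = {"tool": "blastp",
        "params": [{"name": "query", "flag": "-query", "default": "seq.fa"},
                   {"name": "evalue", "flag": "-evalue", "default": "1e-5"}]}

form = build_form(spec)
cmd = build_command(spec, {"query": "seq.fa", "evalue": "1e-5"})
```

Because the form and the command line come from the same spec, adapting the system to a new bioinformatics tool only requires writing a new spec, which is what makes this approach extensible.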

  14. A Scalable and Integrative System for Pathway Bioinformatics and Systems Biology

    PubMed Central

    Compani, Behnam; Su, Trent; Chang, Ivan; Cheng, Jianlin; Shah, Kandarp H.; Whisenant, Thomas; Dou, Yimeng; Bergmann, Adriel; Cheong, Raymond; Wold, Barbara; Bardwell, Lee; Levchenko, Andre; Baldi, Pierre; Mjolsness, Eric

    2011-01-01

    Motivation Progress in systems biology depends on developing scalable informatics tools to predictively model, visualize, and flexibly store information about complex biological systems. Scalability of these tools, as well as their ability to integrate within larger frameworks of evolving tools, is critical to address the multi-scale and size complexity of biological systems. Results Using current software technology, such as self-generation of database and object code from UML schemas, facilitates rapid updating of a scalable expert assistance system for modeling biological pathways. Distribution of key components along with connectivity to external data sources and analysis tools is achieved via a web service interface. PMID:20865537

  15. Treatment of a steel works effluent with a conventional single-sludge system built in cascades

    SciTech Connect

    Zacharias, B.; Kayser, R.

    1996-11-01

    Wastewater from steel-producing plants or from cokery facilities contains many substances, such as phenols or cyanides, which need to be degraded. The requirements on effluent quality have increased in recent years. Besides the elimination of organic pollutants, free cyanide, and heavy metals, low concentrations of inorganic nitrogen in the effluent are required. In addition to pollutant elimination, reliability of performance is an important requirement for a wastewater treatment plant treating steel works effluent. In this study, the authors concentrated on treating the complete wastewater stream of a steel works instead of cleaning single streams, in order to minimize high peak concentrations of inhibitory substances in the influent. Moreover, wastewater from diffuse sources is gathered and led to the complete stream, and these pollutants are treated as well. Treatment was achieved in a single-sludge system built in cascades. Because of the very low nitrogen concentrations in the effluent required by the controlling authorities, a post-denitrification step was mandatory. The only choices were a single-sludge or at least a three-sludge system. Of these, the first was favored.

  16. Bioinformatic Indications That COPI- and Clathrin-Based Transport Systems Are Not Present in Chloroplasts: An Arabidopsis Model

    PubMed Central

    Aronsson, Henrik

    2014-01-01

    Coated vesicle transport occurs in the cytosol of yeast, mammals and plants. It consists of three different transport systems, the COPI, COPII and clathrin coated vesicles (CCV), all of which participate in the transfer of proteins and lipids between different cytosolic compartments. There are also indications that chloroplasts have a vesicle transport system. Several putative chloroplast-localized proteins, including CPSAR1 and CPRabA5e with similarities to cytosolic COPII transport-related proteins, were detected in previous experimental and bioinformatics studies. These indications raised the hypothesis that a COPI- and/or CCV-related system may be present in chloroplasts, in addition to a COPII-related system. To test this hypothesis we bioinformatically searched for chloroplast proteins that may have similar functions to known cytosolic COPI and CCV components in the model plants Arabidopsis thaliana and Oryza sativa (subsp. japonica) (rice). We found 29 such proteins, based on domain similarity, in Arabidopsis, and 14 in rice. However, many components could not be identified, and most of those identified have assigned roles that are not related to either COPI or CCV transport. We conclude that COPII is probably the only active vesicle system in chloroplasts, at least in the model plants. The evolutionary implications of the findings are discussed. PMID:25137124

  17. Bioinformatic indications that COPI- and clathrin-based transport systems are not present in chloroplasts: an Arabidopsis model.

    PubMed

    Lindquist, Emelie; Alezzawi, Mohamed; Aronsson, Henrik

    2014-01-01

    Coated vesicle transport occurs in the cytosol of yeast, mammals and plants. It consists of three different transport systems, the COPI, COPII and clathrin coated vesicles (CCV), all of which participate in the transfer of proteins and lipids between different cytosolic compartments. There are also indications that chloroplasts have a vesicle transport system. Several putative chloroplast-localized proteins, including CPSAR1 and CPRabA5e with similarities to cytosolic COPII transport-related proteins, were detected in previous experimental and bioinformatics studies. These indications raised the hypothesis that a COPI- and/or CCV-related system may be present in chloroplasts, in addition to a COPII-related system. To test this hypothesis we bioinformatically searched for chloroplast proteins that may have similar functions to known cytosolic COPI and CCV components in the model plants Arabidopsis thaliana and Oryza sativa (subsp. japonica) (rice). We found 29 such proteins, based on domain similarity, in Arabidopsis, and 14 in rice. However, many components could not be identified, and most of those identified have assigned roles that are not related to either COPI or CCV transport. We conclude that COPII is probably the only active vesicle system in chloroplasts, at least in the model plants. The evolutionary implications of the findings are discussed.
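The domain-similarity search described above reduces, at its simplest, to a set-intersection test: flag a chloroplast protein as a candidate if it shares a domain with a known cytosolic COPI or CCV component. The protein identifiers and domain names below are toy data, not results from the study.

```python
# Toy domain-similarity screen (hypothetical domains and protein IDs): a
# chloroplast protein is a candidate if its domain set overlaps the domain
# set of any known cytosolic COPI/CCV component.

cytosolic_components = {
    "coatomer_alpha": {"WD40", "COPI_C"},
    "clathrin_heavy": {"Clathrin_propel", "Clathrin_H_link"},
}
chloroplast_proteins = {
    "AT1G01000": {"WD40", "Kinase"},     # shares WD40 -> candidate
    "AT2G02000": {"Rubisco_small"},      # no shared domain -> not a candidate
}

candidates = {
    prot for prot, domains in chloroplast_proteins.items()
    if any(domains & comp_domains
           for comp_domains in cytosolic_components.values())
}
```

As the abstract notes, such a screen only finds domain-level similarity; a shared domain (like WD40 here) often reflects an unrelated function, which is why most hits had roles unrelated to COPI or CCV transport.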

  18. MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics.

    PubMed

    Egea, Jose A; Henriques, David; Cokelaer, Thomas; Villaverde, Alejandro F; MacNamara, Aidan; Danciu, Diana-Patricia; Banga, Julio R; Saez-Rodriguez, Julio

    2014-05-10

    Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer programming (MINLP) problems, and variable neighborhood search (VNS) for integer programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods.
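To make the VNS idea concrete, here is a minimal variable neighborhood search for a toy integer program. This is a simplified sketch of the general technique, not MEIGO's actual VNS implementation or API; the shaking scheme and the objective function are illustrative.

```python
import random

# Minimal variable neighborhood search (VNS) for integer vectors: shake the
# incumbent in a neighborhood of size k, move on improvement, widen k otherwise.
def vns(objective, x0, k_max=3, iters=200, seed=0):
    rng = random.Random(seed)
    best, best_val = list(x0), objective(x0)
    k = 1
    for _ in range(iters):
        # shake: perturb k randomly chosen coordinates of the incumbent by +-1
        cand = list(best)
        for _ in range(k):
            i = rng.randrange(len(cand))
            cand[i] += rng.choice([-1, 1])
        val = objective(cand)
        if val < best_val:          # improvement: accept and reset neighborhood
            best, best_val, k = cand, val, 1
        else:                       # no improvement: cycle to a wider one
            k = k % k_max + 1
    return best, best_val

# toy integer program: minimize sum of (x_i - 3)^2 over integer vectors
sol, val = vns(lambda x: sum((xi - 3) ** 2 for xi in x), [0, 0, 0])
```

Real implementations such as MEIGO's add a local-search phase after each shake and support parallel cooperative runs, but the shake/move/widen loop above is the skeleton of the method.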

  19. MEIGO: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics

    PubMed Central

    2014-01-01

    Background Optimization is the key to solving many problems in computational biology. Global optimization methods, which provide a robust methodology, and metaheuristics in particular have proven to be the most efficient methods for many applications. Despite their utility, there is a limited availability of metaheuristic tools. Results We present MEIGO, an R and Matlab optimization toolbox (also available in Python via a wrapper of the R version), that implements metaheuristics capable of solving diverse problems arising in systems biology and bioinformatics. The toolbox includes the enhanced scatter search method (eSS) for continuous nonlinear programming (cNLP) and mixed-integer programming (MINLP) problems, and variable neighborhood search (VNS) for integer programming (IP) problems. Additionally, the R version includes BayesFit for parameter estimation by Bayesian inference. The eSS and VNS methods can be run on a single thread or in parallel using a cooperative strategy. The code is supplied under GPLv3 and is available at http://www.iim.csic.es/~gingproc/meigo.html. Documentation and examples are included. The R package has been submitted to BioConductor. We evaluate MEIGO against optimization benchmarks, and illustrate its applicability to a series of case studies in bioinformatics and systems biology where it outperforms other state-of-the-art methods. Conclusions MEIGO provides a free, open-source platform for optimization that can be applied to multiple domains of systems biology and bioinformatics. It includes efficient state-of-the-art metaheuristics, and its open and modular structure allows the addition of further methods. PMID:24885957

  20. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS)

    PubMed Central

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-01-01

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case of MEMS (Micro Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present the experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subjected to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of the statistical features of the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that detection accuracy higher than 90% is achievable. PMID:27271630

  1. Experimental Identification of Smartphones Using Fingerprints of Built-In Micro-Electro Mechanical Systems (MEMS).

    PubMed

    Baldini, Gianmarco; Steri, Gary; Dimc, Franc; Giuliani, Raimondo; Kamnik, Roman

    2016-06-03

    The correct identification of smartphones has various applications in the field of security or the fight against counterfeiting. As the level of sophistication in counterfeit electronics increases, detection procedures must become more accurate but also not destructive for the smartphone under testing. Some components of the smartphone are more likely to reveal their authenticity even without a physical inspection, since they are characterized by hardware fingerprints detectable by simply examining the data they provide. This is the case of MEMS (Micro Electro-Mechanical Systems) components like accelerometers and gyroscopes, where tiny differences and imprecisions in the manufacturing process determine unique patterns in the data output. In this paper, we present the experimental evaluation of the identification of smartphones through their built-in MEMS components. In our study, three different phones of the same model are subjected to repeatable movements (composing a repeatable scenario) using a high-precision robotic arm. The measurements from MEMS for each repeatable scenario are collected and analyzed. The identification algorithm is based on the extraction of the statistical features of the collected data for each scenario. The features are used in a support vector machine (SVM) classifier to identify the smartphone. The results of the evaluation are presented for different combinations of features and Inertial Measurement Unit (IMU) outputs, which show that detection accuracy higher than 90% is achievable.
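The pipeline described above (extract statistical features per trace, then classify the device) can be sketched on synthetic data. A nearest-centroid classifier stands in for the paper's SVM to keep the example dependency-free, and the per-device bias offsets are assumptions, not measured values.

```python
import numpy as np

# Device fingerprinting sketch: statistical features (mean, std) of each
# sensor trace feed a nearest-centroid classifier (SVM stand-in).
rng = np.random.default_rng(3)

def features(trace):
    return np.array([trace.mean(), trace.std()])

# three phones of the same model, distinguished only by tiny assumed
# manufacturing bias offsets in their accelerometer output (in g)
offsets = [0.00, 0.03, 0.06]
train = {p: [features(off + 0.01 * rng.standard_normal(500))
             for _ in range(20)]
         for p, off in enumerate(offsets)}
centroids = {p: np.mean(fs, axis=0) for p, fs in train.items()}

def identify(trace):
    f = features(trace)
    return min(centroids, key=lambda p: np.linalg.norm(f - centroids[p]))

# a fresh trace from phone 1 should be attributed to phone 1
predicted = identify(offsets[1] + 0.01 * rng.standard_normal(500))
```

The paper uses richer feature sets and a trained SVM, but the principle is the same: manufacturing imprecision shifts the feature distribution of each device enough to separate phones of the same model.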

  2. Performance of the prototype gas recirculation system with built-in RGA for INO RPC system

    NASA Astrophysics Data System (ADS)

    Bhuyan, M.; Datar, V. M.; Joshi, A.; Kalmani, S. D.; Mondal, N. K.; Rahman, M. A.; Satyanarayana, B.; Verma, P.

    2012-01-01

    An open-loop gas recovery and recirculation system has been developed for the INO RPC system. The gas mixture coming from the RPC exhaust is first desiccated by passing through molecular sieves (3 Å + 4 Å). Subsequent scrubbing over basic activated alumina removes toxic and acidic contaminants. The isobutane and Freon are then separated by diffusion and liquefied by fractional condensation, cooling to -26 °C. A Residual Gas Analyser (RGA) is used in the loop to study the performance of the recirculation system. The results of the RGA analysis will be discussed.

  3. Crowdsourcing for bioinformatics

    PubMed Central

    Good, Benjamin M.; Su, Andrew I.

    2013-01-01

    Motivation: Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Results: Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume ‘microtasks’ and systems for solving high-difficulty ‘megatasks’. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches. Contact: bgood@scripps.edu PMID:23782614

  4. Recommendation Systems for Geoscience Data Portals Built by Analyzing Usage Patterns

    NASA Astrophysics Data System (ADS)

    Crosby, C.; Nandigam, V.; Baru, C.

    2009-04-01

    Since its launch five years ago, the National Science Foundation-funded GEON Project (www.geongrid.org) has been providing access to a variety of geoscience data sets such as geologic maps and other geographic information system (GIS)-oriented data, paleontologic databases, gravity and magnetics data and LiDAR topography via its online portal interface. In addition to data, the GEON Portal also provides web-based tools and other resources that enable users to process and interact with data. Examples of these tools include functions to dynamically map and integrate GIS data, compute synthetic seismograms, and produce custom digital elevation models (DEMs) with user-defined parameters such as resolution. The GEON Portal, built on the GridSphere portal framework, allows us to capture user interaction with the system. In addition to the site-access statistics captured by tools like Google Analytics (hits per unit time, search keywords, operating systems, browsers, and referring sites), we also record additional statistics such as which data sets are being downloaded and in what formats, processing parameters, and navigation pathways through the portal. With over four years of data now available from the GEON Portal, this record of usage is a rich resource for exploring how earth scientists discover and utilize online data sets. Furthermore, we propose that this data could ultimately be harnessed to optimize the way users interact with the data portal, design intelligent processing and data management systems, and make recommendations on algorithm settings and other available relevant data. The paradigm of integrating popular and commonly used patterns to make recommendations to a user is well established in the world of e-commerce, where users receive suggestions on books, music and other products that they may find interesting based on their website browsing and purchasing history, as well as the patterns of fellow users who have made similar
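A first step toward the usage-pattern recommendations proposed above is simple item co-occurrence: datasets downloaded together in past portal sessions are recommended to users viewing one of them. The session logs and dataset names below are toy examples, not GEON data.

```python
from collections import Counter
from itertools import combinations

# Co-occurrence recommender over download logs (toy sessions, made-up names):
# count how often pairs of datasets appear in the same session.
sessions = [
    {"lidar_socal", "geologic_map", "gravity"},
    {"lidar_socal", "geologic_map"},
    {"gravity", "magnetics"},
]

cooc = Counter()
for s in sessions:
    for a, b in combinations(sorted(s), 2):
        cooc[(a, b)] += 1   # count the pair in both directions so lookups
        cooc[(b, a)] += 1   # by either item work

def recommend(item, top_n=2):
    scored = [(n, other) for (it, other), n in cooc.items() if it == item]
    return [other for n, other in sorted(scored, reverse=True)][:top_n]

recs = recommend("lidar_socal")
```

E-commerce recommenders layer user similarity and ratings on top of this, but pairwise co-occurrence is the usual starting point when all one has is access logs.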

  5. Experiences with Testing the Largest Ground System NASA Has Ever Built

    NASA Technical Reports Server (NTRS)

    Lehtonen, Ken; Messerly, Robert

    2003-01-01

    In the 1980s, the National Aeronautics and Space Administration (NASA) embarked upon a major Earth-focused program called Mission to Planet Earth. The Goddard Space Flight Center (GSFC) was selected to manage and develop a key component - the Earth Observing System (EOS). The EOS consisted of four major missions designed to monitor the Earth, with four spacecraft: Terra (launched December 1999), Aqua (launched May 2002), ICESat (Ice, Cloud, and Land Elevation Satellite, launched January 2003), and Aura (scheduled for launch January 2004). The purpose of these missions was to provide support for NASA's long-term research effort for determining how human-induced and natural changes affect our global environment. The EOS Data and Information System (EOSDIS), a globally distributed, large-scale scientific system, was built to support EOS. Its primary function is to capture, collect, process, and distribute the most voluminous set of remotely sensed scientific data to date, estimated at 350 Gbytes per day. The EOSDIS is composed of a diverse set of elements with functional capabilities that require the implementation of a complex set of computers, high-speed networks, mission-unique equipment, and associated Information Technology (IT) software along with mission-specific software. All missions are constrained by schedule, budget, and staffing resources, and rigorous testing has been shown to be critical to the success of each mission. This paper addresses the challenges associated with the planning, test definition, resource scheduling, execution, and discrepancy reporting involved in the mission readiness testing of a ground system on the scale of EOSDIS. The size and complexity of the mission systems supporting the Aqua flight operations, for example, combined with the limited resources available, prompted the project to challenge the prevailing testing culture. The resulting success of the Aqua Mission Readiness Testing (MRT) program was due in no

  6. Bioinformatics for Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas, ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture, which further supports management of the bioastronautics research roadmap, identifying gaps that still remain and enabling the determination of which risks have been addressed.

  7. Influence of various alternative bedding materials on pododermatitis in broilers raised in a built-up litter system

    USDA-ARS?s Scientific Manuscript database

    Broilers in the United States are frequently raised on built-up litter systems, primarily bedded with pine wood chips (shavings) or sawdust. There is continuing interest in alternative bedding materials as pine products are often in short supply and prices rise accordingly. Alternative bedding mat...

  8. Using Geographic Information Systems (GIS) to assess the role of the built environment in influencing obesity: a glossary.

    PubMed

    Thornton, Lukar E; Pearce, Jamie R; Kavanagh, Anne M

    2011-07-01

    Features of the built environment are increasingly being recognised as potentially important determinants of obesity. This has come about, in part, because of advances in methodological tools such as Geographic Information Systems (GIS). GIS has made the procurement of data related to the built environment easier and given researchers the flexibility to create a new generation of environmental exposure measures such as the travel time to the nearest supermarket or calculations of the amount of neighbourhood greenspace. Given the rapid advances in the availability of GIS data and the relative ease of use of GIS software, a glossary on the use of GIS to assess the built environment is timely. As a case study, we draw on aspects of the food and physical activity environments as they might apply to obesity, to define key GIS terms related to data collection, concepts, and the measurement of environmental features.
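One of the exposure measures mentioned above, proximity to the nearest supermarket, can be sketched with a straight-line (great-circle) distance; the coordinates below are made up, and real studies would typically use road-network travel time instead of crow-flies distance.

```python
import math

# Great-circle (haversine) distance between two lat/lon points, in km.
def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

home = (-37.81, 144.96)  # illustrative residence location
supermarkets = {"s1": (-37.80, 144.95), "s2": (-37.90, 145.10)}

# exposure measure: which supermarket is nearest to the residence
nearest = min(supermarkets, key=lambda s: haversine_km(*home, *supermarkets[s]))
```

In a GIS workflow this computation is run for every residence in a study area, producing the per-person exposure variable that is then related to obesity outcomes.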

  9. Pattern recognition in bioinformatics.

    PubMed

    de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T

    2013-09-01

    Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.
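The clustering task described above can be illustrated with a tiny k-means implementation on one-dimensional data; the "expression values" and initial centers are contrived so the example is deterministic.

```python
import numpy as np

# Minimal 1-D k-means: alternate assignment (nearest center) and update
# (center moves to the mean of its assigned points).
def kmeans(x, centers, iters=10):
    centers = np.asarray(centers, dtype=float)
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        # assignment step: label each point with its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        # update step: move each center to the mean of its points
        for k in range(centers.size):
            if np.any(labels == k):
                centers[k] = x[labels == k].mean()
    return labels, centers

x = np.array([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])  # two obvious groups
labels, centers = kmeans(x, centers=[0.0, 1.0])
```

Classification works the same way in reverse: with labels known in advance, the centers (or a richer model) are fit per class and new instances are assigned to the nearest one, which is the nearest-mean classifier taught in most introductory pattern recognition courses.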

  10. Emerging strengths in Asia Pacific bioinformatics

    PubMed Central

    Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee

    2008-01-01

    The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20–23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB actively involves researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, the InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB), held immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference, provided ample opportunity for raising the awareness of mainstream biochemists and molecular biologists in the region of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas illustrating the specific contributions made by APBioNet to global bioinformatics efforts. PMID:19091008

  11. Systems based on photogrammetry to evaluation of built heritage: tentative guidelines and control parameters

    NASA Astrophysics Data System (ADS)

    Valença, J.

    2014-06-01

    Technological innovations based on close-range imaging have arisen, driven both by advances in mathematical algorithms and by new acquisition equipment. This evolution allows data to be acquired with large, powerful sensors and processed quickly and efficiently. The preservation of built heritage has applied these innovations very successfully in its different areas of intervention, namely photogrammetry, digital image processing and multispectral image analysis, and commercial software and hardware packages have emerged. Guidelines for best-practice procedures, and for validating the results usually obtained, should therefore be established. Simple concepts, easy to understand even for nonexperts in the field, should relate the characteristics of: (i) the objects under study; (ii) the acquisition conditions; (iii) the methods applied; and (iv) the equipment used. In this scope, establishing the limits of validity of the methods and a comprehensive protocol to achieve the precision and accuracy required for structural analysis is a mandatory task. The application of close-range photogrammetry to build 3D geometric models and to evaluate displacements is presented here. Parameters such as distance-to-object, sensor size and focal length are correlated with the precision and accuracy achieved for displacement measurement in both laboratory and on-site environments. This paper presents an early-stage study whose aim is to define simple expressions that estimate the equipment characteristics and/or image-acquisition conditions required for a given precision and accuracy. The results will be used to define tentative guidelines covering the whole procedure, from image acquisition to the final coordinates and displacements.
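One standard expression relating distance-to-object, sensor pixel size and focal length is the ground sample distance (the object-space size of one pixel under a pinhole camera model); this is a textbook photogrammetric relation offered as illustration, not necessarily the expression the paper derives:

```python
def ground_sample_distance(pixel_size_m, focal_length_m, distance_m):
    """Object-space size of one pixel for a pinhole camera:
    GSD = pixel size * distance / focal length."""
    return pixel_size_m * distance_m / focal_length_m

# 5 um pixels, 50 mm lens, object 10 m away -> 1 mm per pixel
print(ground_sample_distance(5e-6, 0.05, 10.0))  # 0.001
```

The GSD bounds the achievable precision: sub-millimetre displacement measurement at 10 m with this hypothetical setup would require sub-pixel matching accuracy.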

  12. Development of built-in type and noninvasive sensor systems for smart artificial heart.

    PubMed

    Yamagishi, Hiromasa; Sankai, Yoshiyuki; Yamane, Takashi; Jikuya, Tomoaki; Tsutsui, Tatsuo

    2003-01-01

    For an implantable artificial heart, it is very important to grasp both the condition of the device and the physiologic state of the patient. In our laboratory, a smart artificial heart (SAH) has been proposed and developed: an artificial heart equipped with noninvasive sensors, i.e., a sensorized and intelligent artificial heart for safe and effective treatment. In this study, the following sensor systems for the SAH are described: a noninvasive blood temperature sensor system, a noninvasive blood pressure sensor system, and a noninvasive small blood flow sensor system. These noninvasive sensor systems were integrated around the artificial heart and evaluated in mockup and animal experiments. Blood temperature could be measured stably by the temperature sensor system. Aortic pressure was estimated, and sucking conditions were detected, by the pressure sensor system. Blood flow was measured by the flow meter system within 10% error. These experiments confirmed the effectiveness of the sensor systems for the SAH.

  13. Measuring whole-plant transpiration gravimetrically: a scalable automated system built from components

    Treesearch

    Damian Cirelli; Victor J. Lieffers; Melvin T. Tyree

    2012-01-01

    Measuring whole-plant transpiration is highly relevant considering the increasing interest in understanding and improving plant water use at the whole-plant level. We present an original software package (Amalthea) and a design for creating a system that measures transpiration using laboratory balances, based on readily available commodity hardware. The system is...

  14. Surpassing Shanghai: An Agenda for American Education Built on the World's Leading Systems

    ERIC Educational Resources Information Center

    Tucker, Marc S., Ed.

    2011-01-01

    This book answers a simple question: How would one redesign the American education system if the aim was to take advantage of everything that has been learned by countries with the world's best education systems? With a growing number of countries outperforming the United States on the most respected comparisons of student achievement--and…

  15. Investigating CRISPR-Cas systems in Clostridium botulinum via bioinformatics tools.

    PubMed

    Negahdaripour, Manica; Nezafat, Navid; Hajighahramani, Nasim; Rahmatabadi, Seyyed Soheil; Ghasemi, Younes

    2017-10-01

    The Clustered regularly interspaced short palindromic repeats (CRISPR) systems are a type of innate immunity found in some prokaryotes, which protect them against alien genetic elements by targeting foreign nucleic acids. Some other functions are also attributed to these systems. Clostridium botulinum bacteria produce botulinum neurotoxins (BoNT), one of the deadliest known toxins for humans and some animals. Food poisoning due to these bacteria is still a challenge in food industries. On the other hand, BoNT has been widely investigated for therapeutic applications including different muscle disorders. Bont genes may be located on bacterial chromosomes, plasmids, or even prophages. Generally, the genomes of Cl. botulinum show a high level of plasticity. In order to investigate the presence and characteristics of CRISPRs in these anaerobic bacteria, an in silico study on 113 CRISPR arrays identified in 38 Cl. botulinum strains was performed. A high occurrence of CRISPR arrays (80%) was found, with a remarkable frequency on plasmids. Several CRISPR-associated (Cas) proteins from different types were recognized in the studied strains, mostly Cas6. The CRISPR-Cas systems were identified as type I or III, but no type II. The spacers showed more homology with bacterial plasmids than with phages. Active CRISPR-Cas systems can prevent the transfer of foreign genes, which may also include bont genes. This study provides the first insight into the probable roles of CRISPR-Cas systems in Cl. botulinum strains, such as toxigenicity. Copyright © 2017 Elsevier B.V. All rights reserved.
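The array structure analysed here, direct repeats separated by variable spacers, can be illustrated with a deliberately naive parser that extracts spacers between exact copies of a known repeat; real CRISPR-detection tools handle degenerate and initially unknown repeats, and the sequences below are invented:

```python
def find_spacers(genome, repeat):
    """Extract spacer sequences lying between consecutive exact copies
    of a known direct repeat (a deliberately naive array parser)."""
    positions, start = [], 0
    while True:
        i = genome.find(repeat, start)
        if i < 0:
            break
        positions.append(i)
        start = i + len(repeat)
    return [genome[positions[k] + len(repeat):positions[k + 1]]
            for k in range(len(positions) - 1)]

repeat = "GTTTT"                             # hypothetical direct repeat
genome = "AAGTTTTACGTGTTTTCCAAGTTTTGG"       # toy array: repeat-spacer-repeat-...
print(find_spacers(genome, repeat))          # ['ACGT', 'CCAA']
```

In a real analysis the extracted spacers would then be compared (e.g. by BLAST) against phage and plasmid databases, which is how homology conclusions like those in this study are drawn.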

  16. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone

    PubMed Central

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-01-01

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of a smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Thanks to the AFE design, the whole sensor is only 58 × 50 × 10 mm, suitable for wearable monitoring, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the smartphone’s built-in kinetic sensors, the proposed system can compute and recognize the user’s physical activity, and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and identifying the most common abnormal ECG patterns in different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term, context-aware ECG monitoring with no extra cost for kinetic sensor design, thanks to the widespread smartphone. PMID:25996508

  17. A Wearable Context-Aware ECG Monitoring System Integrated with Built-in Kinematic Sensors of the Smartphone.

    PubMed

    Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye

    2015-05-19

    Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of a smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. Thanks to the AFE design, the whole sensor is only 58 × 50 × 10 mm, suitable for wearable monitoring, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the smartphone's built-in kinetic sensors, the proposed system can compute and recognize the user's physical activity, and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnosis accuracy for arrhythmias and identifying the most common abnormal ECG patterns in different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term, context-aware ECG monitoring with no extra cost for kinetic sensor design, thanks to the widespread smartphone.

  18. [Construction and application of bioinformatic analysis platform for aquatic pathogen based on the MilkyWay-2 supercomputer].

    PubMed

    Xiang, Fang; Ningqiu, Li; Xiaozhe, Fu; Kaibin, Li; Qiang, Lin; Lihui, Liu; Cunbin, Shi; Shuqin, Wu

    2015-07-01

    As a key component of life science, bioinformatics has been widely applied in genomics, transcriptomics, and proteomics. However, the requirement of high-performance computers rather than common personal computers for constructing a bioinformatics platform has significantly limited the application of bioinformatics in aquatic science. In this study, we constructed a bioinformatic analysis platform for aquatic pathogens based on the MilkyWay-2 supercomputer. The platform consisted of three functional modules: genomic and transcriptomic sequencing data analysis, protein structure prediction, and molecular dynamics simulations. To validate the practicability of the platform, we performed bioinformatic analysis on aquatic pathogenic organisms. For example, genes of Flavobacterium johnsoniae M168 were identified and annotated via Blast searches, GO and InterPro annotations. Protein structural models for five small segments of grass carp reovirus HZ-08 were constructed by homology modeling. Molecular dynamics simulations were performed on outer membrane protein A of Aeromonas hydrophila, and the changes of system temperature, total energy, root mean square deviation and conformation of the loops during equilibration were also observed. These results showed that the bioinformatic analysis platform for aquatic pathogens has been successfully built on the MilkyWay-2 supercomputer. This study will provide insights into the construction of bioinformatic analysis platforms for other subjects.

  19. Geochemistry of rare earth elements in a passive treatment system built for acid mine drainage remediation.

    PubMed

    Prudêncio, Maria Isabel; Valente, Teresa; Marques, Rosa; Sequeira Braga, Maria Amália; Pamplona, Jorge

    2015-11-01

    Rare earth elements (REE) were used to assess attenuation processes in a passive system for acid mine drainage (AMD) treatment (Jales, Portugal). Hydrochemical parameters and REE contents in water, soils and sediments were obtained along the treatment system after summer and after winter. After summer, REE contents in the water decrease as a result of interaction with limestone, while in the wetlands REE are significantly released from soil particles into the water. After winter, higher water dynamics favor treatment effectiveness and performance, since REE contents decrease along the system; La and Ce are preferentially sequestered by ochre sludge but released into the water in the wetlands, influencing the REE pattern of the creek water. Thus, REE fractionation occurs in passive treatment systems and can be used as a tracer to follow up and understand the geochemical processes that promote the remediation of AMD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. The Growing Footprint of Climate Change: Can Systems Built Today Cope with Tomorrow's Weather Extremes?

    SciTech Connect

    Kintner-Meyer, Michael CW; Kraucunas, Ian P.

    2013-07-11

    This article describes how current climate conditions, with increasingly extreme storms, droughts, and heat waves and their ensuing effects on water quality and levels, are adding stress to an already aging power grid. Moreover, it explains how evaluations of that grid, built upon past weather patterns, are inadequate for assessing whether the nation's energy systems can cope with future climate change. The authors make the case for investing in the development of robust, integrated electricity planning tools that account for these climate change factors as a means of enhancing electricity infrastructure resilience.

  1. Analysis of Bioactive Amino Acids from Fish Hydrolysates with a New Bioinformatic Intelligent System Approach.

    PubMed

    Elaziz, Mohamed Abd; Hemdan, Ahmed Monem; Hassanien, AboulElla; Oliva, Diego; Xiong, Shengwu

    2017-09-07

    The current economics of the fish protein industry demand rapid, accurate and expressive prediction algorithms at every step of protein production, especially under the challenge of global climate change. Such algorithms help to predict and analyze functional and nutritional quality and consequently to control food allergies in hyperallergic patients, since determining these concentrations through laboratory experiments is expensive and time-consuming, especially in large-scale projects. This paper therefore introduces a new intelligent algorithm: an adaptive neuro-fuzzy inference system based on the whale optimization algorithm. The algorithm is used to predict the concentration levels of bioactive amino acids in fish protein hydrolysates at different times during the year, with the whale optimization algorithm determining the optimal parameters of the adaptive neuro-fuzzy inference system. Compared with other methods, the results indicate the higher performance of the proposed algorithm.
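The whale optimization algorithm used here for parameter tuning can be sketched minimally; this illustrative version minimizes a toy sphere function rather than tuning ANFIS parameters, and all hyperparameters are arbitrary assumptions:

```python
import math
import random

def woa_minimize(f, dim, n_whales=20, iters=200, lb=-5.0, ub=5.0, seed=1):
    """Minimal Whale Optimization Algorithm sketch: shrinking encircling,
    random-whale exploration, and spiral updates around the best solution."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_whales)]
    best = min(X, key=f)[:]
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                 # decreases linearly 2 -> 0
        for x in X:
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                # encircle the best whale (|A| < 1) or explore near a random one
                ref = best if abs(A) < 1 else X[rng.randrange(n_whales)]
                x[:] = [ref[j] - A * abs(C * ref[j] - x[j]) for j in range(dim)]
            else:
                # logarithmic spiral toward the best whale
                l = rng.uniform(-1, 1)
                x[:] = [abs(best[j] - x[j]) * math.exp(l) * math.cos(2 * math.pi * l)
                        + best[j] for j in range(dim)]
            x[:] = [min(max(v, lb), ub) for v in x]  # clamp to bounds
            if f(x) < f(best):
                best = x[:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, val = woa_minimize(sphere, dim=2)
```

In the paper's setting, `f` would instead score an ANFIS configuration (e.g. by cross-validated prediction error) and `dim` would be the number of tunable ANFIS parameters.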

  2. BioSig: A bioinformatic system for studying the mechanism of intra-cell signaling

    SciTech Connect

    Parvin, B.; Cong, G.; Fontenay, G.; Taylor, J.; Henshall, R.; Barcellos-Hoff, M.H.

    2000-12-15

    Mapping inter-cell signaling pathways requires an integrated view of experimental and informatic protocols. BioSig provides the foundation for cataloging inter-cell responses as a function of particular conditioning, treatment, staining, etc. for either in vivo or in vitro experiments. This paper outlines the system architecture, a functional data model for representing experimental protocols, algorithms for image analysis, and the required statistical analysis. The architecture provides remote shared operation of an inverted optical microscope, and couples instrument operation with image acquisition and annotation. The information is stored in an object-oriented database. The algorithms extract structural information such as morphology and organization, and map it to functional information such as inter-cellular responses. An example of the use of this system is included.

  3. An object-oriented programming system for the integration of internet-based bioinformatics resources.

    PubMed

    Beveridge, Allan

    2006-01-01

    The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.

  4. Humidity compensation of bad-smell sensing system using a detector tube and a built-in camera

    NASA Astrophysics Data System (ADS)

    Hirano, Hiroyuki; Nakamoto, Takamichi

    2011-09-01

    We developed a low-cost sensing system, robust against humidity change, for detecting and estimating the concentration of bad smells such as hydrogen sulfide and ammonia. In a previous study, we developed an automated measurement system for a gas detector tube using a built-in camera instead of the conventional manual inspection of the tube. Concentrations detectable by the developed system range from a few tens of ppb to a few tens of ppm. However, we previously found that the estimated concentration depends not only on the actual concentration but also on humidity. Here, we established a method to correct the influence of humidity by creating a regression function with discoloration rate and humidity as its inputs. We studied two regression methods (backpropagation and a radial basis function network) and evaluated them. Consequently, the system successfully estimated the concentration at a practical level even when humidity changes.
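The correction described, a regression function taking discoloration rate and humidity as inputs, can be sketched as an ordinary least-squares plane fit; the paper's actual models are a backpropagation network and an RBF network, and the calibration data below are synthetic:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with partial pivoting."""
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_plane(samples):
    """Least-squares fit conc ~ w0 + w1*rate + w2*humidity via normal equations."""
    X = [[1.0, r, h] for r, h, _ in samples]
    y = [c for _, _, c in samples]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * yv for row, yv in zip(X, y)) for i in range(3)]
    return solve3(XtX, Xty)

# hypothetical calibration surface: conc = 2 + 10*rate - 0.05*humidity
data = [(r / 10, h, 2 + 10 * (r / 10) - 0.05 * h)
        for r in range(1, 6) for h in (30, 50, 70)]
w = fit_plane(data)
```

A linear surface is the simplest possible corrector; the networks in the paper can capture the nonlinear humidity dependence a plane cannot.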

  5. "A system biology" approach to bioinformatics and functional genomics in complex human diseases: arthritis.

    PubMed

    Attur, M G; Dave, M N; Tsunoyama, K; Akamatsu, M; Kobori, M; Miki, J; Abramson, S B; Katoh, M; Amin, A R

    2002-10-01

    Human and other annotated genome sequences have facilitated the generation of vast amounts of correlative data from human/animal genetics and from normal and disease-affected tissues in complex diseases such as arthritis, using gene/protein chips and SNP analysis. These data sets include genes/proteins whose functions are partially known at the cellular level or may be completely unknown (e.g. ESTs). Thus, genomic research has transformed molecular biology from a "data poor" to a "data rich" science, allowing further division into subpopulations of subcellular fractions, which are often given an "-omic" suffix. These disciplines have to converge at a systemic level to examine the structure and dynamics of cellular and organismal function. The challenge of characterizing ESTs linked to complex diseases is like interpreting sharp images on a blurred background, and therefore requires a multidimensional screen for functional genomics ("functionomics") in tissues, mice and zebrafish models, which intertwines various approaches and readouts to study the development and homeostasis of a system. In summary, the post-genomic era of functionomics will help to narrow the gap between correlative and causative data through hypothesis-driven research using a systems approach, integrating "intercoms" of interacting and interdependent disciplines into a unified whole, as described in this review for arthritis.

  6. Teaching Folder Management System for the Enhancement of Engineering and Built Environment Faculty Program

    ERIC Educational Resources Information Center

    Ab-Rahman, Mohammad Syuhaimi; Mustaffa, Muhamad Azrin Mohd; Abdul, Nasrul Amir; Yusoff, Abdul Rahman Mohd; Hipni, Afiq

    2015-01-01

    A strong, systematic and well-executed management system will be able to minimize and coordinate workload. A number of committees need to be developed, which are joined by the department staffs to achieve the objectives that have been set. Another important aspect is the monitoring department in order to ensure that the work done is correct and in…

  7. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers.

    PubMed

    Kühnlenz, Florian; Nardelli, Pedro H J

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems whose constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping its resistors; the delivered power is in turn a non-linear function that depends on the other agents' behavior, the agent's own internal state, its global state perception, the information received from its neighbors via the communication network, and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors, and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimum gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation.
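The physical layer described above, parallel resistors fed by a common source, follows directly from circuit theory; the source voltage and internal resistance below are illustrative assumptions, not values from the paper:

```python
def delivered_power(v_src, r_internal, branch_resistances):
    """Power delivered to each parallel branch of a resistive load fed by
    a voltage source with internal resistance."""
    g = sum(1.0 / r for r in branch_resistances)   # total load conductance
    r_load = 1.0 / g                               # equivalent parallel resistance
    v_load = v_src * r_load / (r_internal + r_load)  # voltage divider
    return [v_load ** 2 / r for r in branch_resistances]

# three branches: two equal resistors and one with half the conductance
p = delivered_power(v_src=10.0, r_internal=1.0, branch_resistances=[2.0, 2.0, 4.0])
```

The coupling that drives the model's game dynamics is visible here: every resistor an agent adds lowers `r_load`, which drops `v_load` and hence the power delivered to everyone, including the agent itself.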

  8. Dynamics of Complex Systems Built as Coupled Physical, Communication and Decision Layers

    PubMed Central

    Kühnlenz, Florian; Nardelli, Pedro H. J.

    2016-01-01

    This paper proposes a simple model to capture the complexity of multilayer systems whose constituent layers affect, and are affected by, each other. The physical layer is a circuit composed of a power source and resistors in parallel. Every individual agent aims at maximizing its own delivered power by adding, removing or keeping its resistors; the delivered power is in turn a non-linear function that depends on the other agents’ behavior, the agent's own internal state, its global state perception, the information received from its neighbors via the communication network, and a randomized selfishness. We develop an agent-based simulation to analyze the effects of the number of agents (system size), communication network topology, communication errors, and the minimum power gain that triggers a behavioral change on the system dynamics. Our results show that a wave-like behavior at the macro level (caused by individual changes in the decision layer) can only emerge for a specific system size. The ratio between cooperators and defectors depends on the minimum gain assumed: lower minimum gains lead to less cooperation, and vice versa. Different communication network topologies imply different levels of power utilization and fairness at the physical layer, and a certain level of error in the communication layer induces more cooperation. PMID:26730590

  9. Creating and Using a Computer Networking and Systems Administration Laboratory Built under Relaxed Financial Constraints

    ERIC Educational Resources Information Center

    Conlon, Michael P.; Mullins, Paul

    2011-01-01

    The Computer Science Department at Slippery Rock University created a laboratory for its Computer Networks and System Administration and Security courses under relaxed financial constraints. This paper describes the department's experience designing and using this laboratory, including lessons learned and descriptions of some student projects…

  10. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    NASA Astrophysics Data System (ADS)

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-09-01

    Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations.

  11. Computational intelligence techniques in bioinformatics.

    PubMed

    Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I

    2013-12-01

    Computational intelligence (CI) is a well-established paradigm with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitate intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of the CI techniques in bioinformatics. We will show how CI techniques including neural networks, restricted Boltzmann machines, deep belief networks, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines could be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and the prediction of protein function and structure. We discuss some representative methods to provide inspiring examples to illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented and an extensive bibliography is included. Copyright © 2013 Elsevier Ltd. All rights reserved.
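As a minimal illustration of one of the listed techniques, here is a toy genetic algorithm solving OneMax (maximize the number of 1-bits in a string); all parameters are arbitrary choices for the sketch, not values from any cited method:

```python
import random

def onemax_ga(length=20, pop_size=30, gens=60, p_mut=0.05, seed=7):
    """Minimal generational GA: binary tournament selection,
    one-point crossover, per-bit flip mutation."""
    rng = random.Random(seed)
    fitness = sum  # OneMax: fitness is simply the count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)  # binary tournament
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < p_mut else bit for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = onemax_ga()
```

In the bioinformatics settings the article surveys, the bitstring would instead encode, e.g., a gene subset for feature selection, with fitness given by a classifier's cross-validated accuracy.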

  12. Evolutionary dynamics of RNA-like replicator systems: A bioinformatic approach to the origin of life.

    PubMed

    Takeuchi, Nobuto; Hogeweg, Paulien

    2012-09-01

    We review computational studies on prebiotic evolution, focusing on informatic processes in RNA-like replicator systems. In particular, we consider the following processes: the maintenance of information by replicators with and without interactions, the acquisition of information by replicators having a complex genotype-phenotype map, the generation of information by replicators having a complex genotype-phenotype-interaction map, and the storage of information by replicators serving as dedicated templates. Focusing on these informatic aspects, we review studies on quasi-species, error threshold, RNA-folding genotype-phenotype map, hypercycle, multilevel selection (including spatial self-organization, classical group selection, and compartmentalization), and the origin of DNA-like replicators. In conclusion, we pose a future question for theoretical studies on the origin of life.
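The error threshold reviewed above has a classic back-of-envelope form: with per-site copying error rate u and selective superiority sigma of the master sequence, information can be maintained only for genome lengths up to roughly ln(sigma)/u. A sketch with illustrative numbers:

```python
import math

def error_threshold_length(superiority, per_site_error):
    """Eigen's error-threshold estimate of the longest genome that
    selection can maintain: L_max ~ ln(sigma) / u."""
    return math.log(superiority) / per_site_error

# master sequence replicating 10x faster than mutants, 1% error per site
print(round(error_threshold_length(10.0, 0.01)))  # 230
```

This is the quantitative heart of the origin-of-life paradox the review addresses: long genomes need accurate replication, but accurate replication machinery must itself be encoded by a long genome.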

  13. A new bioinformatics approach to natural protein collections: permutation structure contrasts of viral and cellular systems.

    PubMed

    Graham, Daniel J

    2013-04-01

    Biological cells and viruses operate by different replication and symmetry paradigms. Cells are able to replicate independently and express little spatial symmetry; viruses require cells for replication while manifesting high symmetry. The author inquired whether different paradigms were reflected in the permutations of amino acid sequences. The hypothesis was that the permutation structure level and symmetry within viral protein collections exceed that of living cells. The rationale was that one symmetry aspect generally accompanies and promotes others in a system. The inquiry was readily answered given abundant sequence archives for proteins. The analysis of collections from diverse viral and cellular sources lends strong support. Additional insights into protein primary structure, the design of collections, and the role of information are provided as well.
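A crude proxy for how repetitive, and in that sense "symmetric", a sequence's composition is can be computed as the Shannon entropy of its residue frequencies; this illustrates the kind of statistic such analyses rely on, not the author's actual permutation measure:

```python
import math
from collections import Counter

def composition_entropy(seq):
    """Shannon entropy (bits) of a sequence's residue composition;
    lower values indicate a more repetitive composition."""
    counts = Counter(seq)
    n = len(seq)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(composition_entropy("AAAAAAAA"))            # 0.0: perfectly repetitive
print(round(composition_entropy("ACDEFGHI"), 1))  # 3.0: all residues distinct
```

Comparing the distribution of such statistics across viral versus cellular protein collections is the general shape of the contrast the paper draws.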

  14. Evolutionary Dynamics of RNA-like Replicator Systems: A Bioinformatic Approach to the Origin of Life

    PubMed Central

    Takeuchi, Nobuto; Hogeweg, Paulien

    2012-01-01

    We review computational studies on prebiotic evolution, focusing on informatic processes in RNA-like replicator systems. In particular, we consider the following processes: the maintenance of information by replicators with and without interactions, the acquisition of information by replicators having a complex genotype-phenotype map, the generation of information by replicators having a complex genotype-phenotype-interaction map, and the storage of information by replicators serving as dedicated templates. Focusing on these informatic aspects, we review studies on quasi-species, error threshold, RNA-folding genotype-phenotype map, hypercycle, multilevel selection (including spatial self-organization, classical group selection, and compartmentalization), and the origin of DNA-like replicators. In conclusion, we pose a future question for theoretical studies on the origin of life. PMID:22727399

  15. INTEGRATION OF SYSTEMS GLYCOBIOLOGY WITH BIOINFORMATICS TOOLBOXES, GLYCOINFORMATICS RESOURCES AND GLYCOPROTEOMICS DATA

    PubMed Central

    Liu, Gang; Neelamegham, Sriram

    2015-01-01

    The glycome constitutes the entire complement of free carbohydrates and glycoconjugates expressed on whole cells or tissues. ‘Systems Glycobiology’ is an emerging discipline that aims to quantitatively describe and analyse the glycome. Here, instead of developing a detailed understanding of single biochemical processes, a combination of computational and experimental tools are used to seek an integrated or ‘systems-level’ view. This can explain how multiple biochemical reactions and transport processes interact with each other to control glycome biosynthesis and function. Computational methods in this field commonly build in silico reaction network models to describe experimental data derived from structural studies that measure cell-surface glycan distribution. While considerable progress has been made, several challenges remain due to the complex and heterogeneous nature of this post-translational modification. First, for the in silico models to be standardized and shared among laboratories, it is necessary to integrate glycan structure information and glycosylation-related enzyme definitions into the mathematical models. Second, as glycoinformatics resources grow, it would be attractive to utilize ‘Big Data’ stored in these repositories for model construction and validation. Third, while the technology for profiling the glycome at the whole-cell level has been standardized, there is a need to integrate mass spectrometry derived site-specific glycosylation data into the models. The current review discusses progress that is being made to resolve the above bottlenecks. The focus is on how computational models can bridge the gap between ‘data’ generated in wet-laboratory studies with ‘knowledge’ that can enhance our understanding of the glycome. PMID:25871730

  16. Computational Systems Bioinformatics and Bioimaging for Pathway Analysis and Drug Screening

    PubMed Central

    Zhou, Xiaobo; Wong, Stephen T. C.

    2009-01-01

    The premise of today’s drug development is that the mechanism of a disease is highly dependent upon underlying signaling and cellular pathways. Such pathways are often composed of complexes of physically interacting genes, proteins, or biochemical activities coordinated by metabolic intermediates, ions, and other small solutes and are investigated with molecular biology approaches in genomics, proteomics, and metabonomics. Nevertheless, the recent declines in the pharmaceutical industry’s revenues indicate such approaches alone may not be adequate in creating successful new drugs. Our observation is that combining methods of genomics, proteomics, and metabonomics with techniques of bioimaging will systematically provide powerful means to decode or better understand molecular interactions and pathways that lead to disease and potentially generate new insights and indications for drug targets. The former methods provide the profiles of genes, proteins, and metabolites, whereas the latter techniques generate objective, quantitative phenotypes correlating to the molecular profiles and interactions. In this paper, we describe pathway reconstruction and target validation based on the proposed systems biologic approach and show selected application examples for pathway analysis and drug screening. PMID:20011613

  17. Volarea - a bioinformatics tool to calculate the surface area and the volume of molecular systems.

    PubMed

    Ribeiro, João V; Tamames, Juan A C; Cerqueira, Nuno M F S A; Fernandes, Pedro A; Ramos, Maria J

    2013-12-01

We have developed a computer program named 'VolArea' that allows a rapid and fully automated analysis of molecular structures. The software calculates the surface area and the volume of molecular structures, as well as the volume of molecular cavities. The surface area facility can be used to calculate the solvent-exposed surface area of a molecule or the contact area between two molecules. The volume algorithm can be used to predict not only the space occupied by any molecular structure, but also the volume of cavities such as tunnels or clefts. The software finds wide application in the characterization of systems such as protein/ligand complexes, enzyme active sites, protein/protein interfaces, enzyme channels, membrane pores, and solvent tunnels, among others. Some examples are given to illustrate its potential. VolArea is available as a plug-in for the widely distributed software Visual Molecular Dynamics (VMD) and is freely available at http://www.fc.up.pt/PortoBioComp/Software/Volarea/Home.html. © 2013 John Wiley & Sons A/S.
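
    VolArea's own algorithms are not detailed in the abstract; as a rough illustration of the kind of quantity it computes, the volume of a union of atomic spheres can be estimated by Monte Carlo sampling over a bounding box. A sketch (the coordinates and radii are invented, and this is not VolArea's actual method):

```python
import random

def union_volume(atoms, n_samples=200_000, seed=1):
    """Monte Carlo estimate of the volume of a union of spheres.
    atoms: list of (x, y, z, radius) tuples."""
    rng = random.Random(seed)
    # Axis-aligned bounding box enclosing every sphere.
    lo = [min(a[i] - a[3] for a in atoms) for i in range(3)]
    hi = [max(a[i] + a[3] for a in atoms) for i in range(3)]
    box = (hi[0] - lo[0]) * (hi[1] - lo[1]) * (hi[2] - lo[2])
    hits = 0
    for _ in range(n_samples):
        p = [rng.uniform(lo[i], hi[i]) for i in range(3)]
        # Count the sample if it falls inside at least one sphere.
        if any((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2 + (p[2] - a[2]) ** 2
               <= a[3] ** 2 for a in atoms):
            hits += 1
    return box * hits / n_samples

# Two overlapping unit-radius "atoms" whose centres are 1.0 apart.
v = union_volume([(0.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0, 1.0)])
```

    For this configuration the analytic union volume is 2·(4π/3) minus the lens-shaped overlap, about 7.07, which the estimate approaches as the sample count grows.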

  18. Computational Systems Bioinformatics and Bioimaging for Pathway Analysis and Drug Screening.

    PubMed

    Zhou, Xiaobo; Wong, Stephen T C

    2008-08-01

    The premise of today's drug development is that the mechanism of a disease is highly dependent upon underlying signaling and cellular pathways. Such pathways are often composed of complexes of physically interacting genes, proteins, or biochemical activities coordinated by metabolic intermediates, ions, and other small solutes and are investigated with molecular biology approaches in genomics, proteomics, and metabonomics. Nevertheless, the recent declines in the pharmaceutical industry's revenues indicate such approaches alone may not be adequate in creating successful new drugs. Our observation is that combining methods of genomics, proteomics, and metabonomics with techniques of bioimaging will systematically provide powerful means to decode or better understand molecular interactions and pathways that lead to disease and potentially generate new insights and indications for drug targets. The former methods provide the profiles of genes, proteins, and metabolites, whereas the latter techniques generate objective, quantitative phenotypes correlating to the molecular profiles and interactions. In this paper, we describe pathway reconstruction and target validation based on the proposed systems biologic approach and show selected application examples for pathway analysis and drug screening.

  19. Bioinformatics meets parasitology.

    PubMed

    Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B

    2012-05-01

    The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention.

  20. Physical Realization of a Supervised Learning System Built with Organic Memristive Synapses

    PubMed Central

    Lin, Yu-Pu; Bennett, Christopher H.; Cabaret, Théo; Vodenicarevic, Damir; Chabi, Djaafar; Querlioz, Damien; Jousselme, Bruno; Derycke, Vincent; Klein, Jacques-Olivier

    2016-01-01

Multiple modern applications of electronics call for inexpensive chips that can perform complex operations on natural data with limited energy. A vision for accomplishing this is implementing hardware neural networks, which fuse computation and memory, with low-cost organic electronics. A challenge, however, is the implementation of synapses (analog memories) composed of such materials. In this work, we introduce robust, rapidly programmable, nonvolatile organic memristive nanodevices based on electrografted redox complexes that implement synapses thanks to a wide range of accessible intermediate conductivity states. We demonstrate experimentally an elementary neural network, capable of learning functions, which combines four pairs of organic memristors as synapses and conventional electronics as neurons. Our architecture is highly resilient to issues caused by imperfect devices. It tolerates inter-device variability, and an adaptable learning rule offers immunity against asymmetries in device switching. Highly compliant with conventional fabrication processes, the system can be extended to larger computing systems capable of complex cognitive tasks, as demonstrated in complementary simulations. PMID:27601088
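
    The learning scheme itself can be sketched in software, independently of the organic hardware: a single-neuron perceptron trained with a delta rule whose weights are clipped to a bounded interval, loosely mimicking the finite conductance window of a memristive synapse. All parameters below are illustrative and not taken from the paper:

```python
import random

def train_bounded_perceptron(samples, epochs=50, lr=0.2,
                             w_range=(-1.0, 1.0), seed=0):
    """Delta-rule training with every weight clipped to w_range,
    a software stand-in for a memristor's bounded conductance window."""
    rng = random.Random(seed)
    n = len(samples[0][0])
    w = [rng.uniform(-0.1, 0.1) for _ in range(n + 1)]   # last entry: bias
    for _ in range(epochs):
        for x, target in samples:
            s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            out = 1.0 if s >= 0.0 else -1.0
            err = target - out
            # Clip each updated weight into the allowed "conductance" range.
            for i in range(n):
                w[i] = min(max(w[i] + lr * err * x[i], w_range[0]), w_range[1])
            w[-1] = min(max(w[-1] + lr * err, w_range[0]), w_range[1])
    return w

def predict(w, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
    return 1.0 if s >= 0.0 else -1.0

# Bipolar AND: linearly separable, so a single neuron suffices.
data = [((-1, -1), -1.0), ((-1, 1), -1.0), ((1, -1), -1.0), ((1, 1), 1.0)]
w = train_bounded_perceptron(data)
```

    On this separable task the clipped delta rule converges within a few epochs; the clipping only matters when the required weights approach the edge of the conductance window.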

  1. A SPECT system simulator built on the SolidWorks (TM) 3D-Design package.

    PubMed

    Li, Xin; Furenlid, Lars R

    2014-08-17

    We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorks (TM) -created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorks (TM) and save them as STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system.

  2. A SPECT system simulator built on the SolidWorksTM 3D design package

    NASA Astrophysics Data System (ADS)

    Li, Xin; Furenlid, Lars R.

    2014-09-01

We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorksTM-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorksTM and save them as STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system.

  3. A SPECT system simulator built on the SolidWorksTM 3D-Design package

    PubMed Central

    Li, Xin; Furenlid, Lars R.

    2015-01-01

    We have developed a GPU-accelerated SPECT system simulator that integrates into instrument-design workflow [1]. This simulator includes a gamma-ray tracing module that can rapidly propagate gamma-ray photons through arbitrary apertures modeled by SolidWorksTM-created stereolithography (.STL) representations with a full complement of physics cross sections [2, 3]. This software also contains a scintillation detector simulation module that can model a scintillation detector with arbitrary scintillation crystal shape and light-sensor arrangement. The gamma-ray tracing module enables us to efficiently model aperture and detector crystals in SolidWorksTM and save them as STL file format, then load the STL-format model into this module to generate list-mode results of interacted gamma-ray photon information (interaction positions and energies) inside the detector crystals. The Monte-Carlo scintillation detector simulation module enables us to simulate how scintillation photons get reflected, refracted and absorbed inside a scintillation detector, which contributes to more accurate simulation of a SPECT system. PMID:26190885
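
    Since an STL file is simply a list of triangles, the core of propagating a gamma ray through such a model is a ray-triangle intersection test. A minimal sketch of the standard Möller-Trumbore algorithm (not the simulator's actual GPU implementation):

```python
def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None when there is no hit."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) / det       # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) / det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) / det
    return t if t > eps else None

# Unit triangle in the plane z = 1; rays fired straight up from below.
tri = ((0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (0.0, 1.0, 1.0))
t_hit = ray_triangle((0.2, 0.2, 0.0), (0.0, 0.0, 1.0), *tri)   # t = 1.0
t_miss = ray_triangle((2.0, 2.0, 0.0), (0.0, 0.0, 1.0), *tri)  # None
```

    A full tracer applies this test to every triangle of the mesh (usually via an acceleration structure) and keeps the smallest positive t along each photon path.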

  4. Dynamical energy analysis for built-up acoustic systems at high frequencies.

    PubMed

    Chappell, D J; Giani, S; Tanner, G

    2011-09-01

    Standard methods for describing the intensity distribution of mechanical and acoustic wave fields in the high frequency asymptotic limit are often based on flow transport equations. Common techniques are statistical energy analysis, employed mostly in the context of vibro-acoustics, and ray tracing, a popular tool in architectural acoustics. Dynamical energy analysis makes it possible to interpolate between standard statistical energy analysis and full ray tracing, containing both of these methods as limiting cases. In this work a version of dynamical energy analysis based on a Chebyshev basis expansion of the Perron-Frobenius operator governing the ray dynamics is introduced. It is shown that the technique can efficiently deal with multi-component systems overcoming typical geometrical limitations present in statistical energy analysis. Results are compared with state-of-the-art hp-adaptive discontinuous Galerkin finite element simulations.
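
    The operator construction in the abstract can be miniaturized: collocate the Perron-Frobenius operator of a simple expanding map on Chebyshev points and inspect its spectrum. The sketch below uses the doubling map on [0, 1] as a stand-in for the ray dynamics; it is a toy illustration, not the paper's acoustic formulation:

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Values at x of the Lagrange cardinal polynomials for `nodes`."""
    vals = []
    for j, xj in enumerate(nodes):
        num = den = 1.0
        for k, xk in enumerate(nodes):
            if k != j:
                num *= (x - xk)
                den *= (xj - xk)
        vals.append(num / den)
    return np.array(vals)

n = 16
# Chebyshev-Lobatto points mapped from [-1, 1] to the unit interval.
nodes = 0.5 * (1.0 + np.cos(np.pi * np.arange(n) / (n - 1)))

# Collocation matrix of the Perron-Frobenius operator of the doubling
# map T(x) = 2x mod 1, whose action is (Lf)(x) = (f(x/2) + f((x+1)/2)) / 2.
L = np.zeros((n, n))
for i, xi in enumerate(nodes):
    L[i, :] = 0.5 * (lagrange_basis(nodes, xi / 2.0)
                     + lagrange_basis(nodes, (xi + 1.0) / 2.0))

lam = np.linalg.eigvals(L)
leading = max(abs(lam))   # the uniform density is invariant, eigenvalue 1
```

    The leading eigenvalue is 1 and the remaining spectrum decays geometrically (2^-k for this map), which is what makes a low-order basis expansion of the transfer operator viable.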

  5. Low-cost Fresnel microlens array fabricated by a home-built maskless lithography system

    NASA Astrophysics Data System (ADS)

    Cirino, G. A.; Lopera, S. A.; Montagnoli, A. N.; Neto, L. G.; Mansano, R. D.

    2012-10-01

This work presents the fabrication of a high fill-factor Fresnel microlens array (MLA) employing a low-cost home-built maskless exposure lithography system. A phase relief structure was generated on a photoresist-coated silicon wafer, replicated in polydimethylsiloxane (PDMS), and electrostatically bonded to a glass substrate. Optical characterization was based on evaluating the maximum intensity of each spot generated at the MLA focal plane as well as its full width at half maximum (FWHM). The resulting mean FWHM and maximum spot intensity were 50 μm ± 8% and 0.71 a.u. ± 7%, respectively. Such an MLA can be applied in Shack-Hartmann wavefront sensors and optical interconnects, and to enhance the efficiency of detector arrays.
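
    The FWHM extraction used in this kind of optical characterization can be sketched as follows: sample a spot profile, locate the two half-maximum crossings by linear interpolation, and compare with the Gaussian identity FWHM = 2·sqrt(2 ln 2)·σ ≈ 2.355σ. The profile below is synthetic, not measured data from the paper:

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a single-peaked sampled profile,
    with linear interpolation at the two half-maximum crossings."""
    ymax = max(ys)
    half = ymax / 2.0
    i_peak = ys.index(ymax)

    def crossing(i, j, step):
        # Walk downhill from the peak until the profile drops below half,
        # then interpolate between the last sample above and first below.
        while ys[j] > half:
            i, j = j, j + step
        frac = (ys[i] - half) / (ys[i] - ys[j])
        return xs[i] + frac * (xs[j] - xs[i])

    left = crossing(i_peak, i_peak - 1, -1)
    right = crossing(i_peak, i_peak + 1, +1)
    return right - left

# Synthetic Gaussian spot profile with sigma = 10 (arbitrary units).
sigma = 10.0
xs = [0.1 * k for k in range(-1000, 1001)]
ys = [math.exp(-x * x / (2.0 * sigma ** 2)) for x in xs]
width = fwhm(xs, ys)    # close to 2*sqrt(2*ln 2)*sigma = 23.55
```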

  6. The Feasibility of "OSCAR" as an Information System for Sustainable Rehabilitation of Built Heritage

    NASA Astrophysics Data System (ADS)

    Farmer, C.; Rouillard, C.

    2017-08-01

This paper examines the feasibility of the Online Sustainable Conservation Assistance Resource (OSCAR) as an information system and framework to help find appropriate ways to improve the sustainable performance of heritage buildings in North America. The paper reviews the need for holistic, comprehensive, authoritative information in the field of sustainable conservation, how OSCAR addresses this gap, the OSCAR workflow, and how it was used in two case studies. It was found that OSCAR has the potential to become a practical educational tool and design aid addressing the sustainable performance of heritage buildings. The paper contributes to the discourse on sustainable conservation by examining resources and tools which address the need for holistic retrofit approaches. The findings will be useful to educators and professionals in the fields of sustainable design and heritage conservation.

  7. Development of a purpose built landfill system for the control of methane emissions from municipal solid waste.

    PubMed

    Yedla, Sudhakar; Parikh, Jyoti K

    2002-01-01

In the present paper, a new purpose-built landfill (PBLF) system is proposed for the control of methane emissions from municipal solid waste (MSW), taking into account the conditions favourable to enhanced methane generation in tropical climates. Based on theoretical considerations, multivariate functional models (MFMs) are developed to estimate the methane mitigation and energy generation potential of the proposed system, and a comparison is made between the existing waste management system and the proposed PBLF system. It was found that the proposed methodology not only controls methane emissions to the atmosphere but could also yield considerable energy in the form of landfill gas (LFG). The economic feasibility of the proposed system was tested by comparing the unit cost of waste disposal in the conventional and PBLF systems. In a case study of MSW management in Mumbai, India, the unit cost of waste disposal with the PBLF system was found to be one-seventh that of the conventional waste management system. The proposed system showed promising energy generation potential, with production of methane worth Rs. 244 million/y ($5.2 million/y). The new waste management methodology could thus offer an adaptable solution to the conflict between development, environmental degradation, and natural resource depletion.

  8. Robotic Laparoendoscopic Single-site Retroperitoneal Renal Surgery: Initial Investigation of a Purpose-built Single-port Surgical System.

    PubMed

    Maurice, Matthew J; Ramirez, Daniel; Kaouk, Jihad H

    2017-04-01

Robotic single-site retroperitoneal renal surgery has the potential to minimize the morbidity of standard transperitoneal and multiport approaches. Traditionally, technological limitations of non-purpose-built robotic platforms have hindered the application of this approach. To assess the feasibility of retroperitoneal renal surgery using a new purpose-built robotic single-port surgical system. This was a preclinical study using three male cadavers to assess the feasibility of the da Vinci SP1098 surgical system for robotic laparoendoscopic single-site (R-LESS) retroperitoneal renal surgery. We used the SP1098 to perform retroperitoneal R-LESS radical nephrectomy (n=1) and bilateral partial nephrectomy (n=4) on the anterior and posterior surfaces of the kidney. Improvements unique to this system include enhanced optics and intelligent instrument arm control. Access was obtained 2 cm anterior and inferior to the tip of the 12th rib using a novel 2.5-cm robotic single-port system that accommodates three double-jointed articulating robotic instruments, an articulating camera, and an assistant port. The primary outcome was the technical feasibility of the procedures, as measured by the need for conversion to standard techniques, intraoperative complications, and operative times. All cases were completed without the need for conversion. There were no intraoperative complications. The operative time was 100 min for radical nephrectomy, and the mean operative time was 91.8 ± 18.5 min for partial nephrectomy. Limitations include the preclinical model, the small sample size, and the lack of a control group. Single-site retroperitoneal renal surgery is feasible using the latest-generation SP1098 robotic platform. While the potential of the SP1098 appears promising, further study is needed for clinical evaluation of this investigational technology. In an experimental model, we used a new robotic system to successfully perform major surgery on the kidney through a single small incision.

  9. Crimean-Congo Hemorrhagic Fever Virus Gn Bioinformatic Analysis and Construction of a Recombinant Bacmid in Order to Express Gn by Baculovirus Expression System

    PubMed Central

    Rahpeyma, Mehdi; Fotouhi, Fatemeh; Makvandi, Manouchehr; Ghadiri, Ata; Samarbaf-Zadeh, Alireza

    2015-01-01

Background Crimean-Congo hemorrhagic fever virus (CCHFV) is a member of Nairovirus, a genus in the Bunyaviridae family, and causes a life-threatening disease in humans. Currently, there is no vaccine against CCHFV, and a detailed structural analysis of CCHFV proteins remains lacking. The CCHFV M RNA segment encodes two viral surface glycoproteins known as Gn and Gc. Viral glycoproteins can be considered key targets for vaccine development. Objectives The current study aimed to investigate the structural bioinformatics of the CCHFV Gn protein and to design a construct for generating a recombinant bacmid for expression by the baculovirus system, with the goal of expressing the Gn protein in insect cells for use as an antigen in animal-model vaccine studies. Materials and Methods Bioinformatic analysis of the CCHFV Gn protein was performed; a construct was designed and cloned into the pFastBacHTb vector, and a recombinant Gn bacmid was generated by the Bac-to-Bac system. Results The primary, secondary, and 3D structures of CCHFV Gn were obtained, and PCR with M13 forward and reverse primers confirmed the generation of recombinant bacmid DNA harboring the Gn coding region under the polyhedrin promoter. Conclusions Characterization of the detailed structure of CCHFV Gn by bioinformatics software provides the basis for the design of new experiments, and construction of a recombinant bacmid harboring CCHFV Gn is valuable for designing a recombinant vaccine against deadly pathogens such as CCHFV. PMID:26862379

  10. Component-Based Approach for Educating Students in Bioinformatics

    ERIC Educational Resources Information Center

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  12. Clinical Bioinformatics: challenges and opportunities

    PubMed Central

    2012-01-01

Background Network Tools and Applications in Biology (NETTAB) Workshops are a series of meetings focused on the most promising and innovative ICT tools and their usefulness in bioinformatics. The NETTAB 2011 workshop, held in Pavia, Italy, in October 2011, was aimed at presenting some of the most relevant methods, tools and infrastructures that are nowadays available for Clinical Bioinformatics (CBI), the research field that deals with clinical applications of bioinformatics. Methods In this editorial, the viewpoints and opinions of three world CBI leaders, who were invited to participate in a panel discussion at the NETTAB workshop on the next challenges and future opportunities of this field, are reported. These include the development of data warehouses and ICT infrastructures for data sharing, the definition of standards for sharing phenotypic data and the implementation of novel tools for efficient search computing. Results Some of the most important design features of a CBI-ICT infrastructure are presented, including data warehousing, modularity and flexibility, open-source development, semantic interoperability, and integrated search and retrieval of -omics information. Conclusions The goals of Clinical Bioinformatics are ambitious. Many factors, including the availability of high-throughput "-omics" technologies and equipment, the widespread availability of clinical data warehouses and the noteworthy increase in data storage and computational power of the most recent ICT systems, justify research and efforts in this domain, which promises to be a crucial leveraging factor for biomedical research. PMID:23095472

  13. Bioinformatics education in India.

    PubMed

    Kulkarni-Kale, Urmila; Sawant, Sangeeta; Chavan, Vishwas

    2010-11-01

An account of bioinformatics education in India is presented, along with future prospects. The establishment of the BTIS network by the Department of Biotechnology (DBT), Government of India, in the 1980s was a systematic effort to develop bioinformatics infrastructure in India and provide services to the scientific community. Advances in the field of bioinformatics underpinned the need for well-trained professionals with skills in information technology and biotechnology. As a result, programmes for capacity building in terms of human resource development were initiated. Educational programmes gradually evolved from the organisation of short-term workshops to the institution of formal diploma/degree programmes. A case study of the Master's degree course offered at the Bioinformatics Centre, University of Pune is discussed. Currently, many universities and institutes offer bioinformatics courses at different levels, with variations in course content and depth. The BioInformatics National Certification (BINC) examination, initiated in 2005 by DBT, provides a common yardstick to assess the knowledge and skill sets of students graduating from various institutions. The potential for broadening the scope of bioinformatics to transform it into a data-intensive discovery discipline is discussed, which necessitates amendments to the existing curricula to accommodate upcoming developments.

  14. A hybrid wave propagation and statistical energy analysis on the mid-frequency vibration of built-up plate systems

    NASA Astrophysics Data System (ADS)

    Ma, Yongbin; Zhang, Yahui; Kennedy, David

    2015-09-01

    Based on the concept of the hybrid finite element (FE) analysis and statistical energy analysis (SEA), a new hybrid method is developed for the mid-frequency vibration of a system comprising rectangular thin plates. The wave propagation method based on symplectic analysis is used to describe the vibration of the deterministic plate component. By enforcing the displacement continuity and equilibrium of force at the connection interface, the dynamic coupling between the deterministic plate component and the statistical plate component described by SEA is established. Furthermore, the hybrid solution formulation for the mid-frequency vibration of the system built up by plates is proposed. The symplectic analytical wave describing the deterministic plate component eliminates the boundary condition limitation of the traditional analytical wave propagation method and overcomes the numerical instability of numerical wave propagation methods. Numerical examples compare results from the proposed method with those from the hybrid FE-SEA method and the Monte Carlo method. The comparison illustrates that the proposed method gives good predictions for the mid-frequency behavior of the system considered here with low computational time. In addition, a constant proportionality coefficient between the system coupling power and the energy difference between the plate components can be found, when external forces are applied at different locations on a line perpendicular to the wave propagation direction. Based on this finding, two fast solution techniques are developed for the energy response of the system, and are validated by numerical examples.
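
    The proportionality noted at the end is built into the standard statistical energy analysis power balance: for two coupled subsystems the coupling power is P12 = ω·η12·n1·(E1/n1 − E2/n2), i.e. proportional to the modal-energy difference. A minimal steady-state sketch with invented parameters (this is plain SEA, not the paper's hybrid formulation):

```python
import numpy as np

# Two-subsystem SEA power balance; every numeric value below is invented.
omega = 2.0 * np.pi * 1000.0     # band-centre frequency [rad/s]
eta1, eta2 = 0.01, 0.02          # damping loss factors
eta12 = 0.005                    # coupling loss factor, subsystem 1 -> 2
n1, n2 = 0.1, 0.05               # modal densities
eta21 = eta12 * n1 / n2          # SEA reciprocity: eta12 * n1 = eta21 * n2

# Steady state:  P_in = omega * C @ E  for subsystem energies E = (E1, E2).
C = np.array([[eta1 + eta12, -eta21],
              [-eta12,        eta2 + eta21]])
P_in = np.array([1.0, 0.0])      # unit input power into subsystem 1 only
E = np.linalg.solve(omega * C, P_in)

# Coupling power, written two equivalent ways: it is proportional to the
# difference of modal energies E1/n1 - E2/n2.
P12 = omega * (eta12 * E[0] - eta21 * E[1])
P12_alt = omega * eta12 * n1 * (E[0] / n1 - E[1] / n2)
```

    In steady state P12 also equals the power dissipated in subsystem 2, ω·η2·E2, so the energy flow between components is fixed entirely by the modal-energy difference, consistent with the constant proportionality coefficient reported in the abstract.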

  15. An Arch-Shaped Intraoral Tongue Drive System with Built-in Tongue-Computer Interfacing SoC

    PubMed Central

    Park, Hangue; Ghovanloo, Maysam

    2014-01-01

    We present a new arch-shaped intraoral Tongue Drive System (iTDS) designed to occupy the buccal shelf in the user's mouth. The new arch-shaped iTDS, which will be referred to as the iTDS-2, incorporates a system-on-a-chip (SoC) that amplifies and digitizes the raw magnetic sensor data and sends it wirelessly to an external TDS universal interface (TDS-UI) via an inductive coil or a planar inverted-F antenna. A built-in transmitter (Tx) employs a dual-band radio that operates at either 27 MHz or 432 MHz band, according to the wireless link quality. A built-in super-regenerative receiver (SR-Rx) monitors the wireless link quality and switches the band if the link quality is below a predetermined threshold. An accompanying ultra-low power FPGA generates data packets for the Tx and handles digital control functions. The custom-designed TDS-UI receives raw magnetic sensor data from the iTDS-2, recognizes the intended user commands by the sensor signal processing (SSP) algorithm running in a smartphone, and delivers the classified commands to the target devices, such as a personal computer or a powered wheelchair. We evaluated the iTDS-2 prototype using center-out and maze navigation tasks on two human subjects, which proved its functionality. The subjects' performance with the iTDS-2 was improved by 22% over its predecessor, reported in our earlier publication. PMID:25405513

  16. Effect of Second-Hand Tobacco Smoke on the Nitration of Brain Proteins: A Systems Biology and Bioinformatics Approach.

    PubMed

    Kobeissy, Firas H; Guingab-Cagmat, Joy; Bruijnzeel, Adriaan W; Gold, Mark S; Wang, Kevin

    2017-01-01

Second-hand smoke (SHS) exposure leads to the death of approximately 48,000 nonsmokers per year in the United States alone. SHS exposure has been associated with cardiovascular, respiratory, and neurodegenerative diseases. While cardiac function abnormalities and lung cancer due to SHS have been well characterized, brain injury due to SHS has not undergone a full systematic evaluation. Oxidative stress and nitration have been associated with smoking and SHS exposure. Animal studies suggest that exposure to tobacco smoke increases oxidative stress, which is characterized by an increase in reactive oxygen and nitrogen species (ROS/RNS). Among the oxidative mechanisms affecting protein functionality is posttranslational modification (PTM)-mediated tyrosine nitration. Protein tyrosine nitration, a covalent posttranslational modification, is commonly used as a marker of cellular oxidative stress associated with the pathogenesis of several neurodegenerative diseases. In our previously published work, the utility of a targeted proteomic approach was evaluated to identify two abundant brain proteins, GAPDH and UCH-L1, in an in vivo SHS rat model. In the current study, mass spectrometry-based proteomic and complementary biochemical methods were used to characterize the SHS-induced brain nitroproteome, followed by a bioinformatics/systems biology analysis to characterize the protein interaction map. Sprague Dawley rats were exposed to SHS for 5 weeks, and cortical tissues were then collected. Nitroprotein enrichment was performed via 3-nitrotyrosine (3-NT) immunoprecipitation of brain lysate proteins. Protein nitration was validated via Western blotting to confirm the presence of nitroproteins, complemented by gel-free neuroproteomic analysis with data-dependent LC-MS/MS. We identified 29 differentially expressed proteins in the 3-NT-enriched samples; seven of these proteins were unique to SHS exposure. Network analysis revealed an association of

  17. Bioinformatics and Cancer

    Cancer.gov

    Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.

  18. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    PubMed

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2016-03-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.

  19. Deep learning in bioinformatics.

    PubMed

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2016-07-29

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.

  20. The resilience of structure built around the predicate: Homesign gesture systems in Turkish and American deaf children

    PubMed Central

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Aslı; Sancar, Burcu

    2013-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called homesigns, that have many of the properties of natural language—the so-called resilient properties of language. We explored the resilience of structure built around the predicate—in particular, how manner and path are mapped onto the verb—in homesign systems developed by deaf children in Turkey and the United States. We also asked whether the Turkish homesigners exhibit sentence-level structures previously identified as resilient in American and Chinese homesigners. We found that the Turkish and American deaf children used not only the same production probability and ordering patterns to indicate who does what to whom, but also the same segmentation and conflation patterns to package manner and path. The gestures that the hearing parents produced did not, for the most part, display the patterns found in the children’s gestures. Although co-speech gesture may provide the building blocks for homesign, it does not provide the blueprint for these resilient properties of language. PMID:25663828

  1. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research.

    PubMed

    Yang, Jack Y; Niemierko, Andrzej; Bajcsy, Ruzena; Xu, Dong; Athey, Brian D; Zhang, Aidong; Ersoy, Okan K; Li, Guo-Zheng; Borodovsky, Mark; Zhang, Joe C; Arabnia, Hamid R; Deng, Youping; Dunker, A Keith; Liu, Yunlong; Ghafoor, Arif

    2010-12-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. 
Joydeep Ghosh (UT

  3. A Bioinformatics Facility for NASA

    NASA Technical Reports Server (NTRS)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  5. Microbial bioinformatics 2020.

    PubMed

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  6. String Mining in Bioinformatics

    NASA Astrophysics Data System (ADS)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, at the level of their linear structure. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or more sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view was not explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data mining" is almost missing from the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam [50], detecting plagiarism [22], and spotting duplications in software systems [14].
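
    Detecting a shared segment between two sequences, as described above, reduces to a classic string problem. A minimal dynamic-programming sketch for the longest common substring of two DNA sequences (illustrative only; not code from the chapter):

```python
def longest_common_substring(a, b):
    """Find the longest segment shared by sequences a and b via dynamic programming."""
    best_len, best_end = 0, 0
    # prev[j] = length of the common suffix of a[:i-1] and b[:j]
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1] + 1
                if curr[j] > best_len:
                    best_len, best_end = curr[j], i
        prev = curr
    return a[best_end - best_len:best_end]

print(longest_common_substring("ACGTACGTGACG", "TTACGTGA"))  # TACGTGA
```

    This quadratic-time approach illustrates the idea; the suffix-tree and suffix-array indexes discussed in the string-mining literature solve the same problem in linear time.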

  8. Efficient azo dye decolorization in a continuous stirred tank reactor (CSTR) with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Gao, Lei; Cheng, Hao-Yi; Wang, Ai-Jie

    2016-10-01

    A continuous stirred tank reactor with built-in bioelectrochemical system (CSTR-BES) was developed for treating wastewater containing the azo dye Alizarin Yellow R (AYR). The decolorization efficiency (DE) of the CSTR-BES was 97.04±0.06% over 7 h with a sludge concentration of 3000 mg/L and an initial AYR concentration of 100 mg/L, which was superior to that of the sole CSTR mode (open circuit: 54.87±4.34%) and the sole BES mode (without sludge addition: 91.37±0.44%). The effects of sludge concentration and sodium acetate (NaAc) concentration on azo dye decolorization were investigated. The highest DE of the CSTR-BES over 4 h was 87.66±2.93%, with a sludge concentration of 12,000 mg/L, a NaAc concentration of 2000 mg/L and an initial AYR concentration of 100 mg/L. The results of this study indicate that the CSTR-BES could be a practical strategy for upgrading conventional anaerobic facilities for refractory wastewater treatment.
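
    The decolorization efficiencies quoted above follow the conventional definition DE = (C0 − Ct)/C0 × 100%, where C0 and Ct are the initial and final dye concentrations; the formula is assumed here, as the abstract does not state it:

```python
def decolorization_efficiency(c0, ct):
    """Percent of the initial dye concentration removed: DE = (C0 - Ct) / C0 * 100."""
    return (c0 - ct) / c0 * 100.0

# e.g. 100 mg/L AYR reduced to 2.96 mg/L corresponds to a DE of ~97.04%
print(round(decolorization_efficiency(100.0, 2.96), 2))  # 97.04
```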

  9. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    PubMed Central

    Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students’ attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. PMID:25452484

  10. Chemistry in Bioinformatics

    PubMed Central

    Murray-Rust, Peter; Mitchell, John BO; Rzepa, Henry S

    2005-01-01

    Chemical information is now seen as critical for most areas of the life sciences. But unlike bioinformatics, where data is openly available and freely re-usable, most chemical information is closed and cannot be redistributed without permission. This has led to a failure to adopt modern informatics and software techniques and therefore a paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the authoring process. We argue that the technology is already available; we require a collective agreement to enhance publication protocols. PMID:15941476

  11. An Online Bioinformatics Curriculum

    PubMed Central

    Searls, David B.

    2012-01-01

    Online learning initiatives over the past decade have become increasingly comprehensive in their selection of courses and sophisticated in their presentation, culminating in the recent announcement of a number of consortium and startup activities that promise to make a university education on the internet, free of charge, a real possibility. At this pivotal moment it is appropriate to explore the potential for obtaining comprehensive bioinformatics training with currently existing free video resources. This article presents such a bioinformatics curriculum in the form of a virtual course catalog, together with editorial commentary, and an assessment of strengths, weaknesses, and likely future directions for open online learning in this field. PMID:23028269

  12. Glossary of bioinformatics terms.

    PubMed

    2007-06-01

    This collection of terms and definitions commonly encountered in the bioinformatics literature will be updated periodically as Current Protocols in Bioinformatics grows. In addition, an extensive glossary of genetic terms can be found on the Web site of the National Human Genome Research Institute (http://www.genome.gov/glossary.cfm). The entries in that online glossary provide a brief written definition of the term; the user can also listen to an informative explanation of the term using RealAudio or the Windows Media Player.

  13. The influence of the built environment on outcomes from a “walking school bus study”: a cross-sectional analysis using geographical information systems

    PubMed Central

    Oreskovic, Nicolas M.; Blossom, Jeff; Robinson, Alyssa I.; Chen, Minghua L.; Uscanga, Doris K.; Mendoza, Jason A.

    2015-01-01

    Active commuting to school increases children’s daily physical activity. The built environment is associated with children’s physical activity levels in cross-sectional studies. This study examined the role of the built environment in the outcomes of a “walking school bus” study. A geographical information system (GIS) was used to map out and compare the built environments around schools participating in a pilot walking school bus randomised controlled trial, as well as along school routes. Multi-level modelling was used to determine the built environment attributes associated with the outcomes of active commuting to school and accelerometer-determined moderate-to-vigorous physical activity (MVPA). There were no differences in the surrounding built environments of control (n = 4) and intervention (n = 4) schools participating in the walking school bus study. Among school walking routes, park space was inversely associated with active commuting to school (β = −0.008, SE = 0.004, P = 0.03), while mixed land use was positively associated with daily MVPA (β = 60.0, SE = 24.3, P = 0.02). There was effect modification such that high traffic volume and high street connectivity were associated with greater moderate-to-vigorous physical activity. The results of this study suggest that the built environment may play a role in active school commuting outcomes and daily physical activity. PMID:25545924

  14. Bioinformatics and School Biology

    ERIC Educational Resources Information Center

    Dalpech, Roger

    2006-01-01

    The rapidly changing field of bioinformatics is fuelling the need for suitably trained personnel with skills in relevant biological "sub-disciplines" such as proteomics, transcriptomics and metabolomics, etc. But because of the complexity--and sheer weight of data--associated with these new areas of biology, many school teachers feel…

  15. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: collect statistics from biological data; build a computational model; solve a computational modeling problem; test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching for sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data, and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data are usually represented as matrices, and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs, and graph-theoretic approaches are used to solve associated problems such as the construction and analysis of large-scale networks.
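
    The pairwise sequence alignment underlying the homology and multiple-alignment subproblems above can be sketched with a minimal Needleman-Wunsch global-alignment scorer (the scoring scheme of match +1, mismatch −1, gap −1 and the function name are illustrative, not from the chapter):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic Needleman-Wunsch recurrence."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # 0
```

    Recovering the alignment itself additionally requires a traceback through the score matrix; real tools also use substitution matrices and affine gap penalties.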

  16. Computer Simulation of Embryonic Systems: What can a virtual embryo teach us about developmental toxicity? (LA Conference on Computational Biology & Bioinformatics)

    EPA Science Inventory

    This presentation will cover work at EPA under the CSS program for: (1) Virtual Tissue Models built from the known biology of an embryological system and structured to recapitulate key cell signals and responses; (2) running the models with real (in vitro) or synthetic (in silico...

  18. In the Spotlight: Bioinformatics

    PubMed Central

    Wang, May Dongmei

    2016-01-01

    During 2012, next generation sequencing (NGS) has attracted great attention in the biomedical research community, especially for personalized medicine. Also, third generation sequencing has become available. Therefore, state-of-the-art sequencing technology and analysis are reviewed in this Bioinformatics spotlight on 2012. Next-generation sequencing (NGS) is a high-throughput nucleic acid sequencing technology with wide dynamic range and single-base resolution. The full promise of NGS depends on the optimization of NGS platforms, sequence alignment and assembly algorithms, data analytics, novel algorithms for integrating NGS data with existing genomic, proteomic, or metabolomic data, and quantitative assessment of NGS technology in comparison to more established technologies such as microarrays. NGS technology has been predicted to become a cornerstone of personalized medicine. It is argued that NGS is a promising field for motivated young researchers who are looking for opportunities in bioinformatics. PMID:23192635

  19. Distributed computing in bioinformatics.

    PubMed

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.
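
    The divide-the-workload strategy described above can be sketched with Python's standard multiprocessing module (the GC-content task and all names are illustrative, not from the paper):

```python
from multiprocessing import Pool

def gc_content(seq):
    """Fraction of G/C bases -- a small, independent unit of work per sequence."""
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    sequences = ["ACGT", "GGCC", "ATAT", "GCGC"]
    # Divide the workload among 4 worker processes; each computes one result
    with Pool(processes=4) as pool:
        results = pool.map(gc_content, sequences)
    print(results)  # [0.5, 1.0, 0.0, 1.0]
```

    The same pattern scales from the cores of one machine to a cluster when the per-item work is independent, which is why "embarrassingly parallel" tasks such as database searches dominate distributed bioinformatics.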

  20. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    PubMed Central

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  1. Development of a cloud-based Bioinformatics Training Platform

    PubMed Central

    Revote, Jerico; Watson-Haigh, Nathan S.; Quenette, Steve; Bethwaite, Blair; McGrath, Annette

    2017-01-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. PMID:27084333

  2. Phylogenetic trees in bioinformatics

    SciTech Connect

    Burr, Tom L

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals, which involve both tree topology and branch length; and the huge number of possible trees for even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data is computationally demanding. Bioinformatics is too large a field to review here. We focus on the aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, the available methods and software, and identifies areas for additional research and development.
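
    The branching-order estimation described above can be illustrated with one of the simplest distance-based methods, a bare-bones UPGMA-style agglomeration (the distance values and names below are illustrative; production tools use the probabilistic substitution models and tree-space searches the review discusses):

```python
from itertools import combinations

def upgma(labels, d):
    """Toy UPGMA: repeatedly merge the two closest clusters, where cluster
    distance is the average of the original pairwise leaf distances.
    `d` maps frozenset({leaf1, leaf2}) -> distance; returns a nested-tuple tree."""
    clusters = [((lab,), lab) for lab in labels]  # (leaf members, subtree)
    while len(clusters) > 1:
        def avg(c1, c2):
            # average distance between the two clusters' leaf members
            pairs = [(x, y) for x in c1[0] for y in c2[0]]
            return sum(d[frozenset(p)] for p in pairs) / len(pairs)
        c1, c2 = min(combinations(clusters, 2), key=lambda p: avg(*p))
        clusters.remove(c1)
        clusters.remove(c2)
        clusters.append((c1[0] + c2[0], (c1[1], c2[1])))
    return clusters[0][1]

# Illustrative pairwise distances between four OTUs
d = {frozenset(p): v for p, v in {
    ("A", "B"): 2, ("A", "C"): 6, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 10, ("C", "D"): 10,
}.items()}
print(upgma(["A", "B", "C", "D"], d))  # ('D', ('C', ('A', 'B')))
```

    This recovers only a branching order; real phylogenetics software also estimates branch lengths and evaluates candidate topologies under an explicit substitution model.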

  3. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea

    PubMed Central

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-01-01

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system. PMID:26690174

  4. Development of a 3D Underground Cadastral System with Indoor Mapping for As-Built BIM: The Case Study of Gangnam Subway Station in Korea.

    PubMed

    Kim, Sangmin; Kim, Jeonghyun; Jung, Jaehoon; Heo, Joon

    2015-12-09

    The cadastral system provides land ownership information by registering and representing land boundaries on a map. The current cadastral system in Korea, however, focuses mainly on the management of 2D land-surface boundaries. It is not yet possible to provide efficient or reliable land administration, as this 2D system cannot support or manage land information on 3D properties (including architectures and civil infrastructures) for both above-ground and underground facilities. A geometrical model of the 3D parcel, therefore, is required for registration of 3D properties. This paper, considering the role of the cadastral system, proposes a framework for a 3D underground cadastral system that can register various types of 3D underground properties using indoor mapping for as-built Building Information Modeling (BIM). The implementation consists of four phases: (1) geometric modeling of a real underground infrastructure using terrestrial laser scanning data; (2) implementation of as-built BIM based on geometric modeling results; (3) accuracy assessment for created as-built BIM using reference points acquired by total station; and (4) creation of three types of 3D underground cadastral map to represent underground properties. The experimental results, based on indoor mapping for as-built BIM, show that the proposed framework for a 3D underground cadastral system is able to register the rights, responsibilities, and restrictions corresponding to the 3D underground properties. In this way, clearly identifying the underground physical situation enables more reliable and effective decision-making in all aspects of the national land administration system.
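
    Phase (3) above is essentially a point-to-point accuracy check. A minimal sketch of such an assessment, comparing hypothetical as-built BIM corner coordinates against total-station reference points via 3D RMSE (all coordinates are invented for illustration):

```python
import math

# Hypothetical as-built BIM corner coordinates vs. total-station reference
# points, in metres. Names and values are illustrative only.
bim_points = [(0.012, 0.998, 2.501), (5.021, 1.004, 2.497)]
ref_points = [(0.000, 1.000, 2.500), (5.000, 1.000, 2.500)]

def rmse_3d(a, b):
    # Mean squared 3D distance between corresponding points, then root.
    sq = [sum((p - q) ** 2 for p, q in zip(pa, pb)) for pa, pb in zip(a, b)]
    return math.sqrt(sum(sq) / len(sq))

print(f"3D RMSE: {rmse_3d(bim_points, ref_points):.4f} m")
```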

  5. Temporal Patterns in Sheep Fetal Heart Rate Variability Correlate to Systemic Cytokine Inflammatory Response: A Methodological Exploration of Monitoring Potential Using Complex Signals Bioinformatics

    PubMed Central

    Wu, Hau-Tieng; Durosier, Lucien D.; Desrochers, André; Fecteau, Gilles; Seely, Andrew J. E.; Frasch, Martin G.

    2016-01-01

    Fetal inflammation is associated with increased risk for postnatal organ injuries. No means of early detection exist. We hypothesized that systemic fetal inflammation leads to distinct alterations of fetal heart rate variability (fHRV). We tested this hypothesis by deploying a novel series of approaches from complex signals bioinformatics. In chronically instrumented near-term fetal sheep, we induced an inflammatory response with lipopolysaccharide (LPS) injected intravenously (n = 10), observing the response over 54 hours; seven additional fetuses served as controls. Fifty-one fHRV measures were determined continuously every 5 minutes using Continuous Individualized Multi-organ Variability Analysis (CIMVA). CIMVA creates a matrix of fHRV measures across five signal-analytical domains, thus describing complementary properties of fHRV. We implemented, validated, and tested methodology to obtain the subset of CIMVA fHRV measures that best matched the temporal profile of the inflammatory cytokine IL-6. In the LPS group, IL-6 peaked at 3 hours. For the LPS group, but not the control group, a sharp increase in standardized difference in variability with respect to baseline levels was observed between 3 h and 6 h before abating to baseline levels, thus closely tracking the IL-6 inflammatory profile. We derived an fHRV inflammatory index (FII) consisting of 15 fHRV measures that reflects the fetal inflammatory response with a prediction accuracy of 90%. Hierarchical clustering validated the selection of 14 of the 15 fHRV measures comprising the FII. We developed methodology to identify a distinctive subset of fHRV measures that tracks inflammation over time. The broader potential of this bioinformatics approach to detect physiological responses encoded in HRV measures is discussed. PMID:27100089
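
    The "standardized difference in variability with respect to baseline" can be illustrated as a simple z-score of post-exposure values against a baseline window. A toy sketch (values invented, not study data):

```python
from statistics import mean, stdev

# Standardize post-exposure HRV-measure values against a baseline window.
# All numbers here are illustrative stand-ins, not study data.
baseline = [41.0, 43.5, 40.2, 42.8, 41.9, 43.1]
post = [48.9, 52.3, 50.1]

mu, sd = mean(baseline), stdev(baseline)
z_scores = [(x - mu) / sd for x in post]  # standardized difference per sample
print([round(z, 2) for z in z_scores])
```

    Large positive z-scores flag a departure from baseline variability; in the study this tracking was done per measure and then aggregated across the CIMVA measure matrix.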

  6. Highlighting computations in bioscience and bioinformatics: review of the Symposium of Computations in Bioinformatics and Bioscience (SCBB07).

    PubMed

    Lu, Guoqing; Ni, Jun

    2008-05-28

    The Second Symposium on Computations in Bioinformatics and Bioscience (SCBB07) was held in Iowa City, Iowa, USA, on August 13-15, 2007. This annual event attracted dozens of bioinformatics professionals and students from China, Japan, Taiwan, and the United States who are interested in solving emerging computational problems in bioscience. The Scientific Committee of the symposium selected 18 peer-reviewed papers for publication in this supplemental issue of BMC Bioinformatics. These papers cover a broad spectrum of topics in computational biology and bioinformatics, including DNA, protein and genome sequence analysis, gene expression and microarray analysis, computational proteomics and protein structure classification, systems biology and machine learning.

  7. Development of Bioinformatics Infrastructure for Genomics Research.

    PubMed

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for

  8. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    PubMed

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Bioinformatics of prokaryotic RNAs.

    PubMed

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes that comprise not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly high levels of antisense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, depend heavily on computational methods and workflows, many of which have been developed or at least adapted specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art in RNA bioinformatics, focusing on applications to prokaryotes.

  10. Bioinformatics of prokaryotic RNAs

    PubMed Central

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes that comprise not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly high levels of antisense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, depend heavily on computational methods and workflows, many of which have been developed or at least adapted specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art in RNA bioinformatics, focusing on applications to prokaryotes. PMID:24755880

  11. A survey of scholarly literature describing the field of bioinformatics education and bioinformatics educational research.

    PubMed

    Magana, Alejandra J; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the potential advancement of research and development in complex biomedical systems has created a need for an educated workforce in bioinformatics. However, effectively integrating bioinformatics education through formal and informal educational settings has been a challenge due in part to its cross-disciplinary nature. In this article, we seek to provide an overview of the state of bioinformatics education. This article identifies: 1) current approaches of bioinformatics education at the undergraduate and graduate levels; 2) the most common concepts and skills being taught in bioinformatics education; 3) pedagogical approaches and methods of delivery for conveying bioinformatics concepts and skills; and 4) assessment results on the impact of these programs, approaches, and methods in students' attitudes or learning. Based on these findings, it is our goal to describe the landscape of scholarly work in this area and, as a result, identify opportunities and challenges in bioinformatics education. © 2014 A. J. Magana et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  12. 4273π: bioinformatics education on low cost ARM hardware.

    PubMed

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course under GNU/Linux on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. The course is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews in Semester 1 of the academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  13. 4273π: Bioinformatics education on low cost ARM hardware

    PubMed Central

    2013-01-01

    Background Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course under GNU/Linux on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. Results We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. The course is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews in Semester 1 of the academic year 2012–2013. Conclusions 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost. PMID:23937194

  14. Bioinformatics-Aided Venomics

    PubMed Central

    Kaas, Quentin; Craik, David J.

    2015-01-01

    Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future. PMID:26110505
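
    As a toy illustration of the cleavage-site prediction problem mentioned above, here is a scan for one classical proprotein convertase recognition motif (R-X-[KR]-R, furin-like) over an invented precursor sequence. Real toxin precursors vary widely and many convertases have poorly characterized specificities, which is exactly the difficulty the review highlights:

```python
import re

# Classical furin-like proprotein convertase motif: R-X-[KR]-R.
PC_MOTIF = re.compile(r"R.[KR]R")

# Invented toxin-precursor-like sequence, for illustration only.
precursor = "MKTLLLTLVVVTIVCLDLGYTRSKRGCCSDPRCNYDHPEICGGAAGRKRSLE"

for m in PC_MOTIF.finditer(precursor):
    # Cleavage typically occurs C-terminal to the motif's final arginine.
    print(f"motif {m.group()} at {m.start()}..{m.end()}, cut after position {m.end()}")
```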

  15. Comprehensive Decision Tree Models in Bioinformatics

    PubMed Central

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Purpose Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. Methods This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach in which no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class
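
    The core idea, constraining tree dimensions rather than tuning against a performance metric, can be sketched with a minimal depth-bounded tree builder. This is a toy Gini-based implementation on invented 1-D data, not the authors' environment:

```python
# Toy depth-constrained decision tree: the size bound (max_depth) is fixed
# up front, independent of any classification performance measure.

def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def build(xs, ys, depth, max_depth=2):
    if depth == max_depth or len(set(ys)) == 1:
        return max(set(ys), key=ys.count)        # leaf: majority class
    best = None
    for t in sorted(set(xs))[1:]:                # candidate thresholds
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if best is None or score < best[0]:
            best = (score, t)
    t = best[1]
    lx = [x for x in xs if x < t]
    ly = [y for x, y in zip(xs, ys) if x < t]
    rx = [x for x in xs if x >= t]
    ry = [y for x, y in zip(xs, ys) if x >= t]
    return (t, build(lx, ly, depth + 1, max_depth), build(rx, ry, depth + 1, max_depth))

def predict(node, x):
    while isinstance(node, tuple):
        t, left, right = node
        node = left if x < t else right
    return node

xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = ["a", "a", "a", "b", "b", "b", "b", "b"]
tree = build(xs, ys, 0)
print(predict(tree, 2), predict(tree, 6))
```

    The tree can never exceed the predefined depth, mirroring the paper's "visual boundary" constraint; accuracy is whatever falls out of the size-bounded model.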

  16. Using bioinformatics and systems genetics to dissect HDL-cholesterol genetics in an MRL/MpJ x SM/J intercross.

    PubMed

    Leduc, Magalie S; Blair, Rachael Hageman; Verdugo, Ricardo A; Tsaih, Shirng-Wern; Walsh, Kenneth; Churchill, Gary A; Paigen, Beverly

    2012-06-01

    A higher incidence of coronary artery disease is associated with a lower level of HDL-cholesterol. We searched for genetic loci influencing HDL-cholesterol in F2 mice from a cross between MRL/MpJ and SM/J mice. Quantitative trait locus (QTL) mapping revealed one significant HDL QTL (the Apoa2 locus), four suggestive QTL on chromosomes 10, 11, 13, and 18, and four additional QTL on chromosomes 1 (proximal), 3, 4, and 7 after adjusting HDL for the strong Apoa2 locus. A novel nonsynonymous polymorphism supports Lipg as the QTL gene for the chromosome 18 QTL, and a difference in Abca1 expression in liver tissue supports it as the QTL gene for the chromosome 4 QTL. Using weighted gene co-expression network analysis, we identified a module that, after adjustment for Apoa2, correlated with HDL, was genetically determined by a QTL on chromosome 11, and overlapped with the HDL QTL. A combination of bioinformatics tools and systems genetics helped identify several candidate genes for both the chromosome 11 HDL and module QTL based on differential expression between the parental strains, cis regulation of expression, and causality modeling. We conclude that integrating systems genetics with a more traditional genetics approach improves the power of complex trait gene identification.
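
    "Adjusting HDL for the strong Apoa2 locus" amounts to regressing the phenotype on the locus genotype and working with the residuals. A minimal sketch with invented numbers (not study data):

```python
from statistics import mean

# Regress a phenotype on a genotype code and keep residuals, then correlate
# the adjusted phenotype with a module summary. Values are invented.

def residuals(y, x):
    # Simple least-squares line y ~ x; return y minus fitted values.
    mx, my = mean(x), mean(y)
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

def pearson(u, v):
    mu, mv = mean(u), mean(v)
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u) * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

apoa2 = [0, 0, 1, 1, 2, 2, 0, 2]                    # genotype codes
hdl = [60, 62, 75, 78, 90, 88, 58, 92]              # raw HDL phenotype
module = [0.8, 1.1, 0.4, 0.9, 0.2, 0.1, 1.0, 0.3]   # module eigengene

adj_hdl = residuals(hdl, apoa2)                     # HDL adjusted for Apoa2
print(round(pearson(adj_hdl, module), 3))
```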

  17. Bioinformatics and Moonlighting Proteins

    PubMed Central

    Hernández, Sergio; Franco, Luís; Calvo, Alejandra; Ferragut, Gabriela; Hermoso, Antoni; Amela, Isaac; Gómez, Antonio; Querol, Enrique; Cedano, Juan

    2015-01-01

    Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large numbers of sequences arising from genome projects. In the present work, we analyze and describe several approaches that use sequences, structures, interactomics, and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein–protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (e.g., with algorithms such as PISITE), and (e) mutation correlation analysis between amino acids with algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail to detect the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help to map the functional sites. Mutation correlation analysis can only be used in very specific situations – it requires the existence of multialigned family protein sequences – but can suggest how the evolutionary process of second-function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses. PMID:26157797

  18. Virtual Bioinformatics Distance Learning Suite

    ERIC Educational Resources Information Center

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  20. Design of Soil Salinity Policies with Tinamit, a Flexible and Rapid Tool to Couple Stakeholder-Built System Dynamics Models with Physically-Based Models

    NASA Astrophysics Data System (ADS)

    Malard, J. J.; Baig, A. I.; Hassanzadeh, E.; Adamowski, J. F.; Tuy, H.; Melgar-Quiñonez, H.

    2016-12-01

    Model coupling is a crucial step to constructing many environmental models, as it allows for the integration of independently-built models representing different system sub-components to simulate the entire system. Model coupling has been of particular interest in combining socioeconomic System Dynamics (SD) models, whose visual interface facilitates their direct use by stakeholders, with more complex physically-based models of the environmental system. However, model coupling processes are often cumbersome and inflexible and require extensive programming knowledge, limiting their potential for continued use by stakeholders in policy design and analysis after the end of the project. Here, we present Tinamit, a flexible Python-based model-coupling software tool whose easy-to-use API and graphical user interface make the coupling of stakeholder-built SD models with physically-based models rapid, flexible and simple for users with limited to no coding knowledge. The flexibility of the system allows end users to modify the SD model as well as the linking variables between the two models themselves with no need for recoding. We use Tinamit to couple a stakeholder-built socioeconomic model of soil salinization in Pakistan with the physically-based soil salinity model SAHYSMOD. As climate extremes increase in the region, policies to slow or reverse soil salinity buildup are increasing in urgency and must take both socioeconomic and biophysical spheres into account. We use the Tinamit-coupled model to test the impact of integrated policy options (economic and regulatory incentives to farmers) on soil salinity in the region in the face of future climate change scenarios. Use of the Tinamit model allowed for rapid and flexible coupling of the two models, allowing the end user to continue making model structure and policy changes. In addition, the clear interface (in contrast to most model coupling code) makes the final coupled model easily accessible to stakeholders with
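
    The coupling pattern a tool like Tinamit automates can be reduced to a lockstep loop that exchanges linked variables between two models each timestep. The sketch below uses two stand-in functions, not SAHYSMOD or a real stakeholder-built SD model:

```python
# Minimal model-coupling loop: a socioeconomic "SD" model and a physical
# model exchange their linked variables once per timestep. Both models
# here are invented stand-ins for illustration.

def sd_policy_model(salinity):
    # Socioeconomic response: irrigation efficiency rises with salinity.
    return min(1.0, 0.2 + 0.1 * salinity)

def physical_salinity_model(salinity, efficiency):
    # Salinity builds up each step, mitigated by irrigation efficiency.
    return max(0.0, salinity + 0.5 - 0.6 * efficiency)

salinity = 1.0
trajectory = [salinity]
for _ in range(10):                                   # coupled simulation loop
    efficiency = sd_policy_model(salinity)            # SD model reads salinity
    salinity = physical_salinity_model(salinity, efficiency)  # physical model reads policy
    trajectory.append(salinity)
print([round(s, 2) for s in trajectory])
```

    The point of a coupling framework is that the two models and the variable links stay swappable: a stakeholder can change the policy rule without touching the physical model, and vice versa.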

  1. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  2. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, J.L.

    1995-04-11

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the systems status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition. 6 figures.

  3. No moving parts safe & arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    DOEpatents

    Hendrix, James L.

    1995-01-01

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe & arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the systems status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe & arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel, and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel. Simultaneous event channels may also be utilized by optically splitting a single event channel. The built-in-test may be performed anytime prior to ordnance ignition.

  4. Combining chemoinformatics with bioinformatics: in silico prediction of bacterial flavor-forming pathways by a chemical systems biology approach "reverse pathway engineering".

    PubMed

    Liu, Mengjin; Bienfait, Bruno; Sacher, Oliver; Gasteiger, Johann; Siezen, Roland J; Nauta, Arjen; Geurts, Jan M W

    2014-01-01

    The incompleteness of genome-scale metabolic models is a major bottleneck for systems biology approaches, which are based on the large numbers of metabolites identified and quantified by metabolomics. Many of the revealed secondary metabolites and/or their derivatives, such as flavor compounds, are non-essential in metabolism, and many of their synthesis pathways are unknown. In this study, we describe a novel approach, Reverse Pathway Engineering (RPE), which combines chemoinformatics and bioinformatics analyses to predict the "missing links" between compounds of interest and their possible metabolic precursors by providing plausible chemical and/or enzymatic reactions. We demonstrate the added value of the approach using flavor-forming pathways in lactic acid bacteria (LAB) as an example. Established metabolic routes leading to the formation of flavor compounds from leucine were successfully replicated. Novel reactions involved in flavor formation, i.e., the conversion of alpha-hydroxy-isocaproate to 3-methylbutanoic acid and the synthesis of dimethyl sulfide, as well as the enzymes involved, were successfully predicted. These new insights into the flavor-formation mechanisms in LAB can have a significant impact on improving the control of aroma formation in fermented food products. Since the input reaction databases and compounds are highly flexible, the RPE approach can easily be extended to a broad spectrum of applications, among others health/disease biomarker discovery as well as synthetic biology.

  5. MEMOSys: Bioinformatics platform for genome-scale metabolic models

    PubMed Central

    2011-01-01

    Background Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. Results MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. Conclusions We have developed a web-based system designed to provide researchers with a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys. PMID:21276275
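
    The built-in version control that MEMOSys offers for model development can be illustrated with a toy snapshot store. This is a hypothetical sketch of the concept only, not the platform's actual Java EE implementation:

```python
import copy
import datetime

class ModelHistory:
    """Toy version store for a metabolic model: every commit keeps an
    immutable snapshot, so the complete developmental history stays accessible."""

    def __init__(self, model):
        self.versions = []              # list of (timestamp, comment, snapshot)
        self.commit(model, "initial import")

    def commit(self, model, comment):
        snap = copy.deepcopy(model)     # freeze the current state
        self.versions.append((datetime.datetime.now(), comment, snap))
        return len(self.versions) - 1   # version id

    def checkout(self, version_id):
        """Return a working copy of an earlier version."""
        return copy.deepcopy(self.versions[version_id][2])

# Hypothetical usage with a minimal reaction dictionary
model = {"reactions": {"PGI": {"glc6p": -1, "fru6p": 1}}}
hist = ModelHistory(model)
model["reactions"]["PFK"] = {"fru6p": -1, "fdp": 1}
hist.commit(model, "add phosphofructokinase")
old = hist.checkout(0)
print(sorted(old["reactions"]))  # → ['PGI']
```

    A production system would persist such snapshots in a database and attach user and role information to each commit, as the authorization features described in the abstract suggest.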

  6. No moving parts safe and arm apparatus and method with monitoring and built-in-test for optical firing of explosive systems

    SciTech Connect

    Hendrix, J.L.

    1994-12-31

    A laser initiated ordnance controller apparatus which provides a safe and arm scheme with no moving parts. The safe and arm apparatus provides isolation of firing energy to explosive devices using a combination of polarization isolation and control through acousto-optical deviation of laser energy pulses. The apparatus provides constant monitoring of the system's status and performs 100% built-in-test at any time prior to ordnance ignition without the risk of premature ignition or detonation. The apparatus has a computer controller, a solid state laser, an acousto-optic deflector and RF drive circuitry, built-in-test optics and electronics, and system monitoring capabilities. The optical system is completed from the laser beam power source to the pyrotechnic ordnance through fiber optic cabling, optical splitters and optical connectors. During operation of the apparatus, a command is provided by the computer controller and, simultaneous with laser flashlamp fire, the safe and arm device is opened for approximately 200 microseconds which allows the laser pulse to transmit through the device. The arm signal also energizes the laser power supply and activates the acousto-optical deflector. When the correct fire format command is received, the acousto-optic deflector moves to the selected event channel and the channel is verified to ensure the system is pointing to the correct position. Laser energy is transmitted through the fiber where an ignitor or detonator designed to be sensitive to optical pulses is fired at the end of the fiber channel.

  7. Translational bioinformatics in psychoneuroimmunology: methods and applications.

    PubMed

    Yan, Qing

    2012-01-01

    Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes including the behavioral-based profiles will contribute to the transition from disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of the neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors is needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talks among pathways in various brain regions involved in disorders such as Alzheimer's disease.

  8. Case study : performance of a house built on a treated wood foundation system in a cold climate

    Treesearch

    Charles G. Carll; Charles R. Boardman; Collin L. Olson

    2010-01-01

    Performance attributes of a home, constructed in 2001 in Madison, WI, on a treated-wood foundation system were investigated over a multiyear period. Temperature conditions in the basement of the building were, without exception, comfortable, even though the basement was not provided with supply registers for heating or cooling. Basement humidity conditions were...

  9. The Resilience of Structure Built around the Predicate: Homesign Gesture Systems in Turkish and American Deaf Children

    ERIC Educational Resources Information Center

    Goldin-Meadow, Susan; Namboodiripad, Savithry; Mylander, Carolyn; Özyürek, Asli; Sancar, Burcu

    2015-01-01

    Deaf children whose hearing losses prevent them from accessing spoken language and whose hearing parents have not exposed them to sign language develop gesture systems, called "homesigns", which have many of the properties of natural language--the so-called resilient properties of language. We explored the resilience of structure built…

  10. The Space Launch System -The Biggest, Most Capable Rocket Ever Built, for Entirely New Human Exploration Missions Beyond Earth's Orbit

    NASA Technical Reports Server (NTRS)

    Shivers, C. Herb

    2012-01-01

    NASA is developing the Space Launch System -- an advanced heavy-lift launch vehicle that will provide an entirely new capability for human exploration beyond Earth's orbit. The Space Launch System will provide a safe, affordable and sustainable means of reaching beyond our current limits and opening up new discoveries from the unique vantage point of space. The first developmental flight, or mission, is targeted for the end of 2017. The Space Launch System, or SLS, will be designed to carry the Orion Multi-Purpose Crew Vehicle, as well as important cargo, equipment and science experiments to Earth's orbit and destinations beyond. Additionally, the SLS will serve as a backup for commercial and international partner transportation services to the International Space Station. The SLS rocket will incorporate technological investments from the Space Shuttle Program and the Constellation Program in order to take advantage of proven hardware and cutting-edge tooling and manufacturing technology that will significantly reduce development and operations costs. The rocket will use a liquid hydrogen and liquid oxygen propulsion system, which will include the RS-25D/E from the Space Shuttle Program for the core stage and the J-2X engine for the upper stage. SLS will also use solid rocket boosters for the initial development flights, while follow-on boosters will be competed based on performance requirements and affordability considerations.

  11. Design study of a high-resolution breast-dedicated PET system built from cadmium zinc telluride detectors

    PubMed Central

    Peng, Hao; Levin, Craig S

    2013-01-01

    We studied the performance of a dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging using Monte Carlo simulation. The proposed system consists of two 4 cm thick 12 × 15 cm2 area cadmium zinc telluride (CZT) panels with adjustable separation, which can be put in close proximity to the breast and/or axillary nodes. Unique characteristics distinguishing the proposed system from previous efforts in breast-dedicated PET instrumentation are the deployment of CZT detectors with superior spatial and energy resolution, using a cross-strip electrode readout scheme to enable 3D positioning of individual photon interaction coordinates in the CZT, which includes directly measured photon depth-of-interaction (DOI), and arranging the detector slabs edge-on with respect to incoming 511 keV photons for high photon sensitivity. The simulation results show that the proposed CZT dual-panel PET system is able to achieve superior performance in terms of photon sensitivity, noise equivalent count rate, spatial resolution and lesion visualization. The proposed system is expected to achieve ~32% photon sensitivity for a point source at the center and a 4 cm panel separation. For a simplified breast phantom adjacent to heart and torso compartments, the peak noise equivalent count (NEC) rate is predicted to be ~94.2 kcts s−1 (breast volume: 720 cm3 and activity concentration: 3.7 kBq cm−3) for a ~10% energy window around 511 keV and ~8 ns coincidence time window. The system achieves 1 mm intrinsic spatial resolution anywhere between the two panels with a 4 cm panel separation if the detectors have DOI resolution less than 2 mm. For a 3 mm DOI resolution, the system exhibits excellent sphere resolution uniformity (σrms/mean ≤ 10%) across a 4 cm width FOV. Simulation results indicate that the system exhibits superior hot sphere visualization and is expected to visualize 2 mm diameter spheres with a 5:1 activity concentration ratio within roughly 7
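
    The sphere resolution uniformity figure quoted here is the RMS deviation of the reconstructed sphere sizes divided by their mean. A small sketch of that metric; the sample values below are hypothetical, not from the simulation:

```python
import math

def uniformity(values):
    """RMS deviation over mean: the sphere-resolution uniformity
    metric (sigma_rms / mean) quoted in the abstract."""
    mean = sum(values) / len(values)
    sigma_rms = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return sigma_rms / mean

# Hypothetical reconstructed sphere sizes (mm) across the field of view
sizes = [2.0, 2.1, 1.9, 2.05, 1.95]
print(f"{uniformity(sizes):.3f}")  # → 0.035, well under the 10% criterion
```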

  12. Design study of a high-resolution breast-dedicated PET system built from cadmium zinc telluride detectors

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Levin, Craig S.

    2010-05-01

    We studied the performance of a dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging using Monte Carlo simulation. The proposed system consists of two 4 cm thick 12 × 15 cm2 area cadmium zinc telluride (CZT) panels with adjustable separation, which can be put in close proximity to the breast and/or axillary nodes. Unique characteristics distinguishing the proposed system from previous efforts in breast-dedicated PET instrumentation are the deployment of CZT detectors with superior spatial and energy resolution, using a cross-strip electrode readout scheme to enable 3D positioning of individual photon interaction coordinates in the CZT, which includes directly measured photon depth-of-interaction (DOI), and arranging the detector slabs edge-on with respect to incoming 511 keV photons for high photon sensitivity. The simulation results show that the proposed CZT dual-panel PET system is able to achieve superior performance in terms of photon sensitivity, noise equivalent count rate, spatial resolution and lesion visualization. The proposed system is expected to achieve ~32% photon sensitivity for a point source at the center and a 4 cm panel separation. For a simplified breast phantom adjacent to heart and torso compartments, the peak noise equivalent count (NEC) rate is predicted to be ~94.2 kcts s-1 (breast volume: 720 cm3 and activity concentration: 3.7 kBq cm-3) for a ~10% energy window around 511 keV and ~8 ns coincidence time window. The system achieves 1 mm intrinsic spatial resolution anywhere between the two panels with a 4 cm panel separation if the detectors have DOI resolution less than 2 mm. For a 3 mm DOI resolution, the system exhibits excellent sphere resolution uniformity (σrms/mean ≤ 10%) across a 4 cm width FOV. Simulation results indicate that the system exhibits superior hot sphere visualization and is expected to visualize 2 mm diameter spheres with a 5:1 activity concentration ratio within roughly 7 min

  13. Design study of a high-resolution breast-dedicated PET system built from cadmium zinc telluride detectors.

    PubMed

    Peng, Hao; Levin, Craig S

    2010-05-07

    We studied the performance of a dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging using Monte Carlo simulation. The proposed system consists of two 4 cm thick 12 x 15 cm(2) area cadmium zinc telluride (CZT) panels with adjustable separation, which can be put in close proximity to the breast and/or axillary nodes. Unique characteristics distinguishing the proposed system from previous efforts in breast-dedicated PET instrumentation are the deployment of CZT detectors with superior spatial and energy resolution, using a cross-strip electrode readout scheme to enable 3D positioning of individual photon interaction coordinates in the CZT, which includes directly measured photon depth-of-interaction (DOI), and arranging the detector slabs edge-on with respect to incoming 511 keV photons for high photon sensitivity. The simulation results show that the proposed CZT dual-panel PET system is able to achieve superior performance in terms of photon sensitivity, noise equivalent count rate, spatial resolution and lesion visualization. The proposed system is expected to achieve approximately 32% photon sensitivity for a point source at the center and a 4 cm panel separation. For a simplified breast phantom adjacent to heart and torso compartments, the peak noise equivalent count (NEC) rate is predicted to be approximately 94.2 kcts s(-1) (breast volume: 720 cm(3) and activity concentration: 3.7 kBq cm(-3)) for a approximately 10% energy window around 511 keV and approximately 8 ns coincidence time window. The system achieves 1 mm intrinsic spatial resolution anywhere between the two panels with a 4 cm panel separation if the detectors have DOI resolution less than 2 mm. For a 3 mm DOI resolution, the system exhibits excellent sphere resolution uniformity (sigma(rms)/mean < or = 10%) across a 4 cm width FOV. Simulation results indicate that the system exhibits superior hot sphere visualization and is expected to visualize 2 mm diameter

  14. RNA Bioinformatics for Precision Medicine.

    PubMed

    Chen, Jiajia; Shen, Bairong

    2016-01-01

    The high-throughput transcriptomic data generated by deep sequencing technologies urgently require bioinformatics methods for proper data visualization, analysis, storage, and interpretation. The involvement of noncoding RNAs in human diseases highlights their potential as biomarkers and therapeutic targets to facilitate precision medicine. In this chapter, we give a brief overview of the bioinformatics tools to analyze different aspects of RNAs, in particular ncRNAs. We first describe the emerging bioinformatics methods for RNA identification, structure modeling, functional annotation, and network inference. This is followed by an introduction to the potential usefulness of ncRNAs as diagnostic and prognostic biomarkers and as therapeutic strategies.

  15. Towards an International Planetary Community Built on Open Source Software: the Evolution of the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Ramirez, P.; Hardman, S.; Hughes, J. S.

    2012-12-01

    Access to the worldwide planetary science research results from robotic exploration of the solar system has become a key driver in internationalizing the data standards from the Planetary Data System. The Planetary Data System, through international agency collaborations with the International Planetary Data Alliance (IPDA), has been developing a next generation set of data standards and technical implementation known as PDS4. PDS4 modernizes the PDS towards a world-wide online data system providing data and technical standards for improving access and interoperability among planetary archives. Since 2006, the IPDA has been working with the PDS to ensure that the next generation PDS is capable of allowing agency autonomy in building compatible archives while providing mechanisms to link the archives together. At the 7th International Planetary Data Alliance (IPDA) Meeting in Bangalore, India, the IPDA discussed and passed a resolution paving the way to adopt the PDS4 data standards. While the PDS4 standards have matured, another effort has been underway to move the PDS, a set of distributed discipline oriented science nodes, into a fully online, service-oriented architecture. In order to accomplish this goal, the PDS has been developing a core set of software components that form the basis for many of the functions needed by a data system. These include the ability to harvest, validate, register, search and distribute the data products defined by the PDS4 data standards. Rather than having each group build their own independent implementations, the intention is to ultimately govern the implementation of this software through an open source community. This will not only enable sharing of software among U.S. planetary science nodes, but also has the potential to improve international collaboration on both the core data management software and the associated tools. This presentation will discuss the progress in developing an open source infrastructure

  16. Global computing for bioinformatics.

    PubMed

    Loewe, Laurence

    2002-12-01

    Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster.
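
    The "structure of problems well suited" for global computing is embarrassing parallelism: independent work units that can be computed in any order and merged afterwards. A minimal local stand-in using a process pool; the work-unit function below is a placeholder, not any real project's payload:

```python
from multiprocessing import Pool

def score_candidate(params):
    """Stand-in for an independent work unit (e.g. one simulation run).
    Real global-computing projects ship such units to volunteer PCs."""
    a, b = params
    return a * a + b  # placeholder computation

if __name__ == "__main__":
    # Work units are independent, so they can be farmed out in any
    # order and the results merged afterwards.
    work_units = [(i, i % 3) for i in range(8)]
    with Pool(processes=4) as pool:
        results = pool.map(score_candidate, work_units)
    print(results)  # → [0, 2, 6, 9, 17, 27, 36, 50]
```

    Frameworks such as BOINC add what this sketch omits: distribution over the Internet, redundancy against unreliable volunteers, and result validation.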

  17. Evaluation of a photovoltaic energy mechatronics system with a built-in quadratic maximum power point tracking algorithm

    SciTech Connect

    Chao, R.M.; Ko, S.H.; Lin, I.H.; Pai, F.S.; Chang, C.C.

    2009-12-15

    The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter to a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second order polynomial formula, which converges faster than the existing MPPT algorithm. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
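
    The quadratic MPPT update described above fits a second-order polynomial through the last three (duty cycle, power) samples and jumps to its vertex. A sketch under that description only; the PV curve and numbers are hypothetical, and the fallback rule for a non-concave fit is an added assumption:

```python
def quadratic_mppt_step(d, p):
    """One quadratic MPPT step: fit P(d) = a*d^2 + b*d + c through the
    last three (duty cycle, power) samples and return the vertex
    -b/(2a) as the next duty cycle to try."""
    (d1, d2, d3), (p1, p2, p3) = d, p
    # Second-order fit coefficients (Lagrange form)
    denom = (d1 - d2) * (d1 - d3) * (d2 - d3)
    a = (d3 * (p2 - p1) + d2 * (p1 - p3) + d1 * (p3 - p2)) / denom
    b = (d3**2 * (p1 - p2) + d2**2 * (p3 - p1) + d1**2 * (p2 - p3)) / denom
    if a >= 0:                      # no interior maximum; fall back to best sample
        return max(zip(p, d))[1]
    return -b / (2 * a)

# Toy PV power curve with its maximum at d = 0.55 (hypothetical numbers)
pv = lambda d: -(d - 0.55) ** 2 + 1.0
duties = (0.3, 0.5, 0.7)
powers = tuple(pv(x) for x in duties)
print(round(quadratic_mppt_step(duties, powers), 3))  # → 0.55
```

    Because the vertex of the fitted parabola is computed in closed form, each step lands near the maximum directly, which is the source of the faster convergence claimed over perturb-and-observe style trackers.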

  18. Proliferation and ajmalicine biosynthesis of Catharanthus roseus (L.) G. Don adventitious roots in self-built temporary immersion system

    NASA Astrophysics Data System (ADS)

    Phuc, Vo Thanh; Trung, Nguyen Minh; Thien, Huynh Tri; Tien, Le Thi Thuy

    2017-09-01

    Periwinkle (Catharanthus roseus (L.) G. Don) is a medicinal plant containing about 130 types of alkaloids that have important pharmacological effects. Ajmalicine in periwinkle root is an antihypertensive drug used in treatment of high blood pressure. Adventitious roots obtained from periwinkle leaves of in vitro shoots grew well in quarter-strength MS medium supplemented with 0.3 mg/l IBA and 20 g/l sucrose. Dark conditions were more suitable for root growth than light. However, callus formation also took place in addition to the growth of adventitious roots. Temporary immersion system was applied in the culture of adventitious roots in order to reduce the callus growth rate formed in shake flask cultures. The highest growth index of roots was achieved using the system with 5-min immersion every 45 min (1.676 ± 0.041). The roots cultured in this system grew well without callus formation. Ajmalicine content was highest in the roots cultured with 5-min immersion every 180 min (950 μg/g dry weight).

  19. Autophagy Regulatory Network — A systems-level bioinformatics resource for studying the mechanism and regulation of autophagy

    PubMed Central

    Türei, Dénes; Földvári-Nagy, László; Fazekas, Dávid; Módos, Dezső; Kubisch, János; Kadlecsik, Tamás; Demeter, Amanda; Lenti, Katalin; Csermely, Péter; Vellai, Tibor; Korcsmáros, Tamás

    2015-01-01

    Autophagy is a complex cellular process having multiple roles, depending on tissue, physiological, or pathological conditions. Major post-translational regulators of autophagy are well known, however, they have not yet been collected comprehensively. The precise and context-dependent regulation of autophagy necessitates additional regulators, including transcriptional and post-transcriptional components that are listed in various datasets. Prompted by the lack of systems-level autophagy-related information, we manually collected the literature and integrated external resources to build a high-coverage autophagy database. We developed an online resource, Autophagy Regulatory Network (ARN; http://autophagy-regulation.org), to provide an integrated and systems-level database for autophagy research. ARN contains manually curated, imported, and predicted interactions of autophagy components (1,485 proteins with 4,013 interactions) in humans. We listed 413 transcription factors and 386 miRNAs that could regulate autophagy components or their protein regulators. We also connected the above-mentioned autophagy components and regulators with signaling pathways from the SignaLink 2 resource. The user-friendly website of ARN allows researchers without computational background to search, browse, and download the database. The database can be downloaded in SQL, CSV, BioPAX, SBML, PSI-MI, and Cytoscape CYS file formats. ARN has the potential to facilitate the experimental validation of novel autophagy components and regulators. In addition, ARN helps the investigation of transcription factors, miRNAs and signaling pathways implicated in the control of the autophagic pathway. The list of such known and predicted regulators could be important in pharmacological attempts against cancer and neurodegenerative diseases. PMID:25635527

  20. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    PubMed

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  1. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    PubMed Central

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  2. Modular Zero Energy. BrightBuilt Home

    SciTech Connect

    Aldrich, Robb; Butterfield, Karla

    2016-03-01

    With funding from the Building America Program, part of the U.S. Department of Energy Building Technologies Office, the Consortium for Advanced Residential Buildings (CARB) worked with BrightBuilt Home (BBH) to evaluate and optimize building systems. CARB’s work focused on a home built by Black Bros. Builders in Lincolnville, Maine (International Energy Conservation Code Climate Zone 6). As with most BBH projects to date, modular boxes were built by Keiser Homes in Oxford, Maine.

  3. Natural and built environmental exposures on children's active school travel: A Dutch global positioning system-based cross-sectional study.

    PubMed

    Helbich, Marco; Emmichoven, Maarten J Zeylmans van; Dijst, Martin J; Kwan, Mei-Po; Pierik, Frank H; Vries, Sanne I de

    2016-05-01

    Physical inactivity among children is on the rise. Active transport to school (ATS), namely walking and cycling there, adds to children's activity level. Little is known about how exposures along actual routes influence children's transport behavior. This study examined how natural and built environments influence mode choice among Dutch children aged 6-11 years. 623 school trips were tracked with a global positioning system. Natural and built environmental exposures were determined by means of a geographic information system and their associations with children's active/passive mode choice were analyzed using mixed models. The actual commuted distance is inversely associated with ATS when only personal, traffic safety, and weather features are considered. When the model is adjusted for urban environments, the results are reversed and distance is no longer significant, whereas well-connected streets and cycling lanes are positively associated with ATS. Neither green space nor weather is significant. As distance is not apparent as a constraining travel determinant when moving through urban landscapes, planning authorities should support children's ATS by providing well-designed cities.

  4. Development of kinematic 3D laser scanning system for indoor mapping and as-built BIM using constrained SLAM.

    PubMed

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-10-16

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy.
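
    The constrained adjustment rests on the assumption that indoor structures consist of parallel or orthogonal line features. A much-simplified sketch of that idea, snapping extracted line directions onto a dominant axis and its orthogonal; the tolerance and snapping rule here are illustrative assumptions, not the paper's estimator:

```python
def constrain_lines(angles_deg, tol_deg=10.0):
    """Snap extracted 2D line directions (degrees, modulo 180) onto the
    first line's direction or its orthogonal, when within tolerance.
    Lines that fit neither constraint are left unchanged."""
    dominant = angles_deg[0]
    adjusted = []
    for a in angles_deg:
        rel = (a - dominant) % 180.0
        # nearest of: parallel (0 or 180) or orthogonal (90)
        snapped = min((0.0, 90.0, 180.0), key=lambda t: abs(rel - t))
        if abs(rel - snapped) <= tol_deg:
            adjusted.append((dominant + snapped) % 180.0)
        else:
            adjusted.append(a)  # non-conforming line: keep as observed
    return adjusted

# Noisy extracted directions: two near-parallel walls, one orthogonal, one outlier
print(constrain_lines([31.0, 29.2, 121.5, 75.0]))  # → [31.0, 31.0, 121.0, 75.0]
```

    The paper's actual method folds these constraints into a least-squares adjustment of the line parameters inside the SLAM loop, which is what reduces the line uncertainties and stabilizes data association.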

  5. Development of Kinematic 3D Laser Scanning System for Indoor Mapping and As-Built BIM Using Constrained SLAM

    PubMed Central

    Jung, Jaehoon; Yoon, Sanghyun; Ju, Sungha; Heo, Joon

    2015-01-01

    The growing interest and use of indoor mapping is driving a demand for improved data-acquisition facility, efficiency and productivity in the era of the Building Information Model (BIM). The conventional static laser scanning method suffers from some limitations on its operability in complex indoor environments, due to the presence of occlusions. Full scanning of indoor spaces without loss of information requires that surveyors change the scanner position many times, which incurs extra work for registration of each scanned point cloud. Alternatively, a kinematic 3D laser scanning system, proposed herein, uses a line-feature-based Simultaneous Localization and Mapping (SLAM) technique for continuous mapping. Moreover, to reduce the uncertainty of line-feature extraction, we incorporated constrained adjustment based on an assumption made with respect to typical indoor environments: that the main structures are formed of parallel or orthogonal line features. The superiority of the proposed constrained adjustment is its reduction of the uncertainties of the adjusted lines, leading to a successful data association process. In the present study, kinematic scanning with and without constrained adjustment were comparatively evaluated in two test sites, and the results confirmed the effectiveness of the proposed system. The accuracy of the 3D mapping result was additionally evaluated by comparison with the reference points acquired by a total station: the Euclidean average distance error was 0.034 m for the seminar room and 0.043 m for the corridor, which satisfied the error tolerance for point cloud acquisition (0.051 m) according to the guidelines of the General Services Administration for BIM accuracy. PMID:26501292

  6. Integration of bioinformatics to biodegradation

    PubMed Central

    2014-01-01

    Bioinformatics and biodegradation are two primary scientific fields in applied microbiology and biotechnology. The present review describes the development of various bioinformatics tools that may be applied in the field of biodegradation. Several databases, including the University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD), a database of biodegradative oxygenases (OxDBase), the Biodegradation Network-Molecular Biology Database (Bionemo), MetaCyc, and BioCyc, have been developed to enable access to information related to the biochemistry and genetics of microbial degradation. In addition, several bioinformatics tools for predicting the toxicity and biodegradation of chemicals have been developed. Furthermore, the whole genomes of several potential degrading bacteria have been sequenced and annotated using bioinformatics tools. PMID:24808763

  7. Genome Exploitation and Bioinformatics Tools

    NASA Astrophysics Data System (ADS)

    de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.

    Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the number of strains to be screened. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging due to low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools and strategies to identify bacteriocins in genomes are discussed.

  8. Microarray-based bioinformatics analysis of the combined effects of SiNPs and PbAc on cardiovascular system in zebrafish.

    PubMed

    Hu, Hejing; Zhang, Yannan; Shi, Yanfeng; Feng, Lin; Duan, Junchao; Sun, Zhiwei

    2017-10-01

    With the rapid development of nanotechnology and growing environmental pollution, the combined toxic effects of SiNPs and heavy-metal pollutants such as lead have received global attention. The aim of this study was to explore the cardiovascular effects of co-exposure to SiNPs and lead acetate (PbAc) in zebrafish using microarray and bioinformatics analysis. Although zebrafish co-exposed to SiNPs and PbAc at the NOAEL level showed no obvious cardiovascular malformation apart from a bleeding phenotype, bradycardia, angiogenesis inhibition and declined cardiac output, significant changes were observed in mRNA and microRNA (miRNA) expression patterns. STC-GO analysis indicated that the co-exposure might have more toxic effects on the cardiovascular system than either exposure alone. Key differentially expressed genes were identified from the Dynamic-gene-network, including stxbp1a, ndfip2, celf4 and gsk3b. Furthermore, several miRNAs obtained from the miRNA-Gene-Network might play crucial roles in cardiovascular disease, such as dre-miR-93, dre-miR-34a, dre-miR-181c, dre-miR-7145, dre-miR-730, dre-miR-129-5p, dre-miR-19d, dre-miR-218b and dre-miR-221. In addition, the miRNA-pathway-network analysis indicated that co-exposure to SiNPs and PbAc stressed the zebrafish, which might disturb calcium homeostasis and cause endoplasmic reticulum stress; as a result, cardiac muscle contraction might deteriorate. In general, our data provide abundant fundamental research clues to the combined toxicity of environmental pollutants, and further in-depth verification is needed. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Survey of MapReduce frame operation in bioinformatics.

    PubMed

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty processing the large-scale data produced by high-throughput sequencing. The open-source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics.
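    The MapReduce model the survey describes can be sketched in miniature: a map function emits key-value pairs from each record, and a reduce function aggregates them per key. The k-mer-counting example below is a hypothetical illustration, not code from any tool covered in the survey; in Hadoop the shuffle between the two phases is handled by the framework.

```python
from collections import defaultdict
from itertools import chain

def map_kmers(read, k=3):
    """Map step: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_counts(pairs):
    """Reduce step: sum the counts emitted for each k-mer."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["GATTACA", "ATTAC"]
pairs = chain.from_iterable(map_kmers(r) for r in reads)  # shuffle omitted
print(reduce_counts(pairs))
```

    Because each read is mapped independently and each key is reduced independently, both phases parallelize naturally across a cluster, which is the property Hadoop exploits.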

  10. VLSI Microsystem for Rapid Bioinformatic Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Lue, Jaw-Chyng

    2009-01-01

    A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).

  11. The roots of bioinformatics in theoretical biology.

    PubMed

    Hogeweg, Paulien

    2011-03-01

    From the late 1980s onward, the term "bioinformatics" mostly has been used to refer to computational methods for comparative analysis of genome data. However, the term was originally more widely defined as the study of informatic processes in biotic systems. In this essay, I will trace this early history (from a personal point of view) and I will argue that the original meaning of the term is re-emerging.

  12. ballaxy: web services for structural bioinformatics.

    PubMed

    Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas

    2015-01-01

    Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: minor modifications to the Galaxy system, a collection of tools, and an integration with the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be included in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Docker images for ballaxy are also available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved.

  13. Bioinformatics resource manager v2.3: an integrated software environment for systems biology with microRNA and cross-species analysis tools

    PubMed Central

    2012-01-01

    Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 µM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human orthologs. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis

  14. Adapting bioinformatics curricula for big data

    PubMed Central

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  15. Built to disappear.

    PubMed

    Bauer, Siegfried; Kaltenbrunner, Martin

    2014-06-24

    Microelectronics dominates the technological and commercial landscape of today's electronics industry; ultrahigh density integrated circuits on rigid silicon provide the computing power for smart appliances that help us organize our daily lives. Integrated circuits function flawlessly for decades, yet we like to replace smart phones and tablet computers every year. Disposable electronics, built to disappear in a controlled fashion after the intended lifespan, may be one of the potential applications of transient single-crystalline silicon nanomembranes, reported by Hwang et al. in this issue of ACS Nano. We briefly outline the development of this latest branch of electronics research, and we present some prospects for future developments. Electronics is steadily evolving, and 20 years from now we may find it perfectly normal for smart appliances to be embedded everywhere, on textiles, on our skin, and even in our body.

  16. Hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis method for mid-frequency analysis of built-up systems with epistemic uncertainties

    NASA Astrophysics Data System (ADS)

    Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan

    2017-09-01

    Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by focal elements and basic probability assignments (BPAs) and dealt with using evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency responses of interest, such as the ensemble average of the energy response and the cross-spectrum response, are calculated analytically using the conventional hybrid FE/SEA method. Inspired by probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. In order to alleviate the computational burden of the extreme value analysis, the sub-interval perturbation technique based on a first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
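    The core idea of the sub-interval perturbation technique, bounding a response over an interval-valued parameter via a first-order Taylor expansion about the interval midpoint, can be sketched for a scalar toy response. The paper applies this to FE/SEA mid-frequency responses over each focal element; the response function and numbers below are illustrative assumptions.

```python
def taylor_interval_bounds(f, dfdx, x0, delta):
    """First-order sub-interval perturbation: approximate the lower and
    upper bounds of f over [x0 - delta, x0 + delta] by a Taylor
    expansion about the interval midpoint (a sketch of the idea, not
    the paper's FE/SEA formulation)."""
    spread = abs(dfdx(x0)) * delta
    return f(x0) - spread, f(x0) + spread

# Hypothetical response that grows quadratically with a stiffness-like parameter x.
f = lambda x: x ** 2
dfdx = lambda x: 2 * x
lo, hi = taylor_interval_bounds(f, dfdx, x0=3.0, delta=0.1)
print(lo, hi)  # close to the exact bounds 8.41 and 9.61
```

    Splitting a wide focal element into narrow sub-intervals and applying this bound on each keeps the linearization error small, which is why the technique cheapens the extreme value analysis.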

  17. Evaluation of anaerobic sludge volume for improving azo dye decolorization in a hybrid anaerobic reactor with built-in bioelectrochemical system.

    PubMed

    Cui, Min-Hua; Cui, Dan; Gao, Lei; Wang, Ai-Jie; Cheng, Hao-Yi

    2017-02-01

    A hybrid anaerobic reactor with a built-in bioelectrochemical system (BES) has been verified for efficiently treating mixed azo dye wastewater, yet it still faces many challenges, such as uncertain reactor construction and insufficient electron donors. In this study, an up-flow hybrid anaerobic reactor with a built-in BES was developed for treating wastewater containing acid orange 7 (AO7). The cathode and real domestic wastewater both served as electron donors driving azo dye decolorization. The decolorization efficiency (DE) of AO7 (200 mg/L) in the hybrid reactor was 80.34 ± 2.11% with a volume ratio between anaerobic sludge and cathode (VRslu:cat) of 0.5:1 and a hydraulic retention time (HRT) of 6 h, which was 15.79% higher than that in a BES without a sludge zone. DE improved to 86.02 ± 1.49% when VRslu:cat was increased to 1:1. With further increases in VRslu:cat to 1.5:1 and 2:1, the chemical oxygen demand (COD) removal efficiency continued to improve, to 28.78 ± 1.96% and 32.19 ± 0.62%, but there was no obvious elevation in DE (which increased slightly to 87.62 ± 2.50% and 90.13 ± 3.10%). The BES exhibited efficient electron utilization, with electron usage ratios (EURs) fluctuating between 11.02 and 13.06 mol e(-)/mol AO7, less than half of the 24.73-32.06 mol e(-)/mol AO7 observed in the sludge zone. The present work optimized the volume ratio between anaerobic sludge and cathode, which would be meaningful for the practical application of this hybrid system.

  18. The growing need for microservices in bioinformatics

    PubMed Central

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective

  19. The growing need for microservices in bioinformatics.

    PubMed

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and
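    As a hypothetical illustration of the microservice idea described above (not code from the article): a service with a deliberately narrow functional scope, here computing GC content over HTTP, built with only the Python standard library. The endpoint and names are made up.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def gc_content(seq):
    """The one narrow job this service does: GC fraction of a sequence."""
    seq = seq.upper()
    return sum(base in "GC" for base in seq) / max(len(seq), 1)

class GCHandler(BaseHTTPRequestHandler):
    """Single-purpose handler: the sequence is taken from the URL path."""
    def do_GET(self):
        seq = self.path.strip("/")
        body = json.dumps({"sequence": seq, "gc": round(gc_content(seq), 3)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), GCHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = json.loads(urlopen(f"http://127.0.0.1:{server.server_port}/GATTACA").read())
print(reply)
server.shutdown()
```

    Because the service exposes one small, well-defined contract, it can be replaced, scaled or redeployed independently of the rest of a pipeline, which is the maintenance benefit the article emphasizes.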

  20. Automation of Bioinformatics Workflows using CloVR, a Cloud Virtual Resource

    PubMed Central

    Vangala, Mahesh

    2013-01-01

    Exponential growth of biological data, driven mainly by revolutionary developments in NGS technologies over the past few years, has created a multitude of challenges in downstream data analysis using bioinformatics approaches. To handle such a tsunami of data, bioinformatics analyses must be carried out in an automated and parallel fashion. A successful analysis often requires more than a few computational steps, and bootstrapping these individual steps (scripts) into components, and the components into pipelines, makes bioinformatics a reproducible and manageable segment of scientific research. CloVR (http://clovr.org) is one such flexible framework that facilitates the abstraction of bioinformatics workflows into executable pipelines. CloVR comes packaged with various built-in bioinformatics pipelines that can make use of multicore processing power when run on servers and/or the cloud. CloVR is also amenable to building custom pipelines based on individual laboratory requirements. CloVR is available as a single executable virtual image file that comes bundled with pre-installed and pre-configured bioinformatics tools and packages, and thus circumvents cumbersome installation difficulties. CloVR is highly portable and can be run on traditional desktop/laptop computers, central servers and cloud compute farms. In conclusion, CloVR provides built-in automated analysis pipelines for microbial genomics, with scope to develop and integrate custom workflows that make use of parallel processing power when run on compute clusters, thereby addressing the bioinformatics challenges associated with NGS data.
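    The bootstrapping of individual steps into components, and components into a pipeline, can be sketched as follows. This is a generic illustration, not CloVR's actual API; the component functions are hypothetical stand-ins for real analysis scripts.

```python
def quality_filter(reads, min_len=5):
    """Component 1: drop short reads (a stand-in for a QC script)."""
    return [r for r in reads if len(r) >= min_len]

def deduplicate(reads):
    """Component 2: remove exact duplicate reads, preserving order."""
    return list(dict.fromkeys(reads))

def run_pipeline(reads, steps):
    """Chain components so the whole analysis is one reproducible call."""
    for step in steps:
        reads = step(reads)
    return reads

raw = ["GATTACA", "ACG", "GATTACA", "TTAGGCA"]
print(run_pipeline(raw, [quality_filter, deduplicate]))
```

    Once steps are wrapped as components with a common interface, the same pipeline definition can be re-run unchanged on a laptop, a server or a cloud cluster, which is the reproducibility argument made above.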

  1. Generalized Centroid Estimators in Bioinformatics

    PubMed Central

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable for those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
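    For instance, one well-known member of this family, the γ-centroid estimator, reduces in the simplest setting to thresholding posterior marginal probabilities at 1/(γ + 1), with γ = 1 recovering the ordinary centroid (0.5 threshold). The sketch below illustrates that special case; the marginal values are made up.

```python
def gamma_centroid(marginals, gamma=1.0):
    """Generalized centroid estimator on a binary space: predict 1 where
    the posterior marginal exceeds 1/(gamma + 1). gamma = 1 gives the
    ordinary centroid; larger gamma trades PPV for sensitivity."""
    threshold = 1.0 / (gamma + 1.0)
    return [int(p > threshold) for p in marginals]

p = [0.9, 0.6, 0.4, 0.1]            # e.g. per-position posterior marginals
print(gamma_centroid(p))            # gamma = 1: threshold 0.5
print(gamma_centroid(p, gamma=4))   # threshold 0.2: more positives
```

    Sweeping γ traces out the sensitivity/PPV trade-off, which is how such estimators are matched to the accuracy measure of the target problem.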

  2. A reliable, low-cost picture archiving and communications system for small and medium veterinary practices built using open-source technology.

    PubMed

    Iotti, Bryan; Valazza, Alberto

    2014-10-01

    Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the secure storage and accessibility of diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact, low-cost solution based on open-source technologies at the Veterinary Teaching Hospital of the University of Torino, Italy, during the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open-source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org ). This solution features very high data security and an ergonomic interface that provides easy access to a large amount of imaging data. The system has been in active use for almost 2 years now and has proven to be a scalable, cost-effective solution for practices ranging from small to very large, where the use of different hardware combinations allows scaling to different deployments, while the use of paravirtualization allows increased security and easy migrations and upgrades.

  3. The potential of translational bioinformatics approaches for pharmacology research.

    PubMed

    Li, Lang

    2015-10-01

    The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of 'omics' to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. © 2015 The British Pharmacological Society.

  4. The potential of translational bioinformatics approaches for pharmacology research

    PubMed Central

    Li, Lang

    2015-01-01

    The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of ‘omics’ to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. PMID:25753093

  5. Bioinformatics: perspectives for the future.

    PubMed

    Costa, Luciano da Fontoura

    2004-12-30

    I give here a very personal perspective of Bioinformatics and its future, starting by discussing the origin of the term (and area) of bioinformatics and proceeding by trying to foresee the development of related issues, including pattern recognition/data mining, the need to reintegrate biology, the potential of complex networks as a powerful and flexible framework for bioinformatics and the interplay between bio- and neuroinformatics. Human resource formation and market perspective are also addressed. Given the complexity and vastness of these issues and concepts, as well as the limited size of a scientific article and finite patience of the reader, these perspectives are surely incomplete and biased. However, it is expected that some of the questions and trends that are identified will motivate discussions during the IcoBiCoBi round table (with the same name as this article) and perhaps provide a more ample perspective among the participants of that conference and the readers of this text.

  6. Bioinformatics/biostatistics: microarray analysis.

    PubMed

    Eichler, Gabriel S

    2012-01-01

    The quantity and complexity of the molecular-level data generated in both research and clinical settings require the use of sophisticated, powerful computational interpretation techniques. It is for this reason that bioinformatic analysis of complex molecular profiling data has become a fundamental technology in the development of personalized medicine. This chapter provides a high-level overview of the field of bioinformatics and outlines several classic bioinformatic approaches. The highlighted approaches can be aptly applied to nearly any sort of high-dimensional genomic, proteomic, or metabolomic experiment. Reviewed technologies in this chapter include traditional clustering analysis, the Gene Expression Dynamics Inspector (GEDI), GoMiner, Gene Set Enrichment Analysis (GSEA), and the Learner of Functional Enrichment (LeFE).
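    As an illustration of the kind of computation behind over-representation analysis (a simplified sketch, not the algorithm of any specific tool named above): the hypergeometric upper-tail p-value for a gene set, using only the standard library. The gene counts are hypothetical.

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper-tail p-value for gene-set over-representation:
    probability of drawing >= k set members when n genes are selected at
    random from a universe of N genes containing K set members."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Toy numbers: 20-gene universe, 5-gene pathway, 4 of our 6 hits in the pathway.
p = enrichment_pvalue(N=20, K=5, n=6, k=4)
print(round(p, 4))
```

    Tools of this family repeat such a test across many gene sets and then correct for multiple testing; the single-set computation above is the common core.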

  7. Generations of interdisciplinarity in bioinformatics

    PubMed Central

    Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L.

    2016-01-01

    Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life sciences, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this “borderland.” As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are distinctions between the forerunners, founders and the followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature. PMID:27453689

  8. Generations of interdisciplinarity in bioinformatics.

    PubMed

    Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L

    2016-04-02

    Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life sciences, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this "borderland." As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are distinctions between the forerunners, founders and the followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature.

  9. Chapter 16: text mining for translational bioinformatics.

    PubMed

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research (translating basic science results into new interventions) and that of T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
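    A minimal sketch of the rule-based (knowledge-based) approach mentioned above: hand-written patterns matched against text. The entity types and regular expressions are hypothetical simplifications; real systems layer many such rules and must resolve the ambiguity the abstract describes.

```python
import re

# Toy knowledge-based rules: one pattern per entity type.
RULES = [
    ("GENE", re.compile(r"\b[A-Z][A-Z0-9]{1,5}\b")),  # e.g. TP53, BRCA1
    ("DOSE", re.compile(r"\b\d+(?:\.\d+)?\s*mg\b")),  # e.g. 50 mg
]

def tag(text):
    """Return (entity_type, matched_text) pairs found by the rules."""
    hits = []
    for label, pattern in RULES:
        hits.extend((label, m.group()) for m in pattern.finditer(text))
    return hits

print(tag("Patients with TP53 mutations received 50 mg of drug X daily."))
```

    A statistical system would instead learn such patterns from annotated corpora; hybrid systems combine both, using rules for high-precision cases and learned models for coverage.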

  10. Bioinformatic pipelines in Python with Leaf.

    PubMed

    Napolitano, Francesco; Mariani-Costantini, Renato; Tagliaferri, Roberto

    2013-06-21

    An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamic development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user's Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools.
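
The "pipeline formality on top of a dynamic development process" idea can be sketched in plain Python. This is not Leaf's own syntax (which is a dedicated formal language); it is a generic illustration of the underlying pattern: steps declare their dependencies, and results are cached so re-running the pipeline skips completed work (a crude form of session persistence).

```python
# Generic sketch of pipeline formality layered over ordinary Python code.
# This is NOT Leaf's actual API, just an illustration of the idea.
class Pipeline:
    def __init__(self):
        self.steps = {}   # step name -> (function, dependency names)
        self.cache = {}   # step name -> computed result

    def step(self, *deps):
        # Decorator factory: register a function as a named pipeline step.
        def register(fn):
            self.steps[fn.__name__] = (fn, deps)
            return fn
        return register

    def run(self, name):
        # Resolve dependencies recursively; cache results (persistence).
        if name not in self.cache:
            fn, deps = self.steps[name]
            self.cache[name] = fn(*(self.run(d) for d in deps))
        return self.cache[name]

pipe = Pipeline()

@pipe.step()
def load():
    return [3, 1, 2]

@pipe.step("load")
def sort_data(data):
    return sorted(data)
```

Calling `pipe.run("sort_data")` executes `load` first, caches both results, and returns the sorted data.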

  11. Bioinformatic pipelines in Python with Leaf

    PubMed Central

    2013-01-01

    Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamic development process with minimum overhead for the programmer, thus providing a simple layer of software structuring. Results Leaf includes a formal language for the definition of pipelines with code that can be transparently inserted into the user’s Python code. Its syntax is designed to visually highlight dependencies in the pipeline structure it defines. While encouraging the developer to think in terms of bioinformatic pipelines, Leaf supports a number of automated features including data and session persistence, consistency checks between steps of the analysis, processing optimization and publication of the analytic protocol in the form of a hypertext. Conclusions Leaf offers a powerful balance between plan-driven and change-driven development environments in the design, management and communication of bioinformatic pipelines. Its unique features make it a valuable alternative to other related tools. PMID:23786315

  12. Bioinformatics of cardiovascular miRNA biology.

    PubMed

    Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas

    2015-12-01

    MicroRNAs (miRNAs) are small ~22 nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis including phylogenetic comparisons as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway analysis, and functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and systemic effects of miRNAs, underlining the strong therapeutic potential of miRNA and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'.
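
The simplest form of the functional target prediction mentioned above is a seed match: scanning a 3'UTR for the reverse complement of the miRNA "seed" (nucleotides 2-8). The sketch below assumes RNA alphabet sequences; the UTR sequence is invented, and real predictors (e.g. the tools discussed in the review) add conservation and context scoring on top of this.

```python
# Sketch of seed-based miRNA target prediction: find positions in a 3'UTR
# that are perfectly complementary to the miRNA seed (positions 2-8).
# The UTR below is an invented toy sequence.
COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna, utr):
    seed = mirna[1:8]                          # seed region, nucleotides 2-8
    target = seed.translate(COMPLEMENT)[::-1]  # reverse complement (RNA)
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

# One candidate site in a toy UTR, using an illustrative miRNA sequence.
sites = seed_sites("UAGCUUAUCAGACUGAUGUUGA", "GGGAUAAGCUCCC")
```

Such exact seed matches produce many false positives, which is one reason the review stresses coupling prediction with experimental validation.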

  13. Bioinformatics: indispensable, yet hidden in plain sight?

    PubMed

    Bartlett, Andrew; Penders, Bart; Lewis, Jamie

    2017-06-21

    Bioinformatics has multitudinous identities, organisational alignments and disciplinary links. This variety allows bioinformaticians and bioinformatic work to contribute to much (if not most) of life science research in profound ways. The multitude of bioinformatic work also translates into a multitude of credit-distribution arrangements, apparently dismissing that work. We report on the epistemic and social arrangements that characterise the relationship between bioinformatics and life science. We describe, in sociological terms, the character, power and future of bioinformatic work. The character of bioinformatic work is such that its cultural, institutional and technical structures allow for it to be black-boxed easily. The result is that bioinformatic expertise and contributions travel easily and quickly, yet remain largely uncredited. The power of bioinformatic work is shaped by its dependency on life science work, which combined with the black-boxed character of bioinformatic expertise further contributes to situating bioinformatics on the periphery of the life sciences. Finally, the imagined futures of bioinformatic work suggest that bioinformatics will become ever more indispensable without necessarily becoming more visible, forcing bioinformaticians into difficult professional and career choices. Bioinformatic expertise and labour are epistemically central but often institutionally peripheral. In part, this is a result of the ways in which the character, power distribution and potential futures of bioinformatics are constituted. However, alternative paths can be imagined.

  14. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    PubMed

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
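
The mathematics of tree enumeration mentioned above has a compact closed form: the number of distinct unrooted binary trees on n labelled taxa is the double factorial (2n-5)!! = 1 x 3 x 5 x ... x (2n-5), which is precisely why exhaustive tree search becomes infeasible beyond roughly a dozen taxa. A minimal sketch:

```python
# Counting unrooted binary trees on n labelled taxa: (2n-5)!! for n >= 3.
# This growth rate is the standard motivation for heuristic tree search.
def double_factorial(k):
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def unrooted_tree_count(n_taxa):
    if n_taxa < 3:
        return 1
    return double_factorial(2 * n_taxa - 5)
```

For example, 4 taxa admit 3 distinct unrooted trees, 5 taxa admit 15, and 10 taxa already admit over two million.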

  15. Mathematics and evolutionary biology make bioinformatics education comprehensible

    PubMed Central

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  16. Visualising "Junk" DNA through Bioinformatics

    ERIC Educational Resources Information Center

    Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia

    2005-01-01

    One of the hottest areas of science today is the field in which biology, information technology,and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…

  17. Reproducible Bioinformatics Research for Biologists

    USDA-ARS?s Scientific Manuscript database

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  18. Bioinformatics and the Undergraduate Curriculum

    ERIC Educational Resources Information Center

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  1. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system

    NASA Astrophysics Data System (ADS)

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-04-01

    In this study, two modes of hybrid anaerobic digestion (AD) bioreactor with built-in BESs (electrodes installed in the liquid phase (R1) and the sludge phase (R2)) were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. Decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1 where electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient than those in the sludge phase for azo dye removal, which offers useful guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters.

  2. Effect of electrode position on azo dye removal in an up-flow hybrid anaerobic digestion reactor with built-in bioelectrochemical system

    PubMed Central

    Cui, Min-Hua; Cui, Dan; Lee, Hyung-Sool; Liang, Bin; Wang, Ai-Jie; Cheng, Hao-Yi

    2016-01-01

    In this study, two modes of hybrid anaerobic digestion (AD) bioreactor with built-in BESs (electrodes installed in the liquid phase (R1) and the sludge phase (R2)) were tested to identify the effect of electrode position on azo dye wastewater treatment. Alizarin yellow R (AYR) was used as a model dye. Decolorization efficiency of R1 was 90.41 ± 6.20% at an influent loading rate of 800 g-AYR/m³·d, which was 39% higher than that of R2. The contribution of bioelectrochemical reduction to AYR decolorization (16.23 ± 1.86% for R1 versus 22.24 ± 2.14% for R2) implied that although azo dye was mainly removed in the sludge zone, the BES further improved the effluent quality, especially for R1 where electrodes were installed in the liquid phase. The microbial communities in the electrode biofilms (dominated by Enterobacter) and sludge (dominated by Enterococcus) were well distinguished in R1, but they were similar in R2. These results suggest that electrodes installed in the liquid phase of the anaerobic hybrid system are more efficient than those in the sludge phase for azo dye removal, which offers useful guidance for applying the AD-BES hybrid process to the treatment of various refractory wastewaters. PMID:27121278

  3. Coupling of a distributed stakeholder-built system dynamics socio-economic model with SAHYSMOD for sustainable soil salinity management - Part 1: Model development

    NASA Astrophysics Data System (ADS)

    Inam, Azhar; Adamowski, Jan; Prasher, Shiv; Halbe, Johannes; Malard, Julien; Albano, Raffaele

    2017-08-01

    Effective policies, leading to sustainable management solutions for land and water resources, require a full understanding of interactions between socio-economic and physical processes. However, the complex nature of these interactions, combined with limited stakeholder engagement, hinders the incorporation of socio-economic components into physical models. The present study addresses this challenge by integrating the physical Spatial Agro Hydro Salinity Model (SAHYSMOD) with a participatory group-built system dynamics model (GBSDM) that includes socio-economic factors. A stepwise process to quantify the GBSDM is presented, along with governing equations and model assumptions. Sub-modules of the GBSDM, describing agricultural, economic, water and farm management factors, are linked together with feedbacks and finally coupled with the physically based SAHYSMOD model through commonly used tools (i.e., MS Excel and a Python script). The overall integrated model (GBSDM-SAHYSMOD) can be used to help facilitate the role of stakeholders with limited expertise and resources in model and policy development and implementation. Following the development of the integrated model, a testing methodology was used to validate the structure and behavior of the integrated model. Model robustness under different operating conditions was also assessed. The model structure was able to produce anticipated real behaviours under the tested scenarios, from which it can be concluded that the formulated structures generate the right behaviour for the right reasons.
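
The coupling pattern described above (a socio-economic system dynamics model exchanging state with a physical model each time step, here via MS Excel and a Python script) can be sketched generically. All variable names and equations below are invented for illustration; they are not SAHYSMOD's or the GBSDM's actual formulations.

```python
# Generic sketch of a two-way model coupling loop: a socio-economic module
# and a physical module exchange state through a shared dictionary each step.
# The stylized feedback and coefficients are illustrative only.
def socio_economic_step(state):
    # GBSDM side: farmers reduce irrigation as salinity rises.
    state["irrigation"] = max(0.0, 1.0 - 0.5 * state["salinity"])
    return state

def physical_step(state):
    # Physical side: salinity accumulates with irrigation, partially leaches.
    state["salinity"] = 0.9 * state["salinity"] + 0.1 * state["irrigation"]
    return state

def run_coupled(years):
    state = {"salinity": 0.2, "irrigation": 1.0}
    for _ in range(years):
        state = socio_economic_step(state)  # socio-economic feedback
        state = physical_step(state)        # physical response
    return state
```

Even this toy loop exhibits the qualitative point of the paper: the coupled system settles to an equilibrium that neither module would predict in isolation.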

  4. Implementing bioinformatic workflows within the bioextract server.

    PubMed

    Lushbough, Carol M; Bergman, Michael K; Lawrence, Carolyn J; Jennewein, Doug; Brendel, Volker

    2008-01-01

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed service designed to provide researchers with the web ability to query multiple data sources, save results as searchable data sets, and execute analytic tools. As the researcher works with the system, their tasks are saved in the background. At any time these steps can be saved as a workflow that can then be executed again and/or modified later.

  5. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    NASA Technical Reports Server (NTRS)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

    A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is introduced. Our results show the newly trained ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
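
The motivation for a sigmoid-logarithmic transfer function can be sketched as follows: compressing the wide dynamic range of fluorescence intensities with a logarithm before applying the sigmoid, so that weak signals are not flattened toward zero. This is a hedged illustration of the general idea only; the exact function and parameters used in the paper are not reproduced here.

```python
import math

# Illustrative sigmoid-logarithmic activation: log-compress a non-negative
# fluorescence intensity, then apply the standard logistic sigmoid.
# The epsilon guard and the composition are assumptions, not the paper's spec.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_sigmoid_transfer(intensity, eps=1e-6):
    # Log compression preserves resolution among weak (low-intensity) signals.
    return sigmoid(math.log(intensity + eps))
```

Because the logarithm expands the low-intensity end of the scale, nearby weak signals map to more separable activations than under a plain sigmoid.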

  6. Mobyle: a new full web bioinformatics framework

    PubMed Central

    Néron, Bertrand; Ménager, Hervé; Maufrais, Corinne; Joly, Nicolas; Maupetit, Julien; Letort, Sébastien; Carrere, Sébastien; Tuffery, Pierre; Letondal, Catherine

    2009-01-01

    Motivation: For the biologist, running bioinformatics analyses involves a time-consuming management of data and tools. Users need support to organize their work, retrieve parameters and reproduce their analyses. They also need to be able to combine their analytic tools using a safe data flow software mechanism. Finally, given that scientific tools can be difficult to install, it is particularly helpful for biologists to be able to use these tools through a web user interface. However, providing a web interface for a set of tools raises the problem that a single web portal cannot offer all the existing and possible services: it is the user, again, who has to cope with data copy among a number of different services. A framework enabling portal administrators to build a network of cooperating services would therefore clearly be beneficial. Results: We have designed a system, Mobyle, to provide a flexible and usable Web environment for defining and running bioinformatics analyses. It embeds simple yet powerful data management features that allow the user to reproduce analyses and to combine tools using a hierarchical typing system. Mobyle offers invocation of services distributed over remote Mobyle servers, thus enabling a federated network of curated bioinformatics portals without the user having to learn complex concepts or to install sophisticated software. While being focused on the end user, the Mobyle system also addresses the need, for the bioinformatician, to automate remote services execution: PlayMOBY is a companion tool that automates the publication of BioMOBY web services, using Mobyle program definitions. Availability: The Mobyle system is distributed under the terms of the GNU GPLv2 on the project web site (http://bioweb2.pasteur.fr/projects/mobyle/). It is already deployed on three servers: http://mobyle.pasteur.fr, http://mobyle.rpbs.univ-paris-diderot.fr and http://lipm-bioinfo.toulouse.inra.fr/Mobyle. The PlayMOBY companion is distributed under the

  8. Developing library bioinformatics services in context: the Purdue University Libraries bioinformationist program

    PubMed Central

    Rein, Diane C.

    2006-01-01

    Setting: Purdue University is a major agricultural, engineering, biomedical, and applied life science research institution with an increasing focus on bioinformatics research that spans multiple disciplines and campus academic units. The Purdue University Libraries (PUL) hired a molecular biosciences specialist to discover, engage, and support bioinformatics needs across the campus. Program Components: After an extended period of information needs assessment and environmental scanning, the specialist developed a week of focused bioinformatics instruction (Bioinformatics Week) to launch system-wide, library-based bioinformatics services. Evaluation Mechanisms: The specialist employed a two-tiered approach to assess user information requirements and expectations. The first phase involved careful observation and collection of information needs in-context throughout the campus, attending laboratory meetings, interviewing department chairs and individual researchers, and engaging in strategic planning efforts. Based on the information gathered during the integration phase, several survey instruments were developed to facilitate more critical user assessment and the recovery of quantifiable data prior to planning. Next Steps/Future Directions: Given information gathered while working with clients and through formal needs assessments, as well as the success of instructional approaches used in Bioinformatics Week, the specialist is developing bioinformatics support services for the Purdue community. The specialist is also engaged in training PUL faculty librarians in bioinformatics to provide a sustaining culture of library-based bioinformatics support and understanding of Purdue's bioinformatics-related decision and policy making. PMID:16888666

  9. Developing library bioinformatics services in context: the Purdue University Libraries bioinformationist program.

    PubMed

    Rein, Diane C

    2006-07-01

    Purdue University is a major agricultural, engineering, biomedical, and applied life science research institution with an increasing focus on bioinformatics research that spans multiple disciplines and campus academic units. The Purdue University Libraries (PUL) hired a molecular biosciences specialist to discover, engage, and support bioinformatics needs across the campus. After an extended period of information needs assessment and environmental scanning, the specialist developed a week of focused bioinformatics instruction (Bioinformatics Week) to launch system-wide, library-based bioinformatics services. The specialist employed a two-tiered approach to assess user information requirements and expectations. The first phase involved careful observation and collection of information needs in-context throughout the campus, attending laboratory meetings, interviewing department chairs and individual researchers, and engaging in strategic planning efforts. Based on the information gathered during the integration phase, several survey instruments were developed to facilitate more critical user assessment and the recovery of quantifiable data prior to planning. Given information gathered while working with clients and through formal needs assessments, as well as the success of instructional approaches used in Bioinformatics Week, the specialist is developing bioinformatics support services for the Purdue community. The specialist is also engaged in training PUL faculty librarians in bioinformatics to provide a sustaining culture of library-based bioinformatics support and understanding of Purdue's bioinformatics-related decision and policy making.

  10. Bioinformatics in the information age

    SciTech Connect

    Spengler, Sylvia J.

    2000-02-01

    There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics (the shotgun marriage between biology and mathematics, computer science, and engineering) is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather, bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control

  11. Coupling of a distributed stakeholder-built system dynamics socio-economic model with SAHYSMOD for sustainable soil salinity management. Part 2: Model coupling and application

    NASA Astrophysics Data System (ADS)

    Inam, Azhar; Adamowski, Jan; Prasher, Shiv; Halbe, Johannes; Malard, Julien; Albano, Raffaele

    2017-08-01

    Many simulation models focus on simulating a single physical process and do not constitute balanced representations of the physical, social and economic components of a system. The present study addresses this challenge by integrating a physical (P) model (SAHYSMOD) with a group (stakeholder) built system dynamics model (GBSDM) through a component modeling approach based on widely applied tools such as MS Excel, Python and Visual Basic for Applications (VBA). The coupled model (P-GBSDM) was applied to test soil salinity management scenarios (proposed by stakeholders) for the Haveli region of the Rechna Doab Basin in Pakistan. Scenarios such as water banking, vertical drainage, canal lining, and irrigation water reallocation were simulated with the integrated model. Spatiotemporal maps and economic and environmental trade-off criteria were used to examine the effectiveness of the selected management scenarios. After 20 years of simulation, canal lining reduced soil salinity by 22% but caused an initial reduction of 18% in farm income, which requires an initial investment from the government. The government-sponsored Salinity Control and Reclamation Project (SCARP) is a short-term policy that resulted in a 37% increase in water availability with a 12% increase in farmer income. However, it showed detrimental effects on soil salinity in the long term, with a 21% increase in soil salinity due to secondary salinization. The new P-GBSDM was shown to be an effective platform for engaging stakeholders and simulating their proposed management policies while taking into account socioeconomic considerations. This was not possible using the physically based SAHYSMOD model alone.

  12. Big data bioinformatics.

    PubMed

    Greene, Casey S; Tan, Jie; Ung, Matthew; Moore, Jason H; Cheng, Chao

    2014-12-01

    Recent technological advances allow for high throughput profiling of biological systems in a cost-efficient manner. The low cost of data generation is leading us to the "big data" era. The availability of big data provides unprecedented opportunities but also raises new challenges for data mining and analysis. In this review, we introduce key concepts in the analysis of big data, including "machine learning" algorithms, with examples of both "supervised" and "unsupervised" approaches. We note packages for the R programming language that are available to perform machine learning analyses. In addition to programming based solutions, we review web servers that allow users with limited or no programming background to perform these analyses on large data compendia.
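
The supervised/unsupervised distinction named above can be shown with two deliberately tiny, dependency-free sketches: a nearest-centroid classifier (labels are given at training time) and a two-group split around the mean (no labels at all). The one-dimensional toy data are invented; the review itself points to R packages and web servers for real analyses.

```python
# Supervised: a nearest-centroid classifier learns one centroid per class
# from labelled examples, then assigns new points to the closest centroid.
def nearest_centroid_fit(xs, labels):
    centroids = {}
    for lab in set(labels):
        pts = [x for x, l in zip(xs, labels) if l == lab]
        centroids[lab] = sum(pts) / len(pts)
    return centroids

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

# Unsupervised: partition unlabelled points into two groups around the mean,
# a degenerate one-step cousin of k-means clustering.
def mean_split(xs):
    mu = sum(xs) / len(xs)
    return [[x for x in xs if x < mu], [x for x in xs if x >= mu]]
```

The supervised model needs the labels to exist; the unsupervised split discovers structure without them, which is why the two families answer different biological questions.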

  13. Impacts of small built infrastructure in inland valleys in Burkina Faso and Mali: Rationale for a systems approach that thinks beyond rice?

    NASA Astrophysics Data System (ADS)

    Katic, Pamela; Lautze, Jonathan; Namara, Regassa E.

    The potential to increase agricultural production in inland valleys in West Africa has received a good degree of attention in both national development strategies and academic literature, and improving agriculture productivity in inland valleys has been an active area of donor engagement. Despite this attention, documentation of the degree to which benefits are enhanced through construction of built water storage infrastructure in such sites is somewhat scant. This paper examines evidence from eight inland valley sites with recently-built water retention infrastructure (4 in southwest Burkina Faso, 4 in southeast Mali) to determine how economic returns derived from agricultural production have changed through built infrastructure construction. Farmer interviews were undertaken at each site to identify costs and benefits of agricultural production before and after small built infrastructure construction. Overall results indicate that net present value increased substantially after built infrastructure was constructed. The results nonetheless highlight substantial variation in economic impacts across sites. A central variable explaining such variation appears to be the degree to which water retention is exploited for groundwater-based offseason cultivation. These findings will help development planners to better predict the degree and nature of change engendered by water storage projects in inland valley sites, and help to ground-truth grand statements about the development potential of this piece of natural infrastructure.
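
The net present value criterion used above to compare pre- and post-infrastructure returns is a standard discounted sum of annual net benefits. A minimal sketch, with an illustrative discount rate and cash flows (not the study's data):

```python
# Net present value of a stream of annual net benefits.
# cashflows[0] occurs now (year 0); later entries are discounted by the rate.
def npv(rate, cashflows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))
```

For example, an upfront cost of 100 followed by a benefit of 110 one year later breaks even exactly at a 10% discount rate, which is why the choice of discount rate can flip the apparent viability of storage infrastructure.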

  14. Omics technologies, data and bioinformatics principles.

    PubMed

    Schneider, Maria V; Orchard, Sandra

    2011-01-01

    We provide an overview of the state of the art of Omics technologies, the types of omics data and the bioinformatics resources relevant and related to Omics. We also illustrate the bioinformatics challenges of dealing with high-throughput data. This overview touches several fundamental aspects of Omics and bioinformatics: data standardisation, data sharing, storing Omics data appropriately and exploring Omics data in bioinformatics. Though the principles and concepts presented hold across the various technological fields, we concentrate on three main Omics fields, namely genomics, transcriptomics and proteomics. Finally, we address the integration of Omics data and provide several useful links for bioinformatics and Omics.

  15. Nuclear reactors built, being built, or planned 1993

    SciTech Connect

    Not Available

    1993-08-01

    Nuclear Reactors Built, Being Built, or Planned contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1993. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow, together with a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: civilian, production, military, export, and critical assembly.

  16. Tools and collaborative environments for bioinformatics research

    PubMed Central

    Giugno, Rosalba; Pulvirenti, Alfredo

    2011-01-01

    Advanced research requires intensive interaction among a multitude of actors, often possessing different expertise and usually working at a distance from each other. The field of collaborative research aims to establish suitable models and technologies to properly support these interactions. In this article, we first present the reasons for the interest of Bioinformatics in this context, and suggest some research domains that could benefit from collaborative research. We then review the principles and some of the most relevant applications of social networking, with special attention to networks supporting scientific collaboration, and highlight some critical issues, such as identification of users and standardization of formats. We then introduce some systems for collaborative document creation, including wiki systems and tools for ontology development, and review some of the most interesting biological wikis. We also review the principles of Collaborative Development Environments for software and show some examples in Bioinformatics. Finally, we present the principles and some examples of Learning Management Systems. In conclusion, we try to devise some of the goals to be achieved in the short term for the exploitation of these technologies. PMID:21984743

  17. Using Bioinformatics Approach to Explore the Pharmacological Mechanisms of Multiple Ingredients in Shuang-Huang-Lian

    PubMed Central

    Zhang, Bai-xia; Li, Jian; Gu, Hao; Li, Qiang; Zhang, Qi; Zhang, Tian-jiao; Wang, Yun; Cai, Cheng-ke

    2015-01-01

    Owing to its proven clinical efficacy, Shuang-Huang-Lian (SHL) has been developed into a variety of dosage forms. However, in-depth research on the targets and pharmacological mechanisms of SHL preparations has been scarce. In the present study, bioinformatics approaches were adopted to integrate relevant data and biological information. As a result, a PPI network was built and its common topological parameters were characterized. The results suggested that the PPI network of SHL exhibits a scale-free property and modular architecture. The drug target network of SHL was structured with 21 functional modules. According to certain modules and the distribution of pharmacological effects, an antitumor effect and potential drug targets were predicted. A biological network containing 26 subnetworks was constructed to elucidate the antipneumonia mechanism of SHL. We also extracted a subnetwork to explicitly display the pathway by which one effective component acts on the pneumonia-related targets. In conclusion, a bioinformatics approach was established for exploring the drug targets, pharmacological activity distribution and effective components of SHL, and its antipneumonia mechanism. Above all, we identified the effective components and disclosed the mechanism of SHL from a systems perspective. PMID:26495421

  18. Protein bioinformatics applied to virology.

    PubMed

    Mohabatkar, Hassan; Keyhanfar, Mehrnaz; Behbahani, Mandana

    2012-09-01

    Scientists have united in a common search to sequence, store and analyze genes and proteins. In this regard, rapidly evolving bioinformatics methods are providing valuable information on these newly discovered molecules. Understanding what has been done, and what can be done in silico, is essential in designing new experiments. The imbalance between sequence-known proteins and attribute-known proteins has called for the development of computational methods and high-throughput automated tools for fast and reliable prediction or identification of various characteristics of uncharacterized proteins. Taking into consideration the role of viruses in causing diseases and their use in biotechnology, the present review describes the application of protein bioinformatics in virology. A number of important features of viral proteins, such as epitope prediction, protein docking, subcellular localization and viral protease cleavage sites, together with computer-based comparison of these aspects, are discussed. This paper also describes several tools developed principally for viral bioinformatics. Prediction of viral protein features and awareness of advances in this field can support a basic understanding of the relationship between a virus and its host.

  19. Bioinformatic identification of plant peptides.

    PubMed

    Lease, Kevin A; Walker, John C

    2010-01-01

    Plant peptides play a number of important roles in defence, development and many other aspects of plant physiology. Identifying additional peptide sequences provides the starting point to investigate their function using molecular, genetic or biochemical techniques. Because peptides are small, the default bioinformatic approaches that work well for average-sized proteins may fail to identify them. There are two general scenarios related to the bioinformatic identification of peptides discussed in this paper. In the first scenario, one already has the sequence of a plant peptide and is trying to find more plant peptides with some sequence similarity to the starting peptide. To do this, the Basic Local Alignment Search Tool (BLAST) is employed, with the parameters adjusted to be more favourable for identifying potential peptide matches. The second scenario involves trying to identify plant peptides without using sequence similarity searches to known plant peptides. In this approach, features such as protein size and the presence of a cleavable amino-terminal signal peptide are used to screen annotated proteins. A variation of this method can be used to screen for unannotated peptides in genomic sequences. Bioinformatic resources related to Arabidopsis thaliana are used to illustrate these approaches.
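    The second scenario, screening annotated proteins by size and the presence of a signal peptide, can be sketched as follows. The length cutoffs and the crude hydrophobic-stretch heuristic are illustrative assumptions; real pipelines use dedicated predictors such as SignalP:

```python
import re

def is_peptide_precursor(seq, min_len=40, max_len=120):
    """Heuristic screen for small secreted peptide precursors:
    a short ORF plus a crude N-terminal signal-peptide pattern
    (a hydrophobic stretch within the first ~30 residues).
    Cutoffs and pattern are illustrative, not validated values."""
    if not (min_len <= len(seq) <= max_len):
        return False
    return re.search(r"[AVLIFMWC]{8,}", seq[:30]) is not None

# Hypothetical annotated proteins (made-up sequences)
proteins = {
    "small_secreted": "MKLAVLLLAVFLVAS" + "QRST" * 10,
    "large_enzyme":   "M" + "ACDEFGHIKLMNPQRSTVWY" * 20,
}
hits = [name for name, seq in proteins.items() if is_peptide_precursor(seq)]
print(hits)  # → ['small_secreted']
```

    Only the short protein with a hydrophobic N-terminal run passes; the large enzyme is filtered out by size alone.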

  20. SU-F-P-35: A Multi-Institutional Plan Quality Checking Tool Built On Oncospace: A Shared Radiation Oncology Database System

    SciTech Connect

    Bowers, M; Robertson, S; Moore, J; Wong, J; Phillips, M; Hendrickson, K; Evans, K; McNutt, T

    2016-06-15

    Purpose: Late toxicity from radiation to critical structures limits the possible dose in radiation therapy. Perfectly conformal treatment of a target is not realizable, so the clinician must accept a certain level of collateral radiation to nearby OARs. But how much? General guidelines for healthy tissue sparing guide RT treatment planning, but are these guidelines good enough to create the optimal plan given the individualized patient anatomy? We propose a means to evaluate the planned dose level to an OAR using a multi-institutional data store of previously treated patients, so a clinician might reconsider planning objectives. Methods: The tool is built on Oncospace, a federated data-store system, which consists of planning data import, web-based analysis tools, and a database containing: (1) DVHs: dose by percent volume delivered to each ROI for each patient previously treated and included in the database; and (2) Overlap Volume Histograms (OVHs): an anatomical measure defined as the percent volume of an ROI within a given distance of target structures. Clinicians know which OARs are important to spare. For any ROI, Oncospace knows for which previously treated patients' anatomy that ROI was harder to plan (the OVH is less). The planned dose should be close to the least dose of previous patients. The tool displays the dose those OARs were subjected to, and the clinician can make a determination about the planning objectives used. Multiple institutions contribute to the Oncospace Consortium, and their DVH and OVH data are combined and color coded in the output. Results: The Oncospace website provides a plan quality display tool which identifies harder-to-treat patients and graphically displays the dose delivered to them for comparison with the proposed plan. Conclusion: The Oncospace Consortium manages a data store of previously treated patients which can be used for quality checking new plans. Grant funding by Elekta.
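    As defined in the abstract, an OVH records the percent volume of an ROI lying within a given distance of the target structures. A minimal sketch of that computation (hypothetical voxel distances; not Oncospace code):

```python
def overlap_volume_histogram(roi_distances, thresholds):
    """Percent of OAR volume lying within each distance of the target.
    roi_distances: distance (mm) from each OAR voxel to the target;
    negative values would indicate overlap with the target itself."""
    n = len(roi_distances)
    return {t: 100.0 * sum(d <= t for d in roi_distances) / n
            for t in thresholds}

# Hypothetical OAR voxel-to-target distances (mm)
distances = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
ovh = overlap_volume_histogram(distances, thresholds=[5, 10, 20])
print(ovh)  # → {5: 20.0, 10: 50.0, 20: 100.0}
```

    A patient whose curve rises faster (more OAR volume close to the target) is "harder to plan" in the sense the abstract describes.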

  1. ExPASy: SIB bioinformatics resource portal.

    PubMed

    Artimo, Panu; Jonnalagedda, Manohar; Arnold, Konstantin; Baratin, Delphine; Csardi, Gabor; de Castro, Edouard; Duvaud, Séverine; Flegel, Volker; Fortier, Arnaud; Gasteiger, Elisabeth; Grosdidier, Aurélien; Hernandez, Céline; Ioannidis, Vassilios; Kuznetsov, Dmitry; Liechti, Robin; Moretti, Sébastien; Mostaguir, Khaled; Redaschi, Nicole; Rossier, Grégoire; Xenarios, Ioannis; Stockinger, Heinz

    2012-07-01

    ExPASy (http://www.expasy.org) has a worldwide reputation as one of the main bioinformatics resources for proteomics. It has now evolved, becoming an extensible and integrative portal accessing many scientific resources, databases and software tools in different areas of life sciences. Scientists can henceforth access seamlessly a wide range of resources in many different domains, such as proteomics, genomics, phylogeny/evolution, systems biology, population genetics, transcriptomics, etc. The individual resources (databases, web-based and downloadable software tools) are hosted in a 'decentralized' way by different groups of the SIB Swiss Institute of Bioinformatics and partner institutions. Specifically, a single web portal provides a common entry point to a wide range of resources developed and operated by different SIB groups and external institutions. The portal features a search function across 'selected' resources. Additionally, the availability and usage of resources are monitored. The portal is aimed at both expert users and people who are not familiar with a specific domain in life sciences. The new web interface provides, in particular, visual guidance for newcomers to ExPASy.

  2. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers.

    PubMed

    Schneider, Maria V; Walter, Peter; Blatter, Marie-Claude; Watson, James; Brazas, Michelle D; Rother, Kristian; Budd, Aidan; Via, Allegra; van Gelder, Celia W G; Jacob, Joachim; Fernandes, Pedro; Nyrönen, Tommi H; De Las Rivas, Javier; Blicher, Thomas; Jimenez, Rafael C; Loveland, Jane; McDowall, Jennifer; Jones, Phil; Vaughan, Brendan W; Lopez, Rodrigo; Attwood, Teresa K; Brooksbank, Catherine

    2012-05-01

    Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response to the development of 'high-throughput biology', the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes to planning and preparing to meet such training needs, tension arises between the reward structures that predominate in the scientific community, which compel individuals to publish or perish, and the time that must be devoted to the design, delivery and maintenance of high-quality training materials. Conversely, there is much relevant teaching material and training expertise available worldwide that, were it properly organized, could be exploited by anyone who needs to provide training or needs to set up a new course. To do this, however, the materials would have to be centralized in a database and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs, and compare it with similar initiatives and collections.

  3. Cellular automata and its applications in protein bioinformatics.

    PubMed

    Xiao, Xuan; Wang, Pu; Chou, Kuo-Chen

    2011-09-01

    With the explosion of protein sequences generated in the postgenomic era, it is highly desirable to develop high-throughput tools for rapidly and reliably identifying various attributes of uncharacterized proteins based on their sequence information alone. The knowledge thus obtained can help us timely utilize these newly found protein sequences for both basic research and drug discovery. Many bioinformatics tools have been developed by means of machine learning methods. This review is focused on the applications of a new kind of science (cellular automata) in protein bioinformatics. A cellular automaton (CA) is an open, flexible and discrete dynamic model that holds enormous potentials in modeling complex systems, in spite of the simplicity of the model itself. Researchers, scientists and practitioners from different fields have utilized cellular automata for visualizing protein sequences, investigating their evolution processes, and predicting their various attributes. Owing to its impressive power, intuitiveness and relative simplicity, the CA approach has great potential for use as a tool for bioinformatics.
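    For readers unfamiliar with the model itself, an elementary one-dimensional CA illustrates the point that very simple local rules can produce complex global behaviour. This is a generic Wolfram-style automaton, not the protein-specific automata the review covers:

```python
def step(cells, rule=110):
    """One synchronous update of an elementary cellular automaton.
    Each cell's next state is looked up from the 3-cell neighborhood
    (with wraparound) in the 8-bit rule number."""
    n = len(cells)
    out = []
    for i in range(n):
        neighborhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neighborhood) & 1)
    return out

cells = [0] * 10 + [1] + [0] * 10  # a single seed cell
for _ in range(5):
    cells = step(cells)
print(sum(cells))  # number of live cells after 5 steps
```

    Despite the one-line update rule, rule 110 is known to be computationally universal, which is the kind of "enormous potential from a simple model" the review highlights.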

  4. Bioinformatic approaches to identifying and classifying Rab proteins.

    PubMed

    Diekmann, Yoan; Pereira-Leal, José B

    2015-01-01

    The bioinformatic annotation of Rab GTPases is important, for example, to understand the evolution of the endomembrane system. However, Rabs are particularly challenging for standard annotation pipelines because they are similar to other small GTPases and form a large family with many paralogous subfamilies. Here, we describe a bioinformatic annotation pipeline specifically tailored to Rab GTPases. It proceeds in two steps: first, Rabs are distinguished from other proteins based on GTPase-specific motifs, overall sequence similarity to other Rabs, and the occurrence of Rab-specific motifs. Second, Rabs are classified taking either a more accurate but slower phylogenetic approach or a slightly less accurate but much faster bioinformatic approach. All necessary steps can either be performed locally or using the referenced online tools. An implementation of a slightly more involved version of the pipeline presented here is available at RabDB.org.
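    The two-step structure (a coarse motif filter, then subfamily classification) can be caricatured in a few lines. The motif pattern, subfamily fragments and k-mer scoring below are illustrative stand-ins, not the actual RabDB.org models:

```python
import re

# Illustrative GTPase G-box pattern (the G3/DxxG region of small GTPases);
# the real pipeline combines several Rab-specific motifs with similarity scores.
G_BOX = re.compile(r"D[ST]AGQ")

def looks_like_gtpase(seq):
    """Step 1: coarse filter on a GTPase-specific motif."""
    return bool(G_BOX.search(seq))

def classify_by_kmers(seq, subfamily_profiles, k=3):
    """Step 2: assign the subfamily sharing the most k-mers (toy scoring)."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    def score(profile):
        pk = {profile[i:i + k] for i in range(len(profile) - k + 1)}
        return len(kmers & pk)
    return max(subfamily_profiles, key=lambda name: score(subfamily_profiles[name]))

profiles = {  # hypothetical subfamily consensus fragments
    "Rab1": "MNPEYDYLFKLLLIGDSGVGKSCLLLRFADDTY",
    "Rab5": "MASRGATRPNGPNTGNKICQFKLVLLGESAVGKSSLVLRFVKGQF",
}
query = "MNPEYDYLFKLLLIGDTAGQSCLLLRF"  # made-up query sequence
if looks_like_gtpase(query):
    print(classify_by_kmers(query, profiles))  # → Rab1
```

    The pipeline described in the abstract replaces the toy scoring with either a phylogenetic placement (more accurate, slower) or a trained bioinformatic classifier (slightly less accurate, much faster).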

  5. An approach to regional wetland digital elevation model development using a differential global positioning system and a custom-built helicopter-based surveying system

    USGS Publications Warehouse

    Jones, J.W.; Desmond, G.B.; Henkle, C.; Glover, R.

    2012-01-01

    Accurate topographic data are critical to restoration science and planning for the Everglades region of South Florida, USA. They are needed to monitor and simulate water level, water depth and hydroperiod and are used in scientific research on hydrologic and biologic processes. Because large wetland environments and data acquisition challenge conventional ground-based and remotely sensed data collection methods, the United States Geological Survey (USGS) adapted a classical data collection instrument to global positioning system (GPS) and geographic information system (GIS) technologies. Data acquired with this instrument were processed using geostatistics to yield sub-water level elevation values with centimetre accuracy (±15 cm). The developed database framework, modelling philosophy and metadata protocol allow for continued, collaborative model revision and expansion, given additional elevation or other ancillary data.

  6. Fuento: functional enrichment for bioinformatics.

    PubMed

    Weichselbaum, David; Zagrovic, Bojan; Polyansky, Anton A

    2017-08-15

    The currently available functional enrichment software focuses mostly on gene expression analysis, and server- and graphical-user-interface-based tools with specific scope dominate the field. Here we present an efficient, user-friendly, multifunctional command-line-based functional enrichment tool (fu-en-to), tailored for the bioinformatics researcher. Source code and binaries are freely available for download at github.com/DavidWeichselbaum/fuento, implemented in C++ and supported on Linux and OS X. Contact: newant@gmail.com or bojan.zagrovic@univie.ac.at.
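    At the core of any functional enrichment tool is a hypergeometric-style over-representation test; a minimal sketch of the statistic (a generic illustration, not fuento's implementation):

```python
from math import comb

def enrichment_p(study_hits, study_n, pop_hits, pop_n):
    """Hypergeometric upper-tail P-value for one functional term:
    the probability of drawing >= study_hits annotated genes in a
    study set of study_n genes, when pop_hits of the pop_n genes
    in the background carry the annotation."""
    return sum(comb(pop_hits, k) * comb(pop_n - pop_hits, study_n - k)
               for k in range(study_hits, min(study_n, pop_hits) + 1)
               ) / comb(pop_n, study_n)

# 8 of 10 study genes carry a term annotated in only 50 of 1000 genes
p = enrichment_p(8, 10, 50, 1000)
print(f"{p:.3g}")  # far below any usual significance threshold
```

    Real tools additionally correct such P-values for testing many terms at once (e.g. Benjamini-Hochberg).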

  7. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    PubMed

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. Among the most common biologically inspired techniques are genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics-based problems.
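    A minimal GA, with the selection/crossover/mutation loop the survey describes, applied to the standard OneMax benchmark (a generic sketch, not any surveyed application; parameter values are arbitrary):

```python
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, generations=40,
                      mutation_rate=0.05, seed=1):
    """Toy GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Maximise the number of 1-bits ("OneMax"), a standard GA benchmark
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```

    Swapping in a domain-specific fitness function (e.g. an alignment or docking score) is what turns this generic loop into the bioinformatics applications the survey covers.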

  8. Bioinformatics in Africa: The Rise of Ghana?

    PubMed Central

    Karikari, Thomas K.

    2015-01-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics. PMID:26378921

  9. Our Built and Natural Environments

    EPA Pesticide Factsheets

    Our Built and Natural Environments summarizes research that shows how development patterns affect the environment and human health, and how certain development patterns can reduce the environmental and human health impacts of development.

  10. Smart built-in test

    NASA Technical Reports Server (NTRS)

    Richards, Dale W.

    1990-01-01

    The work which built-in test (BIT) is asked to perform in today's electronic systems increases with every insertion of new technology or introduction of tighter performance criteria. Yet the basic purpose remains unchanged -- to determine with high confidence the operational capability of that equipment. Achievement of this level of BIT performance requires the management and assimilation of a large amount of data, both realtime and historical. Smart BIT has taken advantage of advanced techniques from the field of artificial intelligence (AI) in order to meet these demands. The Smart BIT approach enhances traditional functional BIT by utilizing AI techniques to incorporate environmental stress data, temporal BIT information and maintenance data, and realtime BIT reports into an integrated test methodology for increased BIT effectiveness and confidence levels. Future research in this area will incorporate onboard fault-logging of BIT output, stress data and Smart BIT decision criteria in support of a singular, integrated and complete test and maintenance capability. The state of this research is described along with a discussion of directions for future development.

  12. Reliability-oriented bioinformatic networks visualization.

    PubMed

    Aladağ, Ahmet Emre; Erten, Cesim; Sözdinler, Melih

    2011-06-01

    We present our protein-protein interaction (PPI) network visualization system RobinViz (reliability-oriented bioinformatic networks visualization). Notable features of the system include clustering of the PPI network based on gene ontology (GO) annotations or biclustered gene expression data, a clustered visualization model based on a central/peripheral duality, layout algorithms specialized for interaction reliabilities represented as weights, and completely automated data acquisition and processing. RobinViz is free, open-source software released under the GPL. It is written in C++ and Python, and consists of almost 30 000 lines of code, excluding the employed libraries. Source code, user manual and other Supplementary Material are available for download at http://code.google.com/p/robinviz/.

  13. Bioinformatics for personal genome interpretation.

    PubMed

    Capriotti, Emidio; Nehrt, Nathan L; Kann, Maricel G; Bromberg, Yana

    2012-07-01

    An international consortium released the first draft sequence of the human genome 10 years ago. Although the analysis of this data has suggested the genetic underpinnings of many diseases, we have not yet been able to fully quantify the relationship between genotype and phenotype. Thus, a major current effort of the scientific community focuses on evaluating individual predispositions to specific phenotypic traits given their genetic backgrounds. Many resources aim to identify and annotate the specific genes responsible for the observed phenotypes. Some of these use intra-species genetic variability as a means for better understanding this relationship. In addition, several online resources are now dedicated to collecting single nucleotide variants and other types of variants, and annotating their functional effects and associations with phenotypic traits. This information has enabled researchers to develop bioinformatics tools to analyze the rapidly increasing amount of newly extracted variation data and to predict the effect of uncharacterized variants. In this work, we review the most important developments in the field--the databases and bioinformatics tools that will be of utmost importance in our concerted effort to interpret the human variome.

  14. [Bioinformatics: a key role in oncology].

    PubMed

    Olivier, Timothée; Chappuis, Pierre; Tsantoulis, Petros

    2016-05-18

    Bioinformatics is essential in clinical oncology and research. Combining biology, computer science and mathematics, bioinformatics aims to derive useful information from clinical and biological data, often poorly structured, at a large scale. Bioinformatics approaches have reclassified certain cancers based on their molecular and biological presentation, improving treatment selection. Many molecular signatures have been developed and, after validation, some are now usable in clinical practice. Other applications could facilitate daily practice, reduce the risk of error and increase the precision of medical decision-making. Bioinformatics must evolve in accordance with ethical considerations and requires multidisciplinary collaboration. Its application depends on a sound technical foundation that meets strict quality requirements.

  15. Multichannel Analyzer Built from a Microcomputer.

    ERIC Educational Resources Information Center

    Spencer, C. D.; Mueller, P.

    1979-01-01

    Describes a multichannel analyzer built using eight-bit S-100 bus microcomputer hardware. The output modes are an oscilloscope display, print data, and send data to another computer. Discusses the system's hardware, software, costs, and advantages relative to commercial multichannels. (Author/GA)

  17. Nuclear reactors built, being built, or planned, 1991

    SciTech Connect

    Simpson, B.

    1992-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1991. The book is divided into three major sections: Section 1 consists of a reactor locator map and reactor tables; Section 2 includes nuclear reactors that are operating, being built, or planned; and Section 3 includes reactors that have been shut down permanently or dismantled. Sections 2 and 3 contain the following classification of reactors: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is an American company -- working either independently or in cooperation with a foreign company (Part 4, in each section). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  18. Nuclear reactors built, being built, or planned 1996

    SciTech Connect

    1997-08-01

    This publication contains unclassified information about facilities, built, being built, or planned in the United States for domestic use or export as of December 31, 1996. The Office of Scientific and Technical Information, U.S. Department of Energy, gathers this information annually from Washington headquarters, and field offices of DOE; from the U.S. Nuclear Regulatory Commission (NRC); from the U. S. reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from U.S. and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled.

  19. A Scientific Software Product Line for the Bioinformatics domain.

    PubMed

    Costa, Gabriella Castro B; Braga, Regina; David, José Maria N; Campos, Fernanda

    2015-08-01

    Most specialized users (scientists) of bioinformatics applications do not have formal training in software development. A Software Product Line (SPL) employs the concept of reuse, being defined as a set of systems that are developed from a common set of base artifacts. In some contexts, such as bioinformatics applications, it is advantageous to develop a collection of related software products using the SPL approach. If software products are similar enough, it is possible to predict their commonalities and differences and then reuse the common features to support the development of new applications in the bioinformatics area. This paper presents the PL-Science approach, which considers the context of SPL and ontology in order to assist scientists in defining a scientific experiment and specifying a workflow that encompasses the bioinformatics applications of a given experiment. The paper also focuses on the use of ontologies to enable the use of Software Product Lines in biological domains. In the context of this paper, a Scientific Software Product Line (SSPL) differs from a Software Product Line in that an SSPL uses an abstract scientific workflow model. This workflow is defined according to a scientific domain, and using this abstract workflow model the products (scientific applications/algorithms) are instantiated. Through the use of ontology as a knowledge representation model, we can impose domain restrictions and add semantic aspects in order to facilitate the selection and organization of bioinformatics workflows in a Scientific Software Product Line. The use of ontologies enables not only the expression of formal restrictions but also inferences on these restrictions, considering that a scientific domain needs a formal specification. This paper presents the development of the PL-Science approach, encompassing a methodology and an infrastructure, and also presents an evaluation of the approach.

  20. Sequence database versioning for command line and Galaxy bioinformatics servers.

    PubMed

    Dooley, Damion M; Petkau, Aaron J; Van Domselaar, Gary; Hsiao, William W L

    2016-04-15

    There are various reasons for rerunning bioinformatics tools and pipelines on sequencing data, including reproducing a past result, validating a new tool or workflow against a known dataset, or tracking the impact of database changes. For identical results to be achieved, regularly updated reference sequence databases must be versioned and archived. Database administrators have tried to meet these requirements by supplying users with one-off versions of databases, but these are time-consuming to set up and are inconsistent across resources. Disk storage and data backup performance has also discouraged maintaining multiple versions of databases, since databases such as NCBI nr can consume 50 GB or more of disk space per version, with growth rates that parallel Moore's law. Our end-to-end solution combines our own Kipper software package (a simple key-value large-file versioning system) with BioMAJ (software for downloading sequence databases) and Galaxy (a web-based bioinformatics data processing platform). Available versions of databases can be recalled and used by command-line and Galaxy users. The Kipper data store format makes publishing curated FASTA databases convenient, since in most cases it can store a range of versions in a file only marginally larger than the latest version. Kipper v1.0.0 and the Galaxy Versioned Data tool are written in Python and released as free and open source software available at https://github.com/Public-Health-Bioinformatics/kipper and https://github.com/Public-Health-Bioinformatics/versioned_data, respectively; detailed setup instructions can be found at https://github.com/Public-Health-Bioinformatics/versioned_data/blob/master/doc/setup.md. Contact: Damion.Dooley@Bccdc.Ca or William.Hsiao@Bccdc.Ca. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
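
    The idea of reconstructing any past database version from a single compact store can be sketched as follows. This is a hypothetical illustration of key-value versioning in the spirit of Kipper, not its actual file format or API:

```python
# Hypothetical sketch of key-value database versioning in the spirit of Kipper.
# Each record (e.g., a FASTA sequence keyed by its ID) is stamped with the
# version in which it appeared and, if applicable, the version in which it was
# removed, so any past version can be rebuilt from one store.

class VersionedStore:
    def __init__(self):
        self.rows = []      # tuples: (key, value, added_in, removed_in or None)
        self.version = 0

    def commit(self, snapshot):
        """Record a new version from a {key: value} snapshot; return its number."""
        self.version += 1
        live = {k: i for i, (k, v, a, r) in enumerate(self.rows) if r is None}
        for key, idx in live.items():
            k, v, a, r = self.rows[idx]
            if key not in snapshot:                  # record was deleted
                self.rows[idx] = (k, v, a, self.version)
            elif snapshot[key] != v:                 # record was changed
                self.rows[idx] = (k, v, a, self.version)
                self.rows.append((key, snapshot[key], self.version, None))
        for key, value in snapshot.items():
            if key not in live:                      # record was added
                self.rows.append((key, value, self.version, None))
        return self.version

    def checkout(self, version):
        """Rebuild the {key: value} content of any past version."""
        return {k: v for (k, v, a, r) in self.rows
                if a <= version and (r is None or r > version)}

store = VersionedStore()
v1 = store.commit({"seq1": "ATGC", "seq2": "GGGA"})
v2 = store.commit({"seq1": "ATGC", "seq2": "GGGT", "seq3": "TTAA"})
assert store.checkout(v1) == {"seq1": "ATGC", "seq2": "GGGA"}
```

    Because only changed records carry new entries, the store grows roughly with the size of the diffs between versions rather than with the number of versions, which is the property the abstract highlights.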

  1. Atlas - a data warehouse for integrative bioinformatics.

    PubMed

    Shah, Sohrab P; Huang, Yong; Xu, Tao; Yuen, Macaire M S; Ling, John; Ouellette, B F Francis

    2005-02-21

    We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First, Atlas stores data of similar types using common data

  2. Genomics and Bioinformatics Resources for Crop Improvement

    PubMed Central

    Mochida, Keiichi; Shinozaki, Kazuo

    2010-01-01

    Recent remarkable innovations in platforms for omics-based research and application development provide crucial resources to promote research in model and applied plant species. A combinatorial approach using multiple omics platforms and integration of their outcomes is now an effective strategy for clarifying molecular systems integral to improving plant productivity. Furthermore, promotion of comparative genomics among model and applied plants allows us to grasp the biological properties of each species and to accelerate gene discovery and functional analyses of genes. Bioinformatics platforms and their associated databases are also essential for the effective design of approaches making the best use of genomic resources, including resource integration. We review recent advances in research platforms and resources in plant omics together with related databases and advances in technology. PMID:20208064

  3. Development of computations in bioscience and bioinformatics and its application: review of the Symposium of Computations in Bioinformatics and Bioscience (SCBB06).

    PubMed

    Deng, Youping; Ni, Jun; Zhang, Chaoyang

    2006-12-12

    The first symposium of computations in bioinformatics and bioscience (SCBB06) was held in Hangzhou, China on June 21-22, 2006. Twenty-six peer-reviewed papers were selected for publication in this special issue of BMC Bioinformatics. These papers cover a broad range of topics including bioinformatics theories, algorithms, applications and tool development. The main technical topics contain gene expression analysis, sequence analysis, genome analysis, phylogenetic analysis, gene function prediction, molecular interaction and system biology, genetics and population study, immune strategy, protein structure prediction and proteomics.

  4. Biology in 'silico': The Bioinformatics Revolution.

    ERIC Educational Resources Information Center

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project, likening it to a genetics Swiss Army Knife with many different uses in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  6. Fuzzy Logic in Medicine and Bioinformatics

    PubMed Central

    Torres, Angela; Nieto, Juan J.

    2006-01-01

    The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes). PMID:16883057

  7. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    ERIC Educational Resources Information Center

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…

  8. Rapid Development of Bioinformatics Education in China

    ERIC Educational Resources Information Center

    Zhong, Yang; Zhang, Xiaoyan; Ma, Jian; Zhang, Liang

    2003-01-01

    As the Human Genome Project experiences remarkable success and a flood of biological data is produced, bioinformatics becomes a very "hot" cross-disciplinary field, yet experienced bioinformaticians are urgently needed worldwide. This paper summarises the rapid development of bioinformatics education in China, especially related…

  9. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Cancer.gov

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  11. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
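
    The dynamic-programming formulation the article describes can be sketched as a minimal global-alignment scorer (Needleman-Wunsch). The scoring values here (match = 1, mismatch = -1, gap = -2) are illustrative choices, not taken from the article:

```python
# Minimal Needleman-Wunsch global alignment score computed by dynamic
# programming. Scoring values are illustrative, not taken from the article.

def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    # dp[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, n + 1):
        dp[0][j] = j * gap          # b[:j] aligned against gaps only
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[m][n]

assert nw_score("ACGT", "ACGT") == 4   # four matches, no gaps
```

    Recovering the alignment itself (not just its score) requires a traceback through the dp table, which is the natural follow-up exercise in such a course.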

  14. Bioinformatic approaches to interrogating vitamin D receptor signaling.

    PubMed

    Campbell, Moray J

    2017-09-15

    Bioinformatics applies unbiased approaches to develop statistically robust insight into health and disease. At the global, or "20,000-foot," view, bioinformatic analyses of vitamin D receptor (NR1I1/VDR) signaling can measure where the VDR gene or protein exerts a genome-wide significant impact on biology; VDR is significantly implicated in bone biology and immune systems, but not in cancer. With a more VDR-centric, or "2,000-foot," view, bioinformatic approaches can interrogate events downstream of VDR activity. Integrative approaches can combine VDR ChIP-Seq in cell systems for which significant volumes of publicly available data exist. For example, VDR ChIP-Seq studies can be combined with genome-wide association studies to reveal significant associations to immune phenotypes. Similarly, VDR ChIP-Seq can be combined with data from The Cancer Genome Atlas (TCGA) to infer the impact of VDR target genes on cancer progression. Therefore, bioinformatic approaches can reveal which aspects of VDR downstream networks are significantly related to disease or phenotype. Copyright © 2017 The Author. Published by Elsevier B.V. All rights reserved.

  15. The 2016 Bioinformatics Open Source Conference (BOSC)

    PubMed Central

    Harris, Nomi L.; Cock, Peter J.A.; Chapman, Brad; Fields, Christopher J.; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science. PMID:27781083

  16. Bioinformatics clouds for big data manipulation

    PubMed Central

    2012-01-01

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers: This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. PMID:23190475

  17. The 2016 Bioinformatics Open Source Conference (BOSC).

    PubMed

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  18. Schools Built with Fallout Shelter.

    ERIC Educational Resources Information Center

    Office of Civil Defense (DOD), Washington, DC.

    Fallout protection can be built into a school building with little or no additional cost, using areas that are in continual use in the normal functioning of the building. A general discussion of the principles of shelter design is given along with photographs, descriptions, drawings, and cost analysis for a number of recently constructed schools…

  19. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    SciTech Connect

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    courses or independent research projects requires infrastructure for organizing and assessing student work. Here, we present a new platform for faculty to keep current with the rapidly changing field of bioinformatics, the Integrated Microbial Genomes Annotation Collaboration Toolkit (IMG-ACT). It was developed by instructors from both research-intensive and predominately undergraduate institutions in collaboration with the Department of Energy-Joint Genome Institute (DOE-JGI) as a means to innovate and update undergraduate education and faculty development. The IMG-ACT program provides a cadre of tools, including access to a clearinghouse of genome sequences, bioinformatics databases, data storage, instructor course management, and student notebooks for organizing the results of their bioinformatic investigations. In the process, IMG-ACT makes it feasible to provide undergraduate research opportunities to a greater number and diversity of students, in contrast to the traditional mentor-to-student apprenticeship model for undergraduate research, which can be too expensive and time-consuming to provide for every undergraduate. The IMG-ACT serves as the hub for the network of faculty and students that use the system for microbial genome analysis. Open access of the IMG-ACT infrastructure to participating schools ensures that all types of higher education institutions can utilize it. With the infrastructure in place, faculty can focus their efforts on the pedagogy of bioinformatics, involvement of students in research, and use of this tool for their own research agenda. What the original faculty members of the IMG-ACT development team present here is an overview of how the IMG-ACT program has affected our development in terms of teaching and research with the hopes that it will inspire more faculty to get involved.

  20. A quick guide for building a successful bioinformatics community.

    PubMed

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-02-01

    "Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).

  1. Nuclear reactors built, being built, or planned, 1994

    SciTech Connect

    1995-07-01

    This document contains unclassified information about facilities built, being built, or planned in the United States for domestic use or export as of December 31, 1994. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map, tables of the characteristic and statistical data that follow, and a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company -- working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  2. Nuclear reactors built, being built, or planned: 1995

    SciTech Connect

    1996-08-01

    This report contains unclassified information about facilities built, being built, or planned in the US for domestic use or export as of December 31, 1995. The Office of Scientific and Technical Information, US Department of Energy, gathers this information annually from Washington headquarters and field offices of DOE; from the US Nuclear Regulatory Commission (NRC); from the US reactor manufacturers who are the principal nuclear contractors for foreign reactor locations; from US and foreign embassies; and from foreign governmental nuclear departments. The book consists of three divisions, as follows: (1) a commercial reactor locator map and tables of the characteristic and statistical data that follow; a table of abbreviations; (2) tables of data for reactors operating, being built, or planned; and (3) tables of data for reactors that have been shut down permanently or dismantled. The reactors are subdivided into the following parts: Civilian, Production, Military, Export, and Critical Assembly. Export reactor refers to a reactor for which the principal nuclear contractor is a US company--working either independently or in cooperation with a foreign company (Part 4). Critical assembly refers to an assembly of fuel and moderator that requires an external source of neutrons to initiate and maintain fission. A critical assembly is used for experimental measurements (Part 5).

  3. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    PubMed

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option for achieving this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services that provides programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
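
    The front-end/back-end job flow described above can be sketched with an in-memory queue standing in for the two web services. All names here are illustrative; the real BOWS exposes this flow as web services, not Python functions:

```python
# Schematic of a front-end/back-end job flow like the one the abstract
# describes, simulated in-process with a queue. Illustrative only: the real
# system mediates these steps through web services on an HPC cluster.
import queue

jobs, results = queue.Queue(), {}

def frontend_submit(job_id, tool, params):
    """Front-end role: a client submits a new job with its parameters."""
    jobs.put((job_id, tool, params))

def backend_worker():
    """Back-end role: a cluster-side worker polls for jobs and posts results."""
    while not jobs.empty():
        job_id, tool, params = jobs.get()
        results[job_id] = f"{tool} finished with {params}"

def frontend_result(job_id):
    """Front-end role: the client reads a finished result (None if pending)."""
    return results.get(job_id)

frontend_submit("job1", "blast", {"evalue": 1e-5})
backend_worker()
assert frontend_result("job1") is not None
```

    Decoupling submission from execution in this way is what lets the compute side live on a cluster while clients stay on ordinary machines.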

  4. Hidden in the Middle: Culture, Value and Reward in Bioinformatics.

    PubMed

    Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul

    2016-01-01

    Bioinformatics - the so-called shotgun marriage between biology and computer science - is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised 'outputs' in academia are often defined and rewarded by discipline. Bioinformatics, as an interdisciplinary bricolage, incorporates experts from various disciplinary cultures with their own distinct ways of working. Perceived problems of interdisciplinarity include difficulties of making explicit knowledge that is practical, theoretical, or cognitive. But successful interdisciplinary research also depends on an understanding of disciplinary cultures and value systems, often only tacitly understood by members of the communities in question. In bioinformatics, the 'parent' disciplines have different value systems; for example, what is considered worthwhile research by computer scientists can be thought of as trivial by biologists, and vice versa. This paper concentrates on the problems of reward and recognition described by scientists working in academic bioinformatics in the United Kingdom. We highlight problems that are a consequence of its cross-cultural make-up, recognising that the mismatches in knowledge in this borderland take place not just at the level of the practical, theoretical, or epistemological, but also at the cultural level too. The trend in big, interdisciplinary science is towards multiple authors on a single paper; in bioinformatics this has created hybrid or fractional scientists who find they are being positioned not just in-between established disciplines but also in-between as middle authors or, worse still, left off papers altogether.

  5. Computational Biology and Bioinformatics in Nigeria

    PubMed Central

    Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-01-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310

  6. Computational biology and bioinformatics in Nigeria.

    PubMed

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  7. Extending Asia Pacific bioinformatics into new realms in the "-omics" era.

    PubMed

    Ranganathan, Shoba; Eisenhaber, Frank; Tong, Joo Chuan; Tan, Tin Wee

    2009-12-03

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, dating back to 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 7-11, 2009 at Biopolis, Singapore. Besides bringing together scientists from the field of bioinformatics in this region, InCoB has actively engaged clinicians and researchers from the area of systems biology, to facilitate greater synergy between these two groups. InCoB2009 followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India), Hong Kong and Taipei (Taiwan), with InCoB2010 scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. The Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and symposia on Clinical Bioinformatics (CBAS), the Singapore Symposium on Computational Biology (SYMBIO) and training tutorials were scheduled prior to the scientific meeting, and provided ample opportunity for in-depth learning and special interest meetings for educators, clinicians and students. We provide a brief overview of the peer-reviewed bioinformatics manuscripts accepted for publication in this supplement, grouped into thematic areas. In order to facilitate scientific reproducibility and accountability, we have, for the first time, introduced minimum information criteria for our publications, including compliance with the Minimum Information about a Bioinformatics Investigation (MIABi) standard. As the regional research expertise in bioinformatics matures, we have delineated a minimum set of bioinformatics skills required for addressing the computational challenges of the "-omics" era.

  8. When cloud computing meets bioinformatics: a review.

    PubMed

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
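    The map/reduce decomposition the abstract refers to can be illustrated with a toy k-mer counter over sequencing reads (an illustrative sketch, not code from the paper; the function names and data are hypothetical, and a real deployment would distribute the two phases across a cluster):

```python
from collections import Counter
from itertools import chain

def map_phase(read, k=3):
    # "Map" step: emit (k-mer, 1) pairs for a single read.
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def reduce_phase(pairs):
    # "Reduce" step: sum the counts for each k-mer key.
    counts = Counter()
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["ATGGC", "TGGCA"]
pairs = chain.from_iterable(map_phase(r) for r in reads)
counts = reduce_phase(pairs)  # e.g. "TGG" appears in both reads
```

    In a framework such as Hadoop, the shuffle between the two phases groups pairs by key across machines; the local version above keeps the same structure in a few lines.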

  9. Built Environment Wind Turbine Roadmap

    SciTech Connect

    Smith, J.; Forsyth, T.; Sinclair, K.; Oteri, F.

    2012-11-01

    The market currently encourages built-environment wind turbine (BWT) deployment before the technology is ready for full-scale commercialization. To address this issue, industry stakeholders convened a Rooftop and Built-Environment Wind Turbine Workshop on August 11-12, 2010, at the National Wind Technology Center, located at the U.S. Department of Energy’s National Renewable Energy Laboratory in Boulder, Colorado. This report summarizes the workshop.

  10. BioWarehouse: a bioinformatics database warehouse toolkit

    PubMed Central

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D

    2006-01-01

    Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research.
Conclusion BioWarehouse embodies significant progress on the database integration problem for
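    The multi-database query described in the abstract (enzyme activities with no sequence in any source database) becomes a single SQL join once the sources share one schema. A minimal sketch using an in-memory SQLite database; the table and column names are hypothetical stand-ins, not BioWarehouse's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical miniature schema standing in for the warehouse's
# unified representation of enzyme data and sequence data.
cur.execute("CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY)")
cur.execute("CREATE TABLE sequence (ec_number TEXT)")
cur.executemany("INSERT INTO enzyme_activity VALUES (?)",
                [("1.1.1.1",), ("2.7.1.1",), ("6.3.4.5",)])
cur.executemany("INSERT INTO sequence VALUES (?)",
                [("1.1.1.1",), ("2.7.1.1",)])
# Cross-source query: count EC numbers with no known sequence.
cur.execute("""
    SELECT COUNT(*) FROM enzyme_activity a
    LEFT JOIN sequence s ON a.ec_number = s.ec_number
    WHERE s.ec_number IS NULL
""")
missing = cur.fetchone()[0]  # here, EC 6.3.4.5 has no sequence
```

    Without a shared warehouse, the same question requires per-database exports and ad hoc matching code; with one, it is a LEFT JOIN.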

  11. Bioclipse: an open source workbench for chemo- and bioinformatics

    PubMed Central

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl ES

    2007-01-01

    Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net. PMID:17316423

  12. Bioclipse: an open source workbench for chemo- and bioinformatics.

    PubMed

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl E S

    2007-02-22

    There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.

  13. Response of mollusc assemblages to climate variability and anthropogenic activities: a 4000-year record from a shallow bar-built lagoon system.

    PubMed

    Cerrato, Robert M; Locicero, Philip V; Goodbred, Steven L

    2013-10-01

    With their position at the interface between land and ocean and their fragile nature, lagoons are sensitive to environmental change, and it is reasonable to expect these changes to be recorded in well-preserved taxa such as molluscs. To test this, the 4000-year history of molluscs in Great South Bay, a bar-built lagoon, was reconstructed from 24 vibracores. Using X-radiography to identify shell layers, faunal counts, shell condition, organic content, and sediment type were measured in 325 samples. Sample age was estimated by interpolating 40 radiocarbon dates. K-means cluster analysis identified three molluscan assemblages: sand-associated and mud-associated groups, and a third associated with inlet areas. Redundancy and regression tree analyses indicated that significant transitions from the sand-associated to the mud-associated assemblage occurred over large portions of the bay about 650 and 294 years BP. The first date corresponds to the transition from the Medieval Warm Period to the Little Ice Age; this change in climate reduced the frequency of strong storms, likely leading to reduced barrier island breaching, greater bay enclosure, and fine-grained sediment accumulation. The second date marks the initiation of clear-cutting by European settlers, an activity that would have increased runoff of fine-grained material. The occurrence of the inlet assemblage in the western and eastern ends of the bay is consistent with a history of inlets in these areas, even though prior to Hurricane Sandy in 2012, no inlet had been present in the eastern bay for almost 200 years. The mud dominant, Mulinia lateralis, is a bivalve often associated with environmental disturbances. Its increased frequency over the past 300 years suggests that disturbances are more common in the bay now than in the past. Management activities maintaining the current barrier island state may be contributing to the sand-mud transition and to the bay's susceptibility to disturbances.

  14. Bioinformatics and the Undergraduate Curriculum Essay

    PubMed Central

    Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of bioinformatics as a new discipline has challenged many colleges and universities to keep current with their curricula, often in the face of static or dwindling resources. On the plus side, many bioinformatics modules and related databases and software programs are free and accessible online, and interdisciplinary partnerships between existing faculty members and their support staff have proved advantageous in such efforts. We present examples of strategies and methods that have been successfully used to incorporate bioinformatics content into undergraduate curricula. PMID:20810947

  15. Bioinformatics in Italy: BITS2011, the Eighth Annual Meeting of the Italian Society of Bioinformatics

    PubMed Central

    2012-01-01

    The BITS2011 meeting, held in Pisa on June 20-22, 2011, brought together more than 120 Italian researchers working in the field of Bioinformatics, as well as students in Bioinformatics, Computational Biology, Biology, Computer Sciences, and Engineering, representing a landscape of Italian bioinformatics research. This preface provides a brief overview of the meeting and introduces the peer-reviewed manuscripts that were accepted for publication in this Supplement. PMID:22536954

  16. No-boundary thinking in bioinformatics research

    PubMed Central

    2013-01-01

    Currently there are definitions from many agencies and research societies defining “bioinformatics” as deriving knowledge from computational analysis of large volumes of biological and biomedical data. Should this be the bioinformatics research focus? We will discuss this issue in this review article. We would like to promote the idea of supporting human-infrastructure (HI) with no-boundary thinking (NT) in bioinformatics (HINT). PMID:24192339

  17. New design incinerator being built

    SciTech Connect

    Not Available

    1980-09-01

    A $14 million garbage-burning facility is being built by Reedy Creek Utilities Co. in cooperation with DOE at Lake Buena Vista, Fla., on the edge of Walt Disney World. The nation's first large-volume slagging pyrolysis incinerator will burn municipal waste in a more beneficial way and supply 15% of the amusement park's energy demands. By studying the new incinerator's slag-producing capabilities, engineers hope to design similar facilities for isolating low-level nuclear wastes in inert, rocklike slag.

  18. Impacts of bioinformatics to medicinal chemistry.

    PubMed

    Chou, Kuo-Chen

    2015-01-01

    Facing the explosive growth of biological sequence data, such as protein/peptide and DNA/RNA sequences, generated in the post-genomic age, many bioinformatic and mathematical approaches, as well as physicochemical concepts, have been introduced to derive useful information from these biological sequences in a timely manner, in order to stimulate the development of medical science and drug design. Meanwhile, because of the rapid penetration of these disciplines, medicinal chemistry is currently undergoing an unprecedented revolution. In this minireview, we summarize the progress by focusing on the following six aspects. (1) Use of the pseudo amino acid composition (PseAAC) to predict various attributes of protein/peptide sequences that are useful for drug development. (2) Use of the pseudo oligonucleotide composition (PseKNC) to do the same for DNA/RNA sequences. (3) Introduction of the multi-label approach to study systems whose constituent elements bear multiple characters and functions. (4) Use of graphical rules and "wenxiang" diagrams to analyze complicated biomedical systems. (5) Recent developments in identifying the interactions of drugs with their various types of target proteins in cellular networking. (6) The distorted key theory and its application in developing peptide drugs.

  19. Bioinformatics meets user-centred design: a perspective.

    PubMed

    Pavelin, Katrina; Cham, Jennifer A; de Matos, Paula; Brooksbank, Cath; Cameron, Graham; Steinbeck, Christoph

    2012-01-01

    Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.

  20. Bioinformatics Meets User-Centred Design: A Perspective

    PubMed Central

    Pavelin, Katrina; Cham, Jennifer A.; de Matos, Paula; Brooksbank, Cath; Cameron, Graham; Steinbeck, Christoph

    2012-01-01

    Designers have a saying that “the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years.” It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics. PMID:22807660

  1. Diagnostic biases in translational bioinformatics.

    PubMed

    Han, Henry

    2015-08-01

    With the surge of translational medicine and computational omics research, complex disease diagnosis increasingly relies on molecular signature detection driven by massive omics data. However, how to detect and prevent possible diagnostic biases in translational bioinformatics remains an unsolved problem, despite its importance in the coming era of personalized medicine. In this study, we comprehensively investigate the diagnostic bias problem by analyzing benchmark gene array, protein array, RNA-Seq, and miRNA-Seq data under the framework of support vector machines (SVMs) for different model selection methods. We further categorize the diagnostic biases into different types by conducting rigorous kernel matrix analysis and provide effective machine learning methods to overcome them. We have found that diagnostic biases occur for data with different distributions and for SVMs with different kernels. Moreover, we identify three types of diagnostic bias in SVM diagnostics: overfitting bias, label skewness bias, and underfitting bias, and present the corresponding causes through rigorous analysis. Compared with the overfitting and underfitting biases, the label skewness bias is more challenging to detect and overcome because its deceptive accuracy makes it easy to mistake for a normal diagnostic case. To tackle this problem, we propose a derivative component analysis based support vector machine (DCA-SVM) to conquer the label skewness bias by achieving results that rival clinical diagnostics. Our studies demonstrate that the diagnostic biases are mainly caused by three major factors, i.e. kernel selection, the signal amplification mechanism in high-throughput profiling, and training data label distribution. Moreover, the proposed DCA-SVM diagnosis provides a
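    The "deceptive accuracy" of the label skewness bias described above can be seen in a toy example (illustrative only, not from the paper): on a cohort with 95% healthy labels, a degenerate classifier that always predicts the majority class scores high accuracy while detecting no disease at all.

```python
# Skewed labels: 95 healthy samples, 5 disease samples.
labels = ["healthy"] * 95 + ["disease"] * 5
# Degenerate "classifier" that always predicts the majority class.
predictions = ["healthy"] * 100

# Overall accuracy looks excellent despite the model being useless.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# Sensitivity (recall on the disease class) exposes the bias.
true_pos = sum(p == y == "disease" for p, y in zip(predictions, labels))
sensitivity = true_pos / labels.count("disease")
```

    This is why class-aware metrics (sensitivity, specificity, balanced accuracy) rather than raw accuracy are needed when training labels are skewed.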

  2. Concentrated solar power in the built environment

    NASA Astrophysics Data System (ADS)

    Montenon, Alaric C.; Fylaktos, Nestor; Montagnino, Fabio; Paredes, Filippo; Papanicolas, Costas N.

    2017-06-01

    Solar concentration systems are usually deployed in large open spaces for electricity generation; they are rarely used to address the pressing energy needs of the built-environment sector. Fresnel technology offers interesting and challenging CSP energy pathways suitable for the built environment, owing to its relatively light weight (<30 kg/m²) and low windage. The Cyprus Institute (CyI) and Consorzio ARCA are cooperating in such a research program; we report here the construction and integration of a 71-kW Fresnel CSP system into the HVAC (Heating, Ventilation, and Air Conditioning) system of a recently constructed office and laboratory building, the Novel Technologies Laboratory (NTL). The multi-generation system will supply cooling, heating, and hot water to the NTL building as a demonstration project, part of the STS-MED program (Small Scale Thermal Solar District Units for Mediterranean Communities) financed by the European Commission under the European Neighbourhood and Partnership Instrument (ENPI) CBCMED program.

  3. Regulatory bioinformatics for food and drug safety.

    PubMed

    Healy, Marion J; Tong, Weida; Ostroff, Stephen; Eichler, Hans-Georg; Patak, Alex; Neuspiel, Margaret; Deluyker, Hubert; Slikker, William

    2016-10-01

    "Regulatory Bioinformatics" strives to develop and implement a standardized and transparent bioinformatic framework to support the implementation of existing and emerging technologies in regulatory decision-making. It has great potential to improve public health through the development and use of clinically important medical products and tools to manage the safety of the food supply. However, the application of regulatory bioinformatics also poses new challenges and requires new knowledge and skill sets. In the latest Global Coalition on Regulatory Science Research (GCRSR) governed conference, Global Summit on Regulatory Science (GSRS2015), regulatory bioinformatics principles were presented with respect to global trends, initiatives and case studies. The discussion revealed that datasets, analytical tools, skills and expertise are rapidly developing, in many cases via large international collaborative consortia. It also revealed that significant research is still required to realize the potential applications of regulatory bioinformatics. While there is significant excitement in the possibilities offered by precision medicine to enhance treatments of serious and/or complex diseases, there is a clear need for further development of mechanisms to securely store, curate and share data, integrate databases, and standardized quality control and data analysis procedures. A greater understanding of the biological significance of the data is also required to fully exploit vast datasets that are becoming available. The application of bioinformatics in the microbiological risk analysis paradigm is delivering clear benefits both for the investigation of food borne pathogens and for decision making on clinically important treatments. It is recognized that regulatory bioinformatics will have many beneficial applications by ensuring high quality data, validated tools and standardized processes, which will help inform the regulatory science community of the requirements

  4. Data Mining for Grammatical Inference with Bioinformatics Criteria

    NASA Astrophysics Data System (ADS)

    López, Vivian F.; Aguilar, Ramiro; Alonso, Luis; Moreno, María N.; Corchado, Juan M.

    In this paper we describe both theoretical and practical results of a novel data mining process that combines hybrid techniques of association analysis with classical sequencing algorithms from genomics to generate grammatical structures of a specific language. We used an application of a compiler generator system that allows the development of a practical application within the area of grammarware, where the concepts of language analysis are applied to other disciplines, such as bioinformatics. The tool allows the complexity of the obtained grammar to be measured automatically from textual data. A technique of incremental discovery of sequential patterns is presented to obtain simplified production rules, which are then compacted with bioinformatics criteria to make up a grammar.
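    The core step, mining frequent sequential patterns from token sequences and compacting them into production rules, can be sketched in miniature (an illustrative stand-in, not the authors' algorithm; the frequent-bigram criterion and rule naming are assumptions for the example):

```python
from collections import Counter

def frequent_bigrams(sequences, min_support=2):
    # Count adjacent token pairs across all sequences: a minimal
    # stand-in for sequential-pattern discovery.
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return {pair for pair, n in counts.items() if n >= min_support}

def to_rules(bigrams):
    # Compact each frequent pair into a production rule of a toy grammar.
    return sorted(f"S{i} -> {a} {b}"
                  for i, (a, b) in enumerate(sorted(bigrams)))

seqs = [["NOUN", "VERB", "NOUN"], ["DET", "NOUN", "VERB"]]
rules = to_rules(frequent_bigrams(seqs))
```

    A real grammar-inference pipeline would iterate this step, replacing frequent patterns with nonterminals and re-mining, which is where the "incremental" aspect enters.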

  5. State of the nation in data integration for bioinformatics.

    PubMed

    Goble, Carole; Stevens, Robert

    2008-10-01

    Data integration is a perennial issue in bioinformatics, with many systems being developed and many technologies offered as a panacea for its resolution. The fact that it is still a problem indicates a persistence of underlying issues. Progress has been made, but we should ask "what lessons have been learnt?", and "what still needs to be done?" Semantic Web and Web 2.0 technologies are the latest to find traction within bioinformatics data integration. Now we can ask whether the Semantic Web, mashups, or their combination, have the potential to help. This paper is based on the opening invited talk by Carole Goble given at the Health Care and Life Sciences Data Integration for the Semantic Web Workshop collocated with WWW2007. The paper expands on that talk. We attempt to place some perspective on past efforts, highlight the reasons for success and failure, and indicate some pointers to the future.

  6. Legal issues for chem-bioinformatics models.

    PubMed

    Duardo-Sanchez, Aliuska; Gonzalez-Diaz, Humberto

    2013-01-01

    Chem-bioinformatic models connect the chemical structure of drugs and/or targets (protein, gene, RNA, microorganism, tissue, disease, ...) with the biological activity of the drug against that target. On the other hand, a systematic judicial framework is needed to provide appropriate and relevant guidance for addressing the various computing techniques applied to scientific research at the frontiers of the biosciences. This article reviews both the use of model predictions for regulatory purposes and how to protect, in legal terms, the models of molecular systems per se, as well as the software used to obtain them. First we review: i) models as a tool for regulatory purposes, ii) organizations involved with validation of models, iii) regulatory guidelines and documents for models, iv) models for human health and environmental endpoints, and v) difficulties in the validation of models, among other issues. Next, we focus on the legal protection of models and software, including a short summary of topics and methods for the legal protection of computer software. We close the review with a section on taxes related to software use.

  7. Bioinformatic Approaches to Metabolic Pathways Analysis

    PubMed Central

    Maudsley, Stuart; Chadwick, Wayne; Wang, Liyun; Zhou, Yu; Martin, Bronwen; Park, Sung-Soo

    2015-01-01

    The growth and development in the last decade of accurate and reliable mass data collection techniques has greatly enhanced our comprehension of cell signaling networks and pathways. At the same time however, these technological advances have also increased the difficulty of satisfactorily analyzing and interpreting these ever-expanding datasets. At the present time, multiple diverse scientific communities including molecular biological, genetic, proteomic, bioinformatic, and cell biological, are converging upon a common endpoint, that is, the measurement, interpretation, and potential prediction of signal transduction cascade activity from mass datasets. Our ever increasing appreciation of the complexity of cellular or receptor signaling output and the structural coordination of intracellular signaling cascades has to some extent necessitated the generation of a new branch of informatics that more closely associates functional signaling effects to biological actions and even whole-animal phenotypes. The ability to untangle and hopefully generate theoretical models of signal transduction information flow from transmembrane receptor systems to physiological and pharmacological actions may be one of the greatest advances in cell signaling science. In this overview, we shall attempt to assist the navigation into this new field of cell signaling and highlight several methodologies and technologies to appreciate this exciting new age of signal transduction. PMID:21870222

  8. Evolution of web services in bioinformatics.

    PubMed

    Neerincx, Pieter B T; Leunissen, Jack A M

    2005-06-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and for the interfaces of the tools, this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
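    The shift the abstract describes, from screen-scraping HTML forms to chaining XML-emitting services, can be sketched with two mock services (both functions and their outputs are hypothetical, for illustration only):

```python
import xml.etree.ElementTree as ET

def search_service(sequence):
    # Hypothetical first service: a similarity search returning
    # its hits as a machine-readable XML document.
    root = ET.Element("hits", query=sequence)
    for hit_id in ("P12345", "Q67890"):
        ET.SubElement(root, "hit", id=hit_id)
    return ET.tostring(root, encoding="unicode")

def annotation_service(xml_hits):
    # Second service consumes the XML of the first directly; no
    # brittle parsing of human-oriented HTML pages is needed.
    ids = [h.get("id") for h in ET.fromstring(xml_hits).iter("hit")]
    return {i: f"annotation-for-{i}" for i in ids}

# The pipeline: output of one service is the input of the next.
annotations = annotation_service(search_service("ATGGCC"))
```

    Because each stage exchanges structured XML rather than rendered pages, workflow engines can compose such services into pipelines automatically.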

  9. Bioinformatics for analysis of poxvirus genomes.

    PubMed

    Da Silva, Melissa; Upton, Chris

    2012-01-01

    In recent years, there have been numerous unprecedented technological advances in the field of molecular biology; these include DNA sequencing, mass spectrometry of proteins, and microarray analysis of mRNA transcripts. Perhaps, however, it is the area of genomics, which has now generated the complete genome sequences of more than 100 poxviruses, that has had the greatest impact on the average virology researcher because the DNA sequence data is in constant use in many different ways by almost all molecular virologists. As this data resource grows, so does the importance of the availability of databases and software tools to enable the bench virologist to work with and make use of this (valuable/expensive) DNA sequence information. Thus, providing researchers with intuitive software to first select and reformat genomics data from large databases, second, to compare/analyze genomics data, and third, to view and interpret large and complex sets of results has become pivotal in enabling progress to be made in modern virology. This chapter is directed at the bench virologist and describes the software required for a number of common bioinformatics techniques that are useful for comparing and analyzing poxvirus genomes. In a number of examples, we also highlight the Viral Orthologous Clusters database system and integrated tools that we developed for the management and analysis of complete viral genomes.

  10. Bioinformatic approaches to metabolic pathways analysis.

    PubMed

    Maudsley, Stuart; Chadwick, Wayne; Wang, Liyun; Zhou, Yu; Martin, Bronwen; Park, Sung-Soo

    2011-01-01

    The growth and development in the last decade of accurate and reliable mass data collection techniques has greatly enhanced our comprehension of cell signaling networks and pathways. At the same time however, these technological advances have also increased the difficulty of satisfactorily analyzing and interpreting these ever-expanding datasets. At the present time, multiple diverse scientific communities including molecular biological, genetic, proteomic, bioinformatic, and cell biological, are converging upon a common endpoint, that is, the measurement, interpretation, and potential prediction of signal transduction cascade activity from mass datasets. Our ever increasing appreciation of the complexity of cellular or receptor signaling output and the structural coordination of intracellular signaling cascades has to some extent necessitated the generation of a new branch of informatics that more closely associates functional signaling effects to biological actions and even whole-animal phenotypes. The ability to untangle and hopefully generate theoretical models of signal transduction information flow from transmembrane receptor systems to physiological and pharmacological actions may be one of the greatest advances in cell signaling science. In this overview, we shall attempt to assist the navigation into this new field of cell signaling and highlight several methodologies and technologies to appreciate this exciting new age of signal transduction.

  11. Built-Environment Report Summary

    SciTech Connect

    Baring-Gould, Ian; Fields, Jason; Preus, Robert; Oteri, Frank

    2016-06-14

    Built-environment wind turbine (BEWT) projects are wind energy projects that are constructed on, in, or near buildings. These projects present an opportunity for distributed, low-carbon generation combined with highly visible statements on sustainability, but the BEWT niche of the wind industry is still developing and is relatively less mature than the utility-scale wind or conventional ground-based distributed wind sectors. The findings presented in this presentation cannot be extended to wind energy deployments in general because of the large difference in application and technology maturity. This presentation summarizes the results of a report investigating the current state of the BEWT industry by reviewing available literature on BEWT projects as well as interviewing project owners on their experiences deploying and operating the technology. The authors generated a series of case studies that outlines the pertinent project details, project outcomes, and lessons learned.

  12. Bioinformatics process management: information flow via a computational journal

    PubMed Central

    Feagan, Lance; Rohrer, Justin; Garrett, Alexander; Amthauer, Heather; Komp, Ed; Johnson, David; Hock, Adam; Clark, Terry; Lushington, Gerald; Minden, Gary; Frost, Victor

    2007-01-01

    This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features determined critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples. PMID:18053179

  13. A comparison of common programming languages used in bioinformatics.

    PubMed

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
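    The implementations benchmarked in this record are available from the project site rather than reproduced here; as a minimal illustration of one of the three methods, the following is a sketch of the Sellers algorithm (semi-global approximate string matching by dynamic programming) in Python. The function name and test strings are our own, not taken from the benchmark suite.

    ```python
    def sellers(pattern, text):
        """Sellers algorithm: smallest edit distance between `pattern`
        and any substring of `text` (approximate string matching)."""
        m = len(pattern)
        prev = list(range(m + 1))            # column for the empty text prefix
        best = prev[m]
        for ch in text:
            curr = [0]                       # row 0 stays 0: a match may start anywhere
            for i in range(1, m + 1):
                cost = 0 if pattern[i - 1] == ch else 1
                curr.append(min(prev[i] + 1,          # gap in the text
                                curr[i - 1] + 1,      # gap in the pattern
                                prev[i - 1] + cost))  # match or substitution
            prev = curr
            best = min(best, curr[m])        # best ending at this text position
        return best
    ```

    Because only the previous column is retained, memory use is linear in the pattern length, which is one reason this algorithm suits a memory/speed benchmark.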

  14. Platinum-related deep levels in silicon and their passivation by atomic hydrogen using a home-built automated DLTS system

    NASA Astrophysics Data System (ADS)

    Reddy, B. P. N.; Reddy, P. N.; Pandu Rangaiah, S. V.

    1996-09-01

    An inexpensive automated DLTS system has been developed in modular form, consisting of modules such as a capacitance meter, pulse generator, DLTS system timing controller, data acquisition system, PID temperature controller, and a cryostat with an LN2 flow control facility. These modules, except the capacitance meter and pulse generator, have been designed and fabricated in the laboratory. They are integrated and interfaced to a PC AT/386 computer. Software has been developed to run the spectrometer, collect data, and perform off-line data processing to extract deep level parameters such as activation energy, capture cross-section and density. The system has been used to study the deep levels of platinum in n-type silicon and their passivation by atomic hydrogen. The estimated activation energies of the two acceptor levels are Ec-0.280 eV and Ec-0.522 eV, and their capture cross sections are 2.2E-15 cm^2 and 4.3E-15 cm^2 respectively. These levels are found to be reactivated when the hydrogenated samples are annealed in the temperature range 350-500 degrees Celsius. The mechanisms of passivation and reactivation of these levels are discussed.
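    The off-line extraction of an activation energy in DLTS rests on standard Arrhenius analysis: the thermal emission rate of a trap follows e(T) ∝ T^2·exp(-Ea/kT), so the slope of ln(e/T^2) versus 1/(kT) gives the trap depth. A minimal sketch of that computation (the temperatures and the two-point fit are illustrative; the paper's software presumably fits many rate-window points):

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant in eV/K

    def activation_energy(t1, e1, t2, e2):
        """Trap depth Ea (eV) from two (temperature K, emission rate 1/s) pairs,
        using the slope of ln(e/T^2) versus 1/(kT)."""
        slope = ((math.log(e2 / t2**2) - math.log(e1 / t1**2))
                 / (1.0 / (K_B * t2) - 1.0 / (K_B * t1)))
        return -slope

    # Synthetic emission rates for a hypothetical 0.280 eV level (prefactor omitted,
    # since it cancels in the two-point slope).
    def rate(t, ea):
        return t**2 * math.exp(-ea / (K_B * t))
    ```

    Recovering the assumed 0.280 eV depth from two synthetic points is a quick self-check of the arithmetic.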

  15. A Novel Ideal Radionuclide Imaging System for Non-invasively Cell Monitoring built on Baculovirus Backbone by Introducing Sleeping Beauty Transposon

    PubMed Central

    Lv, Jing; Pan, Yu; Ju, Huijun; Zhou, Jinxin; Cheng, Dengfeng; Shi, Hongcheng; Zhang, Yifan

    2017-01-01

    Sleeping Beauty (SB) transposon is an attractive tool in stable transgene integration both in vitro and in vivo; and we introduced SB transposon into recombinant sodium-iodide symporter baculovirus system (Bac-NIS system) to facilitate long-term expression of recombinant sodium-iodide symporter. In our study, two hybrid baculovirus systems (Bac-eGFP-SB-NeoR and Bac-NIS-SB-NeoR) were successfully constructed and used to infect U87 glioma cells. After G418 selection screening, the Bac-eGFP-SB-NeoR-U87 cells remained eGFP positive, at the 18th and 196th day post transfection (96.03 ± 0.21% and 97.43 ± 0.81%), while eGFP positive population declined significantly at 18 days in cells transfected with unmodified baculovirus construct. NIS gene expression by Bac-NIS-SB-NeoR-U87 cells was also maintained for 28 weeks as determined by radioiodine uptake assay, reverse transcription-polymerase chain reaction (RT-PCR) and Western Blot (WB) assay. When transplanted in mice, Bac-NIS-SB-NeoR-U87 cells also expressed NIS gene stably as monitored by SPECT imaging for 43 days until the tumor-bearing mice were sacrificed. Herein, we showed that incorporation of SB in Bac-NIS system (hybrid Bac-NIS-SB-NeoR) can achieve a long-term transgene expression and can improve radionuclide imaging in cell tracking and monitoring in vivo. PMID:28262785

  16. A Novel Ideal Radionuclide Imaging System for Non-invasively Cell Monitoring built on Baculovirus Backbone by Introducing Sleeping Beauty Transposon.

    PubMed

    Lv, Jing; Pan, Yu; Ju, Huijun; Zhou, Jinxin; Cheng, Dengfeng; Shi, Hongcheng; Zhang, Yifan

    2017-03-06

    Sleeping Beauty (SB) transposon is an attractive tool in stable transgene integration both in vitro and in vivo; and we introduced SB transposon into recombinant sodium-iodide symporter baculovirus system (Bac-NIS system) to facilitate long-term expression of recombinant sodium-iodide symporter. In our study, two hybrid baculovirus systems (Bac-eGFP-SB-NeoR and Bac-NIS-SB-NeoR) were successfully constructed and used to infect U87 glioma cells. After G418 selection screening, the Bac-eGFP-SB-NeoR-U87 cells remained eGFP positive, at the 18th and 196th day post transfection (96.03 ± 0.21% and 97.43 ± 0.81%), while eGFP positive population declined significantly at 18 days in cells transfected with unmodified baculovirus construct. NIS gene expression by Bac-NIS-SB-NeoR-U87 cells was also maintained for 28 weeks as determined by radioiodine uptake assay, reverse transcription-polymerase chain reaction (RT-PCR) and Western Blot (WB) assay. When transplanted in mice, Bac-NIS-SB-NeoR-U87 cells also expressed NIS gene stably as monitored by SPECT imaging for 43 days until the tumor-bearing mice were sacrificed. Herein, we showed that incorporation of SB in Bac-NIS system (hybrid Bac-NIS-SB-NeoR) can achieve a long-term transgene expression and can improve radionuclide imaging in cell tracking and monitoring in vivo.

  17. Reverse Translational Bioinformatics: A Bioinformatics Assay Of Age, Gender And Clinical Biomarkers

    PubMed Central

    Fliss, Amit; Ragolsky, Micha; Rubin, Eitan

    2008-01-01

    In bioinformatics, clinical data is rarely used. Here, we propose using bedside data in basic research, via bioinformatics methodologies. To demonstrate the potential of this so-called Reverse Translational Bioinformatics approach, classical bioinformatics tools were applied to blood biomarker information obtained from a large-scale, open-access cross-sectional survey. The results of this analysis include a novel classification of blood biomarkers, critical ages at which basic biological processes may shift in humans, and a possible approach to exploring the gender specificity of these shifts. Changes in normal values were also shown to be non-linear, with most of the non-linearity attributed to the shift from growth to maturity. Together, these findings demonstrate that reverse translational bioinformatics may contribute to basic research. PMID:21347121

  18. Development and experimental evaluation of a thermography measurement system for real-time monitoring of comfort and heat rate exchange in the built environment

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Sabbatini, E.; Arnesano, M.

    2012-03-01

    A measurement system based on infrared (IR) thermovision technique (ITT) is developed for real-time estimation of room thermal variations and comfort conditions in office-type environment as a part of a feasibility study in the EU FP7 project ‘INTUBE’. An IR camera installed on the ceiling allows thermal image acquisition and post-processing is performed to derive mean surface temperatures, number of occupants and presence of other heat sources (e.g. computer) through detecting algorithms. A lumped parameter model of the room, developed in the Matlab/Simulink environment, receives as input the information extracted from image processing to compute room exchanged heat rate, air temperature and thermal comfort (PMV). The aim is to provide in real time the room thermal balance and comfort information for energy-saving purposes in an improved way with respect to traditional thermostats. Instantaneous information can be displayed for the users or eventually used for automatic HVAC control. The system is based on custom adaptation of a surveillance low-cost IR system with dedicated radiometric calibration. Experimental results show average absolute discrepancies in the order of 0.4 °C between calculated and measured air temperature during a time period of a day. A sensitivity analysis is performed in order to identify main uncertainty sources.
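    The lumped parameter model described above reduces the room to a small number of thermal nodes; in its simplest one-node form the air temperature obeys C·dT/dt = Q_gains - UA·(T - T_out). A sketch of that balance with forward-Euler integration follows; all parameter values are illustrative placeholders, not those of the INTUBE test room or its Matlab/Simulink model.

    ```python
    def room_temperature(t_out, q_gain, ua=150.0, c=5e6, t0=20.0, dt=60.0):
        """One-node lumped-parameter room model (hypothetical parameters):
        C * dT/dt = Q_gain - UA * (T - T_out), forward Euler with step dt [s].
        t_out: outdoor temperatures [degC], q_gain: internal heat gains [W]."""
        t = t0
        series = []
        for t_o, q in zip(t_out, q_gain):
            t += dt * (q - ua * (t - t_o)) / c   # net heat rate / thermal capacity
            series.append(t)
        return series
    ```

    At equilibrium the model settles at T_out + Q/UA, which is a convenient sanity check before feeding it occupancy and heat-source counts from the image processing.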

  19. A Guide to Bioinformatics for Immunologists

    PubMed Central

    Whelan, Fiona J.; Yap, Nicholas V. L.; Surette, Michael G.; Golding, G. Brian; Bowdish, Dawn M. E.

    2013-01-01

    Bioinformatics includes a suite of methods, which are cheap, approachable, and many of which are easily accessible without any sort of specialized bioinformatic training. Yet, despite this, bioinformatic tools are under-utilized by immunologists. Herein, we review a representative set of publicly available, easy-to-use bioinformatic tools using our own research on an under-annotated human gene, SCARA3, as an example. SCARA3 shares an evolutionary relationship with the class A scavenger receptors, but preliminary research showed that it was divergent enough that its function remained unclear. In our quest for more information about this gene – did it share gene sequence similarities to other scavenger receptors? Did it contain conserved protein domains? Where was it expressed in the human body? – we discovered the power and informative potential of publicly available bioinformatic tools designed for the novice in mind, which allowed us to hypothesize on the regulation, structure, and function of this protein. We argue that these tools are largely applicable to many facets of immunology research. PMID:24363654

  20. Carving a niche: establishing bioinformatics collaborations

    PubMed Central

    Lyon, Jennifer A.; Tennant, Michele R.; Messner, Kevin R.; Osterbur, David L.

    2006-01-01

    Objectives: The paper describes collaborations and partnerships developed between library bioinformatics programs and other bioinformatics-related units at four academic institutions. Methods: A call for information on bioinformatics partnerships was made via email to librarians who have participated in the National Center for Biotechnology Information's Advanced Workshop for Bioinformatics Information Specialists. Librarians from Harvard University, the University of Florida, the University of Minnesota, and Vanderbilt University responded and expressed willingness to contribute information on their institutions, programs, services, and collaborating partners. Similarities and differences in programs and collaborations were identified. Results: The four librarians have developed partnerships with other units on their campuses that can be categorized into the following areas: knowledge management, instruction, and electronic resource support. All primarily support freely accessible electronic resources, while other campus units deal with fee-based ones. These demarcations are apparent in resource provision as well as in subsequent support and instruction. Conclusions and Recommendations: Through environmental scanning and networking with colleagues, librarians who provide bioinformatics support can develop fruitful collaborations. Visibility is key to building collaborations, as is broad-based thinking in terms of potential partners. PMID:16888668

  1. Functional informatics: convergence and integration of automation and bioinformatics.

    PubMed

    Ilyin, Sergey E; Bernal, Alejandro; Horowitz, Daniel; Derian, Claudia K; Xin, Hong

    2004-09-01

    The biopharmaceutical industry is currently being presented with opportunities to improve research and business efficiency via automation and the integration of various systems. In the examples discussed, industrial high-throughput screening systems are integrated with functional tools and bioinformatics to facilitate target and biomarker identification and validation. These integrative functional approaches generate value-added opportunities by leveraging available automation and information technologies into new applications that are broadly applicable to different types of projects, and by improving the overall research and development and business efficiency via the integration of various systems.

  2. Automated programming for bioinformatics algorithm deployment.

    PubMed

    Alterovitz, Gil; Jiwaji, Adnaan; Ramoni, Marco F

    2008-02-01

    Many bioinformatics solutions suffer from the lack of a usable interface or platform from which results can be analyzed and visualized. Overcoming this hurdle would allow for more widespread dissemination of bioinformatics algorithms within the biological and medical communities. The algorithms should be accessible without extensive technical support or programming knowledge. Here, we propose a dynamic wizard platform that provides users with a Graphical User Interface (GUI) for most Java bioinformatics library toolkits. The application interface is generated in real-time based on the original source code. This platform lets developers focus on designing algorithms and biologists/physicians on testing hypotheses and analyzing results. The open source code can be downloaded from: http://bcl.med.harvard.edu/proteomics/proj/APBA/.
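    The record's platform derives its GUI from Java source code; the core idea of reflecting on a routine's signature to generate input fields can be sketched in a few lines of Python. Both `interface_fields` and the stand-in analysis routine `score_alignment` are hypothetical illustrations, not part of the APBA code base.

    ```python
    import inspect

    def interface_fields(func):
        """Derive the input fields a generated GUI would present, by
        inspecting a function's parameter names and defaults at run time."""
        fields = []
        for name, p in inspect.signature(func).parameters.items():
            default = None if p.default is inspect.Parameter.empty else p.default
            fields.append((name, default))
        return fields

    def score_alignment(seq_a, seq_b, gap_penalty=-1):
        """Hypothetical toolkit method standing in for a real algorithm:
        scores aligned positions, penalizing mismatches by gap_penalty."""
        return sum(1 if a == b else gap_penalty for a, b in zip(seq_a, seq_b))
    ```

    A wizard built this way stays in sync with the source automatically: adding a parameter to the algorithm adds a field to the generated form without any GUI code changes.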

  3. Penalized feature selection and classification in bioinformatics

    PubMed Central

    Huang, Jian

    2008-01-01

    In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifier and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classification techniques—which belong to the family of embedded feature selection methods—for bioinformatics studies with high-dimensional input. Classification objective functions, penalty functions and computational algorithms are discussed. Our goal is to make interested researchers aware of these feature selection and classification methods that are applicable to high-dimensional bioinformatics data. PMID:18562478
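    The embedded feature selection the review describes hinges on penalty functions that drive some coefficients exactly to zero. The workhorse update inside coordinate-descent solvers for the L1 (lasso) penalty is the soft-thresholding operator; a minimal sketch (the orthonormal-design shortcut shown in `select_features` is a simplifying assumption, not the general algorithm):

    ```python
    def soft_threshold(z, lam):
        """S(z, lam) = sign(z) * max(|z| - lam, 0): shrinks a coefficient
        toward zero and sets it exactly to zero when |z| <= lam."""
        if z > lam:
            return z - lam
        if z < -lam:
            return z + lam
        return 0.0

    def select_features(ols_coefs, lam):
        """For an orthonormal design, the lasso solution is soft-thresholding
        applied coordinate-wise to the least-squares coefficients; features
        whose coefficients become 0 are removed from the classifier."""
        return {name: soft_threshold(b, lam) for name, b in ols_coefs.items()}
    ```

    This is how the penalty performs classifier construction and feature selection in one step: weakly associated inputs (small coefficients) are eliminated rather than merely down-weighted.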

  4. Bioinformatics methods for the analysis of hepatitis viruses.

    PubMed

    Moriconi, Francesco; Beard, Michael R; Yuen, Lilly Kw

    2013-01-01

    HBV and HCV are the only hepatotropic viruses capable of establishing chronic infections. More than 500 million people worldwide are estimated to have chronic infections with HBV and/or HCV, and they have an increased risk of developing liver complications, such as cirrhosis or hepatocellular carcinoma. During the past decade, several antiviral agents including immune-modulatory drugs and nucleoside/nucleotide analogues have been approved for the treatment of HBV and HCV infections. In recent years, the focus has been on the development of new and better therapeutic agents for management of chronic HCV infections. Bioinformatics has only recently been applied to the field of viral hepatitis research. In addition to the wide range of general tools freely available for identification of open reading frames, gene prediction, homology searching, sequence alignment, and motif and epitope recognition, several public database systems designed specifically for HBV and HCV research have now been developed. The focus of these databases ranges from serving as viral sequence repositories to providing bioinformatics tools for viral genome analysis and for HBV or HCV drug resistance prediction. This review provides an overview of these public databases, which have integrated bioinformatics tools for HBV and HCV research. Properly managed and developed, these databases have the potential to have a broad effect on hepatitis research and treatment strategies. However, the effect will depend on the comprehensive collection of not only molecular sequence data, but also anonymous patient clinical and treatment data.

  5. A Quick Guide for Building a Successful Bioinformatics Community

    PubMed Central

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-01-01

    “Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  6. Modular Zero Energy. BrightBuilt Home

    SciTech Connect

    Aldrich, Robb; Butterfield, Karla

    2016-03-01

    Kaplan Thompson Architects (KTA) has specialized in sustainable, energy-efficient buildings, and they have designed several custom, zero-energy homes in New England. These zero-energy projects have generally been high-end, custom homes with budgets that could accommodate advanced energy systems. In an attempt to make zero-energy homes more affordable and accessible to a larger demographic, KTA explored modular construction as a way to provide high-quality homes at lower costs. In mid-2013, KTA formalized this concept when they launched BrightBuilt Home (BBH). The BBH mission is to offer a line of architect-designed, high-performance homes that are priced to offer substantial savings off the lifetime cost of a typical home and can be delivered in less time. For the past two years, CARB has worked with BBH and Keiser Homes (the primary modular manufacturer for BBH) to discuss challenges related to wall systems, HVAC, and quality control. In the spring of 2014, CARB and BBH began looking in detail at a home to be built in Lincolnville, ME by Black Bros. Builders. This report details the solution package specified for this modular plan and the challenges that arose during the project.

  7. Built-in self test

    NASA Astrophysics Data System (ADS)

    Jansen, B.; Vandegoor, A. J.

    1988-11-01

    Because of the increasing complexity of digital circuits, it is becoming more and more difficult to determine whether a circuit is correct or faulty. Faults in a circuit can hardly be detected just by observing from the outside how the circuit reacts to a certain input sequence. Fault tolerant computing can be a solution. Built-In Self Test (BIST) techniques can also be used to verify whether the circuit is correct, not only during normal operation, but also during the early development periods. The result of using BIST techniques is a considerable reduction of the time between design and the final product, and a reduction of maintenance time and cost. BIST is a test method in which the circuit can separate itself from the surrounding logic and perform a test. After the self test, the circuit reports to the surrounding logic whether it is correct. The advantage of BIST is that it is a universal and systematic test method with a solid mathematical foundation. Based on the stuck-at fault model, it is possible to compute the fault coverage, the fraction of modeled faults detected by the BIST method. The theory of BIST is described. A circuit is divided into combinational and sequential parts, which are tested separately. The sequential parts are tested with a so-called scan-path test. Alternative test methods for the combinational parts are described. A method to compute the number of patterns needed to detect all faults with a certain probability, as a function of the complexity of the circuit, is given. The theory of CRC signature analyzers and the probability of masking are also described and illustrated with some examples, which can directly be used in practice.
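    The CRC signature analyzer mentioned above compresses a long test-response stream into a short register residue; two responses that differ can still collapse to the same signature (masking, or aliasing), with probability approximately 2^-n for an n-bit register and long random streams. A minimal behavioral sketch of a serial signature analyzer follows; the 4-bit register and the feedback polynomial x^4 + x + 1 are illustrative choices, not taken from the report.

    ```python
    def signature(response_bits, poly=0b0011, width=4):
        """Serial LFSR signature analyzer: each response bit is XORed into
        the feedback path of a shift register defined by `poly`; the final
        register contents are the circuit's signature."""
        mask = (1 << width) - 1
        reg = 0
        for b in response_bits:
            feedback = b ^ ((reg >> (width - 1)) & 1)   # incoming bit XOR MSB
            reg = ((reg << 1) & mask) ^ (poly if feedback else 0)
        return reg
    ```

    In a BIST flow, the signature of the circuit under test is simply compared against the signature of a known-good (simulated) response; a mismatch flags a fault, while masking bounds how often a fault escapes detection.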

  8. Systems Biology and Bioinformatics in Medical Applications

    DTIC Science & Technology

    2009-10-01

    healthcare facilities. The importance of A. baumannii infections in war-related injuries is well established. A. baumannii was the most common gram ... gram-negative, nonmotile, nonfastidious member of the family Moraxellaceae within the order Pseudomonadales. A. baumannii is best known for causing ... most common gram-negative bacillus recovered from traumatic injuries to the lower extremities during the Vietnam War 51. A new series of infections

  9. SYMBIOmatics: synergies in Medical Informatics and Bioinformatics--exploring current scientific literature for emerging topics.

    PubMed

    Rebholz-Schuhman, Dietrich; Cameron, Graham; Clark, Dominic; van Mulligen, Erik; Coatrieux, Jean-Louis; Del Hoyo Barbolla, Eva; Martin-Sanchez, Fernando; Milanesi, Luciano; Porro, Ivan; Beltrame, Francesco; Tollis, Ioannis; Van der Lei, Johan

    2007-03-08

    The SYMBIOmatics Specific Support Action (SSA) is "an information gathering and dissemination activity" that seeks "to identify synergies between the bioinformatics and the medical informatics" domain to improve collaborative progress between both domains (ref. to http://www.symbiomatics.org). As part of the project experts in both research fields will be identified and approached through a survey. To provide input to the survey, the scientific literature was analysed to extract topics relevant to both medical informatics and bioinformatics. This paper presents results of a systematic analysis of the scientific literature from medical informatics research and bioinformatics research. In the analysis pairs of words (bigrams) from the leading bioinformatics and medical informatics journals have been used as indication of existing and emerging technologies and topics over the period 2000-2005 ("recent") and 1990-1990 ("past"). We identified emerging topics that were equally important to bioinformatics and medical informatics in recent years such as microarray experiments, ontologies, open source, text mining and support vector machines. Emerging topics that evolved only in bioinformatics were system biology, protein interaction networks and statistical methods for microarray analyses, whereas emerging topics in medical informatics were grid technology and tissue microarrays. We conclude that although both fields have their own specific domains of interest, they share common technological developments that tend to be initiated by new developments in biotechnology and computer science.
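    The bigram analysis described above treats adjacent word pairs in journal text as indicators of existing and emerging topics. A simplified sketch of the counting step (no stop-word removal, stemming, or per-journal weighting, all of which a real analysis would likely need):

    ```python
    from collections import Counter

    def bigram_counts(documents):
        """Count adjacent lowercase word pairs (bigrams) across a corpus of
        titles/abstracts; high-frequency bigrams flag candidate topics."""
        counts = Counter()
        for text in documents:
            words = text.lower().split()
            counts.update(zip(words, words[1:]))  # consecutive pairs
        return counts
    ```

    Comparing the resulting counts between two time windows (e.g. "past" versus "recent") is then enough to surface bigrams such as "support vector" or "text mining" whose frequency has grown.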

  10. SYMBIOmatics: Synergies in Medical Informatics and Bioinformatics – exploring current scientific literature for emerging topics

    PubMed Central

    Rebholz-Schuhman, Dietrich; Cameron, Graham; Clark, Dominic; van Mulligen, Erik; Coatrieux, Jean-Louis; Del Hoyo Barbolla, Eva; Martin-Sanchez, Fernando; Milanesi, Luciano; Porro, Ivan; Beltrame, Francesco; Tollis, Ioannis; Van der Lei, Johan

    2007-01-01

    Background The SYMBIOmatics Specific Support Action (SSA) is "an information gathering and dissemination activity" that seeks "to identify synergies between the bioinformatics and the medical informatics" domain to improve collaborative progress between both domains (ref. to http://www.symbiomatics.org). As part of the project experts in both research fields will be identified and approached through a survey. To provide input to the survey, the scientific literature was analysed to extract topics relevant to both medical informatics and bioinformatics. Results This paper presents results of a systematic analysis of the scientific literature from medical informatics research and bioinformatics research. In the analysis pairs of words (bigrams) from the leading bioinformatics and medical informatics journals have been used as indication of existing and emerging technologies and topics over the period 2000–2005 ("recent") and 1990–1990 ("past"). We identified emerging topics that were equally important to bioinformatics and medical informatics in recent years such as microarray experiments, ontologies, open source, text mining and support vector machines. Emerging topics that evolved only in bioinformatics were system biology, protein interaction networks and statistical methods for microarray analyses, whereas emerging topics in medical informatics were grid technology and tissue microarrays. Conclusion We conclude that although both fields have their own specific domains of interest, they share common technological developments that tend to be initiated by new developments in biotechnology and computer science. PMID:17430562

  11. Bioinformatics: A History of Evolution "In Silico"

    ERIC Educational Resources Information Center

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  13. Bioboxes: standardised containers for interchangeable bioinformatics software.

    PubMed

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.

  14. Medical informatics and bioinformatics: a bibliometric study

    PubMed Central

    Bansard, Jean-Yves; Rebholz-Schuhman, Dietrich; Cameron, Graham; Clark, Dominic; van Mulligen, Erik; Beltrame, Francesco; Del Hoyo Barbolla, Eva; Martin-Sanchez, Fernando; Milanesi, Luciano; Tollis, Ioannis; Van der Lei, Johan; Coatrieux, Jean-Louis

    2007-01-01

    This paper reports on an analysis of the bioinformatics and medical informatics literature with the objective to identify upcoming trends that are shared among both research fields to derive benefits from potential collaborative initiatives for their future. Our results present the main characteristics of the two fields and show that these domains are still relatively separated. PMID:17521073

  15. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    ERIC Educational Resources Information Center

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…

  16. "Extreme Programming" in a Bioinformatics Class

    ERIC Educational Resources Information Center

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  17. 2010 Translational bioinformatics year in review

    PubMed Central

    Miller, Katharine S

    2011-01-01

    A review of 2010 research in translational bioinformatics provides much to marvel at. We have seen notable advances in personal genomics, pharmacogenetics, and sequencing. At the same time, the infrastructure for the field has burgeoned. While acknowledging that, according to researchers, the members of this field tend to be overly optimistic, the authors predict a bright future. PMID:21672905

  18. Bioinformatics in Undergraduate Education: Practical Examples

    ERIC Educational Resources Information Center

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  19. Implementing bioinformatic workflows within the bioextract server

    USDA-ARS?s Scientific Manuscript database

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  2. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    EPA Science Inventory

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  4. Built-in lens mask lithography

    NASA Astrophysics Data System (ADS)

    Ueda, Naoki; Sasago, Masaru; Misaka, Akio; Kikuta, Hisao; Kawata, Hiroaki; Hirai, Yoshihiko

    2014-03-01

A cost-effective microlithography tool is in demand for fine micro devices. However, the resolution of a conventional proximity exposure system is insufficient below several-micron feature sizes at a deep depth of focus, while a reduction projection system can resolve such features but costs far more than a proximity exposure system. A number of novel methods besides shortening the wavelength have been proposed to enhance the resolution of photolithography. Some are employed in current advanced lithography systems: for example, immersion lithography enhances the effective NA, and the phase shift mask improves the optical transmission function. These advanced technologies, however, mainly target projection exposure systems for ultra-fine lithography. Meanwhile, coherent holographic patterning has recently been proposed and is expected to enable three-dimensional patterning, and Talbot lithography is being studied for periodic micro- and nanopatterning. These novel patterning methods rely on wave propagation due to optical diffraction, without expensive optical lens systems. In this paper we propose a novel optical lithography scheme that uses a built-in lens mask to enhance resolution and depth of focus in a conventional proximity exposure system for microlithographic applications, without lens systems. The performance is confirmed by simulation and experiment.

  5. Navigating the changing learning landscape: perspective from bioinformatics.ca

    PubMed Central

    Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs. PMID:23515468

  6. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    PubMed

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.

  7. A Bioinformatics Reference Model: Towards a Framework for Developing and Organising Bioinformatic Resources

    NASA Astrophysics Data System (ADS)

    Hiew, Hong Liang; Bellgard, Matthew

    2007-11-01

Life Science research faces the constant challenge of how to effectively handle an ever-growing body of bioinformatics software and online resources. The users and developers of bioinformatics resources have a diverse set of competing demands on how these resources need to be developed and organised. Unfortunately, no adequate community-wide framework exists to integrate such competing demands. The problems that arise from this include unstructured standards development, the emergence of tools that do not meet the specific needs of researchers, and, often, a communications gap between those who use the tools and those who supply them. This paper presents an overview of the different functions and needs of bioinformatics stakeholders to determine what may be required in a community-wide framework. A Bioinformatics Reference Model is proposed as a basis for such a framework. The reference model outlines the functional relationship between research usage and technical aspects of bioinformatics resources. It separates important functions into multiple structured layers, clarifies how they relate to each other, and highlights the gaps that need to be addressed for progress towards a diverse, manageable, and sustainable body of resources. The relevance of this reference model to the bioscience research community, and its implications for organising our bioinformatics resources, are discussed.

  8. Molecular machinery built from DNA

    NASA Astrophysics Data System (ADS)

    Bath, Jonathan; Turberfield, Andrew J.

    2013-03-01

DNA can be used as both construction material and fuel for molecular motors. Systems of motors and tracks can be constructed, and movement of the motor along the track can be directly observed. The path taken by a motor as it navigates a network of tracks can be programmed by instructions that are added externally or carried by the motor itself. Such systems might be used as part of molecular assembly lines that can be dynamically reconfigured in response to changing demands.

  9. Bioinformatic characterization of plant networks

    SciTech Connect

    McDermott, Jason E.; Samudrala, Ram

    2008-06-30

    Cells and organisms are governed by networks of interactions, genetic, physical and metabolic. Large-scale experimental studies of interactions between components of biological systems have been performed for a variety of eukaryotic organisms. However, there is a dearth of such data for plants. Computational methods for prediction of relationships between proteins, primarily based on comparative genomics, provide a useful systems-level view of cellular functioning and can be used to extend information about other eukaryotes to plants. We have predicted networks for Arabidopsis thaliana, Oryza sativa indica and japonica and several plant pathogens using the Bioverse (http://bioverse.compbio.washington.edu) and show that they are similar to experimentally-derived interaction networks. Predicted interaction networks for plants can be used to provide novel functional annotations and predictions about plant phenotypes and aid in rational engineering of biosynthesis pathways.

  10. Biowep: a workflow enactment portal for bioinformatics applications.

    PubMed

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack such skills. A portal enabling these researchers to benefit from new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of

  11. Rabifier2: an improved bioinformatic classifier of Rab GTPases.

    PubMed

    Surkont, Jaroslaw; Diekmann, Yoan; Pereira-Leal, José B

    2016-10-22

    The Rab family of small GTPases regulates and provides specificity to the endomembrane trafficking system; each Rab subfamily is associated with specific pathways. Thus, characterization of Rab repertoires provides functional information about organisms and evolution of the eukaryotic cell. Yet, the complex structure of the Rab family limits the application of existing methods for protein classification. Here, we present a major redesign of the Rabifier, a bioinformatic pipeline for detection and classification of Rab GTPases. It is more accurate, significantly faster than the original version and is now open source, both the code and the data, allowing for community participation.

  12. Biowep: a workflow enactment portal for bioinformatics applications

    PubMed Central

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-01-01

Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers, who lack such skills. A portal enabling these researchers to benefit from new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. Biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain by using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis

  13. The House That Jones Built.

    ERIC Educational Resources Information Center

    Rist, Marilee C.

    1992-01-01

    Describes lifelong commitment of middle-school principal and major W.J. Jones to Coahoma, a small town in Mississippi Delta. Thanks to his efforts, town recently acquired a sewage system, blacktopped roads, and new housing (through Habitat for Humanity and World Vision). Although town elementary school fell victim to consolidation and children are…

  14. Composable languages for bioinformatics: the NYoSh experiment

    PubMed Central

    Simi, Manuele

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  15. Composable languages for bioinformatics: the NYoSh experiment.

    PubMed

    Simi, Manuele; Campagne, Fabien

    2014-01-01

    Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is

  16. Squid - a simple bioinformatics grid.

    PubMed

    Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M

    2005-08-03

BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open-source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation with a pre-configured example.
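The split/dispatch/merge pattern behind such a grid can be sketched in a few lines. Here `fake_search` is an invented stand-in for a real BLAST invocation, and the batching and node count are illustrative; Squid's actual scheduling and fault-tolerance logic are more involved.

```python
from concurrent.futures import ThreadPoolExecutor

def fake_search(seq):
    """Placeholder for a per-sequence BLAST call."""
    return {"query": seq, "hits": seq.count("A")}

def chunks(items, n):
    """Split `items` into up to `n` roughly equal batches."""
    size = -(-len(items) // n)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

def run_batch(batch):
    return [fake_search(s) for s in batch]

def grid_search(queries, nodes=4):
    """Process query batches concurrently and merge results in input order."""
    results = []
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        for batch_result in pool.map(run_batch, chunks(queries, nodes)):
            results.extend(batch_result)
    return results

results = grid_search(["AATA", "GGC", "ATAT", "CCCC"], nodes=2)
```

With N workers and an embarrassingly parallel task like this, wall-clock time scales roughly as 1/N, which is the speedup the abstract reports.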

  17. First results for custom-built low-temperature (4.2 K) scanning tunneling microscope/molecular beam epitaxy and pulsed laser epitaxy system designed for spin-polarized measurements

    NASA Astrophysics Data System (ADS)

    Foley, Andrew; Alam, Khan; Lin, Wenzhi; Wang, Kangkang; Chinchore, Abhijit; Corbett, Joseph; Savage, Alan; Chen, Tianjiao; Shi, Meng; Pak, Jeongihm; Smith, Arthur

    2014-03-01

    A custom low-temperature (4.2 K) scanning tunneling microscope system has been developed which is combined directly with a custom molecular beam epitaxy facility (and also including pulsed laser epitaxy) for the purpose of studying surface nanomagnetism of complex spintronic materials down to the atomic scale. For purposes of carrying out spin-polarized STM measurements, the microscope is built into a split-coil, 4.5 Tesla superconducting magnet system where the magnetic field can be applied normal to the sample surface; since, as a result, the microscope does not include eddy current damping, vibration isolation is achieved using a unique combination of two stages of pneumatic isolators along with an acoustical noise shield, in addition to the use of a highly stable as well as modular `Pan'-style STM design with a high Q factor. First 4.2 K results reveal, with clear atomic resolution, various reconstructions on wurtzite GaN c-plane surfaces grown by MBE, including the c(6x12) on N-polar GaN(0001). Details of the system design and functionality will be presented.

  18. Quantum Bio-Informatics IV

    NASA Astrophysics Data System (ADS)

    Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori

    2011-01-01

    The QP-DYN algorithms / L. Accardi, M. Regoli and M. Ohya -- Study of transcriptional regulatory network based on Cis module database / S. Akasaka ... [et al.] -- On Lie group-Lie algebra correspondences of unitary groups in finite von Neumann algebras / H. Ando, I. Ojima and Y. Matsuzawa -- On a general form of time operators of a Hamiltonian with purely discrete spectrum / A. Arai -- Quantum uncertainty and decision-making in game theory / M. Asano ... [et al.] -- New types of quantum entropies and additive information capacities / V. P. Belavkin -- Non-Markovian dynamics of quantum systems / D. Chruscinski and A. Kossakowski -- Self-collapses of quantum systems and brain activities / K.-H. Fichtner ... [et al.] -- Statistical analysis of random number generators / L. Accardi and M. Gabler -- Entangled effects of two consecutive pairs in residues and its use in alignment / T. Ham, K. Sato and M. Ohya -- The passage from digital to analogue in white noise analysis and applications / T. Hida -- Remarks on the degree of entanglement / D. Chruscinski ... [et al.] -- A completely discrete particle model derived from a stochastic partial differential equation by point systems / K.-H. Fichtner, K. Inoue and M. Ohya -- On quantum algorithm for exptime problem / S. Iriyama and M. Ohya -- On sufficient algebraic conditions for identification of quantum states / A. Jamiolkowski -- Concurrence and its estimations by entanglement witnesses / J. Jurkowski -- Classical wave model of quantum-like processing in brain / A. Khrennikov -- Entanglement mapping vs. quantum conditional probability operator / D. Chruscinski ... [et al.] -- Constructing multipartite entanglement witnesses / M. Michalski -- On Kadison-Schwarz property of quantum quadratic operators on M[symbol](C) / F. Mukhamedov and A. Abduganiev -- On phase transitions in quantum Markov chains on Cayley Tree / L. Accardi, F. Mukhamedov and M. Saburov -- Space(-time) emergence as symmetry breaking effect / I. Ojima

  19. A toolbox for developing bioinformatics software.

    PubMed

    Rother, Kristian; Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M

    2012-03-01

Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful to plan a project, support the involvement of experts (e.g. experimentalists), and to promote higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers.
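One practice widely recommended in this vein, keeping small automated tests next to the code, can be sketched for a toy sequence utility. The function and its checks are illustrative examples, not taken from the article's 12 techniques.

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence (case-insensitive)."""
    if not seq:
        raise ValueError("empty sequence")
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# Direct checks that double as executable documentation:
assert gc_content("GGCC") == 1.0
assert abs(gc_content("atgc") - 0.5) < 1e-9
try:
    gc_content("")
except ValueError:
    pass
else:
    raise AssertionError("empty input should be rejected")
```

In a real project these assertions would live in a test module run by a framework such as pytest, so they execute on every change.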

  20. Discovery and Classification of Bioinformatics Web Services

    SciTech Connect

    Rocco, D; Critchlow, T

    2002-09-02

The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.

  1. A toolbox for developing bioinformatics software

    PubMed Central

    Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M.

    2012-01-01

Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful to plan a project, support the involvement of experts (e.g. experimentalists), and to promote higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers. PMID:21803787

  2. Translational bioinformatics applications in genome medicine

    PubMed Central

    2009-01-01

Although bioinformaticians have long contributed to genomic experimentation in analytic, engineering, and infrastructure support roles, only recently have they been able to take a primary scientific role in asking and answering questions about human health and disease. Here, I argue that this shift in role towards asking questions in medicine is now the next step needed for the field of bioinformatics. I outline four reasons why bioinformaticians are newly enabled to drive the questions in primary medical discovery: public availability of data, intersection of data across experiments, commoditization of methods, and streamlined validation. I also list four recommendations for bioinformaticians wishing to get more involved in translational research. PMID:19566916

  3. Genomics and Bioinformatics of Parkinson's Disease

    PubMed Central

    Scholz, Sonja W.; Mhyre, Tim; Ressom, Habtom; Shah, Salim; Federoff, Howard J.

    2012-01-01

Within the last two decades, genomics and bioinformatics have profoundly impacted our understanding of the molecular mechanisms of Parkinson's disease (PD). From the description of the first PD gene in 1997 until today, we have witnessed the emergence of new technologies that have revolutionized our concepts to identify genetic mechanisms implicated in human health and disease. Driven by the publication of the human genome sequence and followed by the description of detailed maps for common genetic variability, novel applications to rapidly scrutinize the entire genome in a systematic, cost-effective manner have become a reality. As a consequence, about 30 genetic loci have been unequivocally linked to the pathogenesis of PD, highlighting essential molecular pathways underlying this common disorder. Herein we discuss how neurogenomics and bioinformatics are applied to dissect the nature of this complex disease with the overall aim of developing rational therapeutic interventions. PMID:22762024

  4. Machine learning: an indispensable tool in bioinformatics.

    PubMed

    Inza, Iñaki; Calvo, Borja; Armañanzas, Rubén; Bengoetxea, Endika; Larrañaga, Pedro; Lozano, José A

    2010-01-01

    The increase in the number and complexity of biological databases has raised the need for modern and powerful data analysis tools and techniques. To fulfill these requirements, the machine learning discipline has become an everyday tool in bio-laboratories. The use of machine learning techniques has been extended to a wide spectrum of bioinformatics applications. It is broadly used to investigate the underlying mechanisms and interactions between biological molecules in many diseases, and it is an essential tool in any biomarker discovery process. In this chapter, we provide a basic taxonomy of machine learning algorithms and describe the characteristics of the main data preprocessing, supervised classification, and clustering techniques. Feature selection and classifier evaluation, two supervised classification topics that have a deep impact on current bioinformatics, are also presented. We make the interested reader aware of a set of popular web resources, open source software tools, and benchmarking data repositories that are frequently used by the machine learning community.
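
    As a minimal illustration of the clustering techniques surveyed in this kind of chapter, the sketch below (my own, not from the chapter) implements naive k-means in pure Python, with deterministic initialization from the first k points so the result is reproducible:

```python
def kmeans(points, k, iters=20):
    """Naive k-means: deterministic init with the first k points, then
    alternate nearest-centroid assignment and mean-based centroid update."""
    centroids = points[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its members.
        centroids = [
            tuple(sum(xs) / len(members) for xs in zip(*members)) if members else centroids[i]
            for i, members in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of 2D points standing in for expression profiles
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

    Real bio-laboratory use would of course rely on established libraries rather than a hand-rolled loop; the point here is only the assign/update structure shared by all k-means variants.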

  5. A review of bioinformatic pipeline frameworks

    PubMed Central

    2017-01-01

    Abstract High-throughput bioinformatic analyses increasingly rely on pipeline frameworks to process sequence and metadata. Modern implementations of these frameworks differ on three key dimensions: using an implicit or explicit syntax, using a configuration, convention or class-based design paradigm and offering a command line or workbench interface. Here I survey and compare the design philosophies of several current pipeline frameworks. I provide practical recommendations based on analysis requirements and the user base. PMID:27013646

  6. [Applied problems of mathematical biology and bioinformatics].

    PubMed

    Lakhno, V D

    2011-01-01

    Mathematical biology and bioinformatics represent a new and rapidly progressing line of investigation that emerged in the course of work on the Human Genome Project. The main applied problems of these sciences are drug design, patient-specific medicine and nanobioelectronics. It is shown that progress in the technology of mass sequencing of the human genome has set the stage for starting the national program on patient-specific medicine.

  7. A library-based bioinformatics services program*

    PubMed Central

    Yarfitz, Stuart; Ketchell, Debra S.

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise. PMID:10658962

  8. Bringing Web 2.0 to bioinformatics

    PubMed Central

    Zhang, Zhang; Cheung, Kei-Hoi

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies. PMID:18842678

  9. Bringing Web 2.0 to bioinformatics.

    PubMed

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  10. Bioinformatics tools for analysing viral genomic data.

    PubMed

    Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J

    2016-04-01

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.
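
    The quality-control step mentioned above can be sketched in a few lines; the example below is an illustration of the technique, not the authors' tooling, and assumes the common Phred+33 quality encoding used by modern HTS platforms:

```python
def phred33(qual_string):
    """Decode a Phred+33 quality string into per-base quality scores."""
    return [ord(c) - 33 for c in qual_string]

def mean_quality(qual_string):
    scores = phred33(qual_string)
    return sum(scores) / len(scores)

def filter_reads(records, min_mean_q=20):
    """Keep (name, sequence, quality) records whose mean base quality passes the cutoff."""
    return [r for r in records if mean_quality(r[2]) >= min_mean_q]

reads = [
    ("read1", "ACGT", "IIII"),   # 'I' decodes to Q40: high quality
    ("read2", "ACGT", "!!!!"),   # '!' decodes to Q0: discard
]
kept = filter_reads(reads)  # only read1 survives the Q20 cutoff
```

    Production pipelines apply many more criteria (adapter trimming, per-position quality, read length), but mean-quality filtering of this kind is the simplest form of the raw-read QC described above.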

  11. Bioinformatics Approach in Plant Genomic Research

    PubMed Central

    Ong, Quang; Nguyen, Phuc; Thao, Nguyen Phuong; Le, Ly

    2016-01-01

    Advances in genomics technology have led to dramatic changes in plant biology research. Plant biologists now have easy access to enormous genomic data with which to study plant high-density genetic variation at the molecular level. Fully understanding and skillfully using bioinformatics tools to manage and analyze these data are therefore essential in current plant genome research. Many plant genome databases have been established and continue to expand. Meanwhile, bioinformatics-based analytical methods are well developed for many aspects of plant genomic research, including comparative genomic analysis, phylogenomics and evolutionary analysis, and genome-wide association studies. However, constantly upgrading computational infrastructure, such as high-capacity data storage and high-performance analysis software, is a real challenge for plant genome research. This review focuses on the challenges and opportunities that bioinformatics knowledge and skills bring to plant scientists in the present plant genomics era, as well as the critical future need for effective tools to translate new sequencing data into enhanced plant productivity. PMID:27499685

  12. Bioinformatics strategies for the analysis of lipids.

    PubMed

    Wheelock, Craig E; Goto, Susumu; Yetukuri, Laxman; D'Alexandri, Fabio Luiz; Klukas, Christian; Schreiber, Falk; Oresic, Matej

    2009-01-01

    Owing to their importance in cellular physiology and pathology as well as to recent technological advances, the study of lipids has reemerged as a major research target. However, the structural diversity of lipids presents a number of analytical and informatics challenges. The field of lipidomics is a new postgenome discipline that aims to develop comprehensive methods for lipid analysis, necessitating concomitant developments in bioinformatics. The evolving research paradigm requires that new bioinformatics approaches accommodate genomic as well as high-level perspectives, integrating genome, protein, chemical and network information. The incorporation of lipidomics information into these data structures will provide mechanistic understanding of lipid functions and interactions in the context of cellular and organismal physiology. Accordingly, it is vital that specific bioinformatics methods be developed to analyze the wealth of lipid data being acquired. Herein, we present an overview of the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and application of its tools to the analysis of lipid data. We also describe a series of software tools and databases (KGML-ED, VANTED, MZmine, and LipidDB) that can be used for the processing of lipidomics data and biochemical pathway reconstruction, an important next step in the development of the lipidomics field.

  13. Bioinformatics on the Cloud Computing Platform Azure

    PubMed Central

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  14. Promoting synergistic research and education in genomics and bioinformatics

    PubMed Central

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demands for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine through today's efforts in promoting the research, education and awareness of the upcoming integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics.
Biocomp 2007 provides a common platform for the cross-fertilization of ideas and helps to shape knowledge and

  15. Promoting synergistic research and education in genomics and bioinformatics.

    PubMed

    Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demands for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine through today's efforts in promoting the research, education and awareness of the upcoming integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics.
Biocomp 2007 provides a common platform for the cross-fertilization of ideas and helps to shape knowledge and

  16. Broader incorporation of bioinformatics in education: opportunities and challenges.

    PubMed

    Cummings, Michael P; Temple, Glena G

    2010-11-01

    The major opportunities for broader incorporation of bioinformatics in education can be placed into three general categories: general applicability of bioinformatics in life science and related curricula; inherent fit of bioinformatics for promoting student learning in most biology programs; and the general experience and associated comfort students have with computers and technology. Conversely, the major challenges for broader incorporation of bioinformatics in education can be placed into three general categories: required infrastructure and logistics; instructor knowledge of bioinformatics and continuing education; and the breadth of bioinformatics, and the diversity of students and educational objectives. Broader incorporation of bioinformatics at all education levels requires overcoming the challenges to using transformative computer-requiring learning activities, assisting faculty in collecting assessment data on mastery of student learning outcomes, as well as creating more faculty development opportunities that span diverse skill levels, with an emphasis placed on providing resource materials that are kept up-to-date as the field and tools change.

  17. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    PubMed

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    first used with a bioinformatics system. Simplifying the validation of essential tabular data files, such as sample metadata, will reduce common errors and thereby improve the quality and reliability of research outcomes.

  18. Sustainable urban built environment: Modern management concepts and evaluation methods

    NASA Astrophysics Data System (ADS)

    Ovsiannikova, Tatiana; Nikolaenko, Mariya

    2017-01-01

    The paper analyzes modern concepts in urban development management and establishes that they are based on the principles of ecocentrism and anthropocentrism. The purpose of this research is to develop a system of quality indicators for the urban built environment and to justify their application in city development management. The need to observe indicators characterizing the urban built environment when planning territorial development is demonstrated. Based on data and reports from Russian and international organizations, the existing systems of urban development indicators are analyzed. The suggested solution extends the existing indicator systems with indicators of urban built environment quality that are recommended for planning the development of urban areas. The proposed system includes private, aggregate, normalized, and integrated urban built environment quality indicators, constructed using economic-statistical and comparative analysis and the index method. Applying these methods allowed the indicators to be calculated for urban areas of Tomsk Region; the results of the calculations are presented in the paper. Based on the normalized indicators, priority areas for investment and development of urban areas were determined, and scenario conditions allowed changes in the quality indicators of the urban built environment to be estimated. Finally, the paper offers recommendations for management decisions aimed at creating a sustainable living environment in urban areas.
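
    The normalization and integration steps behind such indicator systems can be illustrated with a small sketch. The indicator names and values below are hypothetical, not the paper's Tomsk Region data, and the integrated index is taken here as a simple mean of min-max-normalized indicators:

```python
def min_max_normalize(values):
    """Rescale one indicator's raw values across areas to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw indicators per area: [housing m2/person, green space %, transit score]
areas = {
    "area_a": [24.0, 35.0, 0.7],
    "area_b": [18.0, 20.0, 0.9],
    "area_c": [30.0, 10.0, 0.4],
}
names = list(areas)
# Normalize each indicator column across areas so different units become comparable.
columns = list(zip(*areas.values()))
norm_columns = [min_max_normalize(col) for col in columns]
norm_rows = list(zip(*norm_columns))
# Integrated quality index per area: mean of its normalized indicators.
integrated = {name: sum(row) / len(row) for name, row in zip(names, norm_rows)}
```

    Ranking areas by the integrated index is one simple way to identify priority areas for investment, in the spirit of the normalized and integrated indicators the paper proposes.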

  19. Teaching about the Built Environment. ERIC Digest.

    ERIC Educational Resources Information Center

    Graves, Ginny

    Critical thinking, responsible citizenship, cultural literacy, social relevancy; these concerns of educators in the social studies can be addressed through teaching and learning about the built environment. The tangible structures that humans have created (bridges, houses, factories, farms, monuments) constitute the built environment. Objects in…

  20. Built Environment Education in Art Education.

    ERIC Educational Resources Information Center

    Guilfoil, Joanne K., Ed.; Sandler, Alan R., Ed.

    This anthology brings the study of the built environment, its design, social and cultural functions, and the criticism thereof into focus. Following a preface and introduction, 22 essays are organized in three parts. Part 1 includes: (1) "Landscape Art and the Role of the Natural Environment in Built Environment Education" (Heather…

  2. Bioinformatics in microbial biotechnology--a mini review.

    PubMed

    Bansal, Arvind K

    2005-06-28

    regulatory pathways, the development of statistical techniques, clustering techniques and data mining techniques to derive protein-protein and protein-DNA interactions, and modeling of 3D structure of proteins and 3D docking between proteins and biochemicals for rational drug design, difference analysis between pathogenic and non-pathogenic strains to identify candidate genes for vaccines and anti-microbial agents, and the whole genome comparison to understand the microbial evolution. The development of bioinformatics techniques has enhanced the pace of biological discovery by automated analysis of large number of microbial genomes. We are on the verge of using all this knowledge to understand cellular mechanisms at the systemic level. The developed bioinformatics techniques have potential to facilitate (i) the discovery of causes of diseases, (ii) vaccine and rational drug design, and (iii) improved cost effective agents for bioremediation by pruning out the dead ends. Despite the fast paced global effort, the current analysis is limited by the lack of available gene-functionality from the wet-lab data, the lack of computer algorithms to explore vast amount of data with unknown functionality, limited availability of protein-protein and protein-DNA interactions, and the lack of knowledge of temporal and transient behavior of genes and pathways.

  3. A comparison of common programming languages used in bioinformatics

    PubMed Central

    Fourment, Mathieu; Gillings, Michael R

    2008-01-01

    Background The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Results Implementations in C and C++ were fastest and used the least memory, but programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux, and no clear evidence of a faster operating system was found. Source code and additional information are available online. Conclusion This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language. PMID:18251993
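
    The Sellers algorithm named in the benchmark is a dynamic-programming edit-distance computation; a pure-Python sketch of that technique (not the authors' benchmarked code) shows the kind of workload being timed, and `timeit` can reproduce the within-language timing idea:

```python
import timeit

def edit_distance(a, b):
    """Dynamic-programming (Sellers-style) edit distance between strings a and b,
    keeping only the previous row of the DP table to save memory."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

d = edit_distance("kitten", "sitting")  # classic example: distance 3
# Time repeated runs on short DNA-like strings, as a toy version of the benchmark.
t = timeit.timeit(lambda: edit_distance("ACGTACGT" * 8, "ACGTTCGA" * 8), number=10)
```

    Cross-language comparisons like the paper's would port this same inner loop to C, Java, Perl, and so on, then compare wall-clock time and memory on identical inputs.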

  4. Prediction and Analysis of Key Genes in Glioblastoma Based on Bioinformatics

    PubMed Central

    Long, Hao; Liang, Chaofeng; Zhang, Xi'an; Fang, Luxiong; Wang, Gang; Qi, Songtao

    2017-01-01

    Understanding the mechanisms of glioblastoma at the molecular and structural level is not only interesting for basic science but also valuable for biotechnological applications such as clinical treatment. In the present study, bioinformatics analysis was performed to reveal and identify the key genes of glioblastoma multiforme (GBM). The results obtained in the present study signified the importance of some genes, such as COL3A1, FN1, and MMP9, for glioblastoma. Based on the selected genes, a prediction model was built, which achieved 94.4% prediction accuracy. These findings might provide more insights into the genetic basis of glioblastoma. PMID:28191466
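
    To illustrate how a prediction model can be built from a handful of selected genes, here is a minimal nearest-centroid classifier on synthetic expression values. The data and the marker genes named in the comment are illustrative only; this is not the study's model and does not reproduce its 94.4% accuracy figure:

```python
def centroid(vectors):
    """Per-gene mean of a list of expression vectors."""
    return [sum(xs) / len(vectors) for xs in zip(*vectors)]

def nearest_centroid_train(samples, labels):
    """Compute one centroid of expression vectors per class label."""
    by_label = {}
    for vec, lab in zip(samples, labels):
        by_label.setdefault(lab, []).append(vec)
    return {lab: centroid(vecs) for lab, vecs in by_label.items()}

def predict(model, vec):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    return min(model, key=lambda lab: sum((a - b) ** 2 for a, b in zip(vec, model[lab])))

# Synthetic expression values for three marker-like genes (think COL3A1, FN1, MMP9)
train = [[8.0, 7.5, 9.0], [7.8, 7.9, 8.7], [2.0, 2.5, 1.8], [2.2, 1.9, 2.1]]
labels = ["tumor", "tumor", "normal", "normal"]
model = nearest_centroid_train(train, labels)
call = predict(model, [7.5, 8.0, 8.5])  # a tumor-like profile
```

    Studies of this kind typically use cross-validated classifiers from standard libraries; the nearest-centroid rule is simply the smallest model that makes the gene-signature idea concrete.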

  5. Gene expression analysis of colorectal cancer by bioinformatics strategy.

    PubMed

    Cui, Meng; Yuan, Junhua; Li, Jun; Sun, Bing; Li, Tao; Li, Yuantao; Wu, Guoliang

    2014-10-01

    We used bioinformatics technology to analyze gene expression profiles of colorectal cancer tissue samples and healthy controls. We downloaded the gene expression profile GSE4107 from the Gene Expression Omnibus (GEO) database, in which a total of 22 chips were available, including normal colonic mucosa from healthy donors (n=10) and colorectal cancer tissue samples from colorectal cancer patients (n=33). To further understand the biological functions of the screened differentially expressed genes (DEGs), KEGG pathway enrichment analysis was conducted. We then built a transcriptome network to study differentially co-expressed links. A total of 3151 DEGs of CRC were selected. In addition, 164 differentially co-expressed genes (DCGs) and 29279 differentially co-expressed links (DCLs) were obtained. The significantly enriched KEGG pathways were endocytosis, the calcium signaling pathway, vascular smooth muscle contraction, linoleic acid metabolism, arginine and proline metabolism, inositol phosphate metabolism and the MAPK signaling pathway. Our results show that the generation of CRC involves multiple genes, transcription factors and pathways. Several signaling and immune pathways are linked to CRC and provide further clues to its pathogenesis. Hence, our work may pave the way for novel diagnosis of CRC and provide theoretical guidance for cancer therapy.
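
    The DEG-screening step in studies like this can be sketched as a per-gene two-sample test between groups. The sketch below uses a Welch t-statistic on synthetic expression values with an arbitrary |t| cutoff; it illustrates the technique only and does not reproduce the study's actual pipeline or thresholds:

```python
import math

def welch_t(xs, ys):
    """Welch two-sample t-statistic for one gene's expression in two groups."""
    def mean(v):
        return sum(v) / len(v)
    def var(v):
        m = mean(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)  # sample variance
    se = math.sqrt(var(xs) / len(xs) + var(ys) / len(ys))
    return (mean(xs) - mean(ys)) / se

# Synthetic per-gene expression: normal colonic mucosa vs. tumor samples
normal = [5.1, 5.0, 4.9, 5.2]
tumor = [7.8, 8.1, 7.9, 8.2]
t = welch_t(tumor, normal)
# A large |t| flags the gene as a candidate DEG for downstream KEGG enrichment.
is_candidate = abs(t) > 2.0
```

    Real microarray analyses apply this idea gene-by-gene across thousands of probes and then correct for multiple testing before enrichment analysis.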

  6. What is bioinformatics? A proposed definition and overview of the field.

    PubMed

    Luscombe, N M; Greenbaum, D; Gerstein, M

    2001-01-01

    The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical-chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.

  7. Teaching the bioinformatics of signaling networks: an integrated approach to facilitate multi-disciplinary learning.

    PubMed

    Korcsmaros, Tamas; Dunai, Zsuzsanna A; Vellai, Tibor; Csermely, Peter

    2013-09-01

    The number of bioinformatics tools and resources that support molecular and cell biology approaches is continuously expanding. Moreover, systems and network biology analyses are accompanied more and more by integrated bioinformatics methods. Traditional information-centered university teaching methods often fail, as (1) it is impossible to cover all existing approaches in the frame of a single course, and (2) a large segment of current bioinformation can become obsolete in a few years. Signaling networks offer an excellent example for teaching bioinformatics resources and tools, as they are both focused and complex at the same time. Here, we present an outline of a university bioinformatics course with four sample practices to demonstrate how signaling network studies can integrate biochemistry, genetics, cell biology and network sciences. We show that several bioinformatics resources and tools, as well as important concepts and current trends, can also be integrated into signaling network studies. The research-type hands-on experiences we describe enable students to improve key competences such as teamwork, creative and critical thinking and problem solving. Our classroom course curriculum can be re-formulated as an e-learning material or applied as part of a specific training course. The multi-disciplinary approach and the mosaic setup of the course have the additional benefit of supporting the advanced teaching of talented students.

  8. A scalable neuristor built with Mott memristors

    NASA Astrophysics Data System (ADS)

    Pickett, Matthew D.; Medeiros-Ribeiro, Gilberto; Williams, R. Stanley

    2013-02-01

    The Hodgkin-Huxley model for action potential generation in biological axons is central for understanding the computational capability of the nervous system and emulating its functionality. Owing to the historical success of silicon complementary metal-oxide-semiconductors, spike-based computing is primarily confined to software simulations and specialized analogue metal-oxide-semiconductor field-effect transistor circuits. However, there is interest in constructing physical systems that emulate biological functionality more directly, with the goal of improving efficiency and scale. The neuristor was proposed as an electronic device with properties similar to the Hodgkin-Huxley axon, but previous implementations were not scalable. Here we demonstrate a neuristor built using two nanoscale Mott memristors, dynamical devices that exhibit transient memory and negative differential resistance arising from an insulating-to-conducting phase transition driven by Joule heating. This neuristor exhibits the important neural functions of all-or-nothing spiking with signal gain and diverse periodic spiking, using materials and structures that are amenable to extremely high-density integration with or without silicon transistors.

  9. The Built Environment Is a Microbial Wasteland

    PubMed Central

    2016-01-01

    ABSTRACT Humanity’s transition from the outdoor environment to the built environment (BE) has reduced our exposure to microbial diversity. The relative importance of factors that contribute to the composition of human-dominated BE microbial communities remains largely unknown. In their article in this issue, Chase and colleagues (J. Chase, J. Fouquier, M. Zare, D. L. Sonderegger, R. Knight, S. T. Kelley, J. Siegel, and J. G. Caporaso, mSystems 1(2):e00022-16, 2016, http://dx.doi.org/10.1128/mSystems.00022-16) present an office building study in which they controlled for environmental factors, geography, surface material, sampling location, and human interaction type. They found that surface location and geography were the strongest factors contributing to microbial community structure, while surface material had little effect. Even in the absence of direct human interaction, BE surfaces were composed of 25 to 30% human skin-associated taxa. The authors demonstrate how technical variation across sequencing runs is a major issue, especially in BE work, where the biomass is often low and the potential for PCR contaminants is high. Overall, the authors conclude that BE surfaces are desert-like environments where microbes passively accumulate. PMID:27832216

  10. Modern bioinformatics meets traditional Chinese medicine.

    PubMed

    Gu, Peiqin; Chen, Huajun

    2014-11-01

    Traditional Chinese medicine (TCM) is gaining increasing attention with the emergence of integrative medicine and personalized medicine, characterized by pattern differentiation on individual variance and treatments based on natural herbal synergism. Investigating the effectiveness and safety of the potential mechanisms of TCM and the combination principles of drug therapies will bridge the cultural gap with Western medicine and improve the development of integrative medicine. Dealing with rapidly growing amounts of biomedical data and their heterogeneous nature are two important tasks among modern biomedical communities. Bioinformatics, as an emerging interdisciplinary field of computer science and biology, has become a useful tool for easing the data deluge pressure by automating the computation processes with informatics methods. Using these methods to retrieve, store and analyze the biomedical data can effectively reveal the associated knowledge hidden in the data, and thus promote the discovery of integrated information. Recently, these techniques of bioinformatics have been used for facilitating the interactional effects of both Western medicine and TCM. The analysis of TCM data using computational technologies provides biological evidence for the basic understanding of TCM mechanisms, safety and efficacy of TCM treatments. At the same time, the carrier and targets associated with TCM remedies can inspire the rethinking of modern drug development. This review summarizes the significant achievements of applying bioinformatics techniques to many aspects of the research in TCM, such as analysis of TCM-related '-omics' data and techniques for analyzing biological processes and pharmaceutical mechanisms of TCM, which show potential for bringing new insights to both fields. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  11. Translational Bioinformatics Approaches to Drug Development

    PubMed Central

    Readhead, Ben; Dudley, Joel

    2013-01-01

    Significance A majority of therapeutic interventions occur late in the pathological process, when treatment outcome can be less predictable and effective, highlighting the need for new precise and preventive therapeutic development strategies that consider genomic and environmental context. Translational bioinformatics is well positioned to contribute to the many challenges inherent in bridging this gap between our current reactive methods of healthcare delivery and the intent of precision medicine, particularly in the areas of drug development, which forms the focus of this review. Recent Advances A variety of powerful informatics methods for organizing and leveraging the vast wealth of available molecular measurements available for a broad range of disease contexts have recently emerged. These include methods for data driven disease classification, drug repositioning, identification of disease biomarkers, and the creation of disease network models, each with significant impacts on drug development approaches. Critical Issues An important bottleneck in the application of bioinformatics methods in translational research is the lack of investigators who are versed in both biomedical domains and informatics. Efforts to nurture both sets of competencies within individuals and to increase interfield visibility will help to accelerate the adoption and increased application of bioinformatics in translational research. Future Directions It is possible to construct predictive, multiscale network models of disease by integrating genotype, gene expression, clinical traits, and other multiscale measures using causal network inference methods. This can enable the identification of the “key drivers” of pathology, which may represent novel therapeutic targets or biomarker candidates that play a more direct role in the etiology of disease. PMID:24527359

  12. 2016 update on APBioNet's annual international conference on bioinformatics (InCoB).

    PubMed

    Schönbach, Christian; Verma, Chandra; Wee, Lawrence Jin Kiat; Bond, Peter John; Ranganathan, Shoba

    2016-12-22

    Since its inception in 2002, InCoB has become one of the largest annual bioinformatics conferences in the Asia-Pacific region, with attendance ranging between 150 and 250 delegates depending on the venue location. InCoB 2016 in Singapore was attended by almost 220 delegates. This year, sessions on structural bioinformatics, sequence and sequencing, and next-generation sequencing fielded the highest number of oral presentations. Forty-four out of 96 oral presentations were associated with an accepted manuscript in supplemental issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics or BMC Systems Biology. Articles with a genomics focus are reviewed in this editorial. Next year's InCoB will be held in Shenzhen, China from September 20 to 22, 2017.

  13. Critical Issues in Bioinformatics and Computing

    PubMed Central

    Kesh, Someswa; Raghupathi, Wullianallur

    2004-01-01

    This article provides an overview of the field of bioinformatics and its implications for the various participants. Next-generation issues facing developers (programmers), users (molecular biologists), and the general public (patients) who would benefit from the potential applications are identified. The goal is to create awareness and debate on the opportunities (such as career paths) and the challenges such as privacy that arise. A triad model of the participants' roles and responsibilities is presented along with the identification of the challenges and possible solutions. PMID:18066389

  14. Translational Bioinformatics: Past, Present, and Future

    PubMed Central

    Tenenbaum, Jessica D.

    2016-01-01

    Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline’s brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field. PMID:26876718

  15. Microbial bioinformatics for food safety and production

    PubMed Central

    Alkema, Wynand; Boekhorst, Jos; Wels, Michiel

    2016-01-01

    In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production traditionally was a trial-and-error approach inspired by expert knowledge of the fermentation process. Current developments in high-throughput ‘omics’ technologies allow developing more rational approaches to improve fermentation processes both from the food functionality as well as from the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches to improve various aspects of the microbial production of fermented food products and food safety. PMID:26082168

  16. Microbial bioinformatics for food safety and production.

    PubMed

    Alkema, Wynand; Boekhorst, Jos; Wels, Michiel; van Hijum, Sacha A F T

    2016-03-01

    In the production of fermented foods, microbes play an important role. Optimization of fermentation processes or starter culture production traditionally was a trial-and-error approach inspired by expert knowledge of the fermentation process. Current developments in high-throughput 'omics' technologies allow developing more rational approaches to improve fermentation processes both from the food functionality as well as from the food safety perspective. Here, the authors thematically review typical bioinformatics techniques and approaches to improve various aspects of the microbial production of fermented food products and food safety.

  17. Multiobjective optimization in bioinformatics and computational biology.

    PubMed

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
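    The central concept of the review, Pareto dominance, can be sketched in a few lines (a generic illustration, not code from the paper): a solution dominates another if it is no worse in every objective and strictly better in at least one, and the set of nondominated solutions forms the Pareto front.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

if __name__ == "__main__":
    # e.g. (prediction error, model complexity) pairs, both to be minimized --
    # a typical bi-objective trade-off in bioinformatics model selection
    pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
    print(pareto_front(pts))  # -> [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5); the three survivors are mutually incomparable, which is exactly what makes multiple objectives give a *set* of optimal trade-offs rather than a single optimum.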

  18. Bioinformatics for the synthetic biology of natural products: integrating across the Design–Build–Test cycle

    PubMed Central

    Currin, Andrew; Jervis, Adrian J.; Rattray, Nicholas J. W.; Swainston, Neil; Yan, Cunyu; Breitling, Rainer

    2016-01-01

    Covering: 2000 to 2016 Progress in synthetic biology is enabled by powerful bioinformatics tools allowing the integration of the design, build and test stages of the biological engineering cycle. In this review we illustrate how this integration can be achieved, with a particular focus on natural products discovery and production. Bioinformatics tools for the DESIGN and BUILD stages include tools for the selection, synthesis, assembly and optimization of parts (enzymes and regulatory elements), devices (pathways) and systems (chassis). TEST tools include those for screening, identification and quantification of metabolites for rapid prototyping. The main advantages and limitations of these tools as well as their interoperability capabilities are highlighted. PMID:27185383

  19. Biogem: an effective tool-based approach for scaling up open source software development in bioinformatics.

    PubMed

    Bonnal, Raoul J P; Aerts, Jan; Githinji, George; Goto, Naohisa; MacLean, Dan; Miller, Chase A; Mishima, Hiroyuki; Pagani, Massimiliano; Ramirez-Gonzalez, Ricardo; Smant, Geert; Strozzi, Francesco; Syme, Rob; Vos, Rutger; Wennblom, Trevor J; Woodcroft, Ben J; Katayama, Toshiaki; Prins, Pjotr

    2012-04-01

    Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software generator, tools and tight web integration, is an improved general model for scaling up collaborative open source software development in bioinformatics. Biogem and its modules are free and open source software. Biogem runs on all systems that support recent versions of Ruby, including Linux, Mac OS X and Windows. Further information is available at http://www.biogems.info, and a tutorial at http://www.biogems.info/howto.html. Contact: bonnal@ingm.org.

  20. A novel tool for assessing and summarizing the built environment

    PubMed Central

    2012-01-01

    Background A growing corpus of research focuses on assessing the quality of the local built environment and also examining the relationship between the built environment and health outcomes and indicators in communities. However, there is a lack of research presenting a highly resolved, systematic, and comprehensive spatial approach to assessing the built environment over a large geographic extent. In this paper, we contribute to the built environment literature by describing a tool used to assess the residential built environment at the tax parcel-level, as well as a methodology for summarizing the data into meaningful indices for linkages with health data. Methods A database containing residential built environment variables was constructed using the existing body of literature, as well as input from local community partners. During the summer of 2008, a team of trained assessors conducted an on-foot, curb-side assessment of approximately 17,000 tax parcels in Durham, North Carolina, evaluating the built environment on over 80 variables using handheld Global Positioning System (GPS) devices. The exercise was repeated again in the summer of 2011 over a larger geographic area that included roughly 30,700 tax parcels; summary data presented here are from the 2008 assessment. Results Built environment data were combined with Durham crime data and tax assessor data in order to construct seven built environment indices. These indices were aggregated to US Census blocks, as well as to primary adjacency communities (PACs) and secondary adjacency communities (SACs) which better described the larger neighborhood context experienced by local residents. Results were disseminated to community members, public health professionals, and government officials. Conclusions The assessment tool described is both easily-replicable and comprehensive in design. Furthermore, our construction of PACs and SACs introduces a novel concept to approximate varying scales of community and

  1. Principles of As-Built Engineering

    SciTech Connect

    Dolin, R.M.; Hefele, J.

    1996-11-01

    As-Built Engineering is a product realization methodology founded on the notion that life-cycle engineering should be based on what is actually produced and not on what is nominally designed. As-Built Engineering is a way of thinking about the product realization process that enables customization in mass production environments. It questions the relevance of nominal-based methods of engineering and the role that tolerancing plays in product realization. As-Built Engineering recognizes that there will always be errors associated with manufacturing that cannot be controlled and therefore need to be captured in order to fully characterize each individual product's unique attributes. One benefit of As-Built Engineering is the ability to provide actual product information to designers and analysts, enabling them to verify their assumptions using actual part and assembly data. Another benefit is the ability to optimize new and re-engineered assemblies.

  2. Built Environment Analysis Tool: April 2013

    SciTech Connect

    Porter, C.

    2013-05-01

    This documentation describes the development of the Built Environment Analysis Tool, which was created to evaluate the effects of built environment scenarios on transportation energy and greenhouse gas (GHG) emissions. It also provides guidance on how to apply the tool.

  3. Discovery of novel xylosides in co-culture of basidiomycetes Trametes versicolor and Ganoderma applanatum by integrated metabolomics and bioinformatics

    NASA Astrophysics Data System (ADS)

    Yao, Lu; Zhu, Li-Ping; Xu, Xiao-Yan; Tan, Ling-Ling; Sadilek, Martin; Fan, Huan; Hu, Bo; Shen, Xiao-Ting; Yang, Jie; Qiao, Bin; Yang, Song

    2016-09-01

    Transcriptomic analysis of cultured fungi suggests that many genes for secondary metabolite synthesis are presumably silent under standard laboratory conditions. In order to investigate the expression of silent genes in symbiotic systems, 136 fungi-fungi symbiotic systems were built up by co-culturing seventeen basidiomycetes, among which the co-culture of Trametes versicolor and Ganoderma applanatum demonstrated the strongest coloration of confrontation zones. Metabolomics study of this co-culture discovered that sixty-two features were either newly synthesized or highly produced in the co-culture compared with individual cultures. Molecular network analysis highlighted a subnetwork including two novel xylosides (compounds 2 and 3). Compound 2 was further identified as N-(4-methoxyphenyl)formamide 2-O-β-D-xyloside and was revealed to have the potential to enhance the cell viability of the human immortalized bronchial epithelial cell line Beas-2B. Moreover, bioinformatics and transcriptional analysis of T. versicolor revealed a potential candidate gene (GI: 636605689) encoding xylosyltransferases for xylosylation. Additionally, 3-phenyllactic acid and orsellinic acid were detected for the first time in G. applanatum, which may be ascribed to a response against T. versicolor stress. In general, the described co-culture platform provides a powerful tool to discover novel metabolites and helps gain insights into the mechanism of silent gene activation in fungal defense.
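    The screen for features "newly synthesized or highly produced in the co-culture" can be sketched as a simple fold-change filter over feature intensities. The feature names, intensity values, and threshold below are hypothetical; the study's actual metabolomics pipeline is considerably more involved:

```python
def induced_features(coculture, mono_a, mono_b, fold=2.0):
    """Flag features whose co-culture intensity exceeds `fold` times the
    highest intensity seen in either monoculture (absent = 0.0)."""
    hits = []
    for feature, intensity in coculture.items():
        baseline = max(mono_a.get(feature, 0.0), mono_b.get(feature, 0.0))
        if baseline == 0.0 and intensity > 0.0:
            hits.append((feature, "new"))          # only present in co-culture
        elif baseline > 0.0 and intensity >= fold * baseline:
            hits.append((feature, "upregulated"))  # strongly overproduced
    return hits

if __name__ == "__main__":
    # hypothetical intensities for the co-culture and the two monocultures
    co  = {"xyloside_2": 8.0, "orsellinic_acid": 3.0, "housekeeping": 1.1}
    t_v = {"housekeeping": 1.0}                       # T. versicolor alone
    g_a = {"orsellinic_acid": 1.0, "housekeeping": 1.0}  # G. applanatum alone
    print(induced_features(co, t_v, g_a))
    # -> [('xyloside_2', 'new'), ('orsellinic_acid', 'upregulated')]
```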

  4. Discovery of novel xylosides in co-culture of basidiomycetes Trametes versicolor and Ganoderma applanatum by integrated metabolomics and bioinformatics

    PubMed Central

    Yao, Lu; Zhu, Li-Ping; Xu, Xiao-Yan; Tan, Ling-Ling; Sadilek, Martin; Fan, Huan; Hu, Bo; Shen, Xiao-Ting; Yang, Jie; Qiao, Bin; Yang, Song

    2016-01-01

    Transcriptomic analysis of cultured fungi suggests that many genes for secondary metabolite synthesis are presumably silent under standard laboratory conditions. In order to investigate the expression of silent genes in symbiotic systems, 136 fungi-fungi symbiotic systems were built up by co-culturing seventeen basidiomycetes, among which the co-culture of Trametes versicolor and Ganoderma applanatum demonstrated the strongest coloration of confrontation zones. Metabolomics study of this co-culture discovered that sixty-two features were either newly synthesized or highly produced in the co-culture compared with individual cultures. Molecular network analysis highlighted a subnetwork including two novel xylosides (compounds 2 and 3). Compound 2 was further identified as N-(4-methoxyphenyl)formamide 2-O-β-D-xyloside and was revealed to have the potential to enhance the cell viability of the human immortalized bronchial epithelial cell line Beas-2B. Moreover, bioinformatics and transcriptional analysis of T. versicolor revealed a potential candidate gene (GI: 636605689) encoding xylosyltransferases for xylosylation. Additionally, 3-phenyllactic acid and orsellinic acid were detected for the first time in G. applanatum, which may be ascribed to a response against T. versicolor stress. In general, the described co-culture platform provides a powerful tool to discover novel metabolites and helps gain insights into the mechanism of silent gene activation in fungal defense. PMID:27616058

  5. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    ERIC Educational Resources Information Center

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  6. Teaching the ABCs of bioinformatics: a brief introduction to the Applied Bioinformatics Course

    PubMed Central

    2014-01-01

    With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course ‘Applied Bioinformatics Course’ (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course to teach students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. With a brief introduction to the background of the course, detailed information about the teaching strategies of the course are outlined in the ‘How to teach’ section. The contents of the course are briefly described in the ‘What to teach’ section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education both in local and international universities. PMID:24008274

  7. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    ERIC Educational Resources Information Center

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  8. The Built Environment Predicts Observed Physical Activity

    PubMed Central

    Kelly, Cheryl; Wilson, Jeffrey S.; Schootman, Mario; Clennin, Morgan; Baker, Elizabeth A.; Miller, Douglas K.

    2014-01-01

    Background: In order to improve our understanding of the relationship between the built environment and physical activity, it is important to identify associations between specific geographic characteristics and physical activity behaviors. Purpose: Examine relationships between observed physical activity behavior and measures of the built environment collected on 291 street segments in Indianapolis and St. Louis. Methods: Street segments were selected using a stratified geographic sampling design to ensure representation of neighborhoods with different land use and socioeconomic characteristics. Characteristics of the built environment on street segments were audited using two methods: in-person field audits and audits based on interpretation of Google Street View imagery, with each method blinded to results from the other. Segments were dichotomized as having a particular characteristic (e.g., sidewalk present or not) based on the two auditing methods separately. Counts of individuals engaged in different forms of physical activity on each segment were assessed using direct observation. Non-parametric statistics were used to compare counts of physically active individuals on each segment across built environment characteristics. Results: Counts of individuals engaged in physical activity were significantly higher on segments with mixed land use or all non-residential land use, and on segments with pedestrian infrastructure (e.g., crosswalks and sidewalks) and public transit. Conclusion: Several micro-level built environment characteristics were associated with physical activity. These data provide support for theories that suggest changing the built environment and related policies may encourage more physical activity. PMID:24904916

  9. Databases and Bioinformatics Tools for the Study of DNA Repair

    PubMed Central

    Milanowska, Kaja; Rother, Kristian; Bujnicki, Janusz M.

    2011-01-01

    DNA is continuously exposed to many different damaging agents such as environmental chemicals, UV light, ionizing radiation, and reactive cellular metabolites. DNA lesions can result in different phenotypical consequences ranging from a number of diseases, including cancer, to cellular malfunction, cell death, or aging. To counteract the deleterious effects of DNA damage, cells have developed various repair systems, including biochemical pathways responsible for the removal of single-strand lesions such as base excision repair (BER) and nucleotide excision repair (NER) or specialized polymerases temporarily taking over lesion-arrested DNA polymerases during the S phase in translesion synthesis (TLS). There are also other mechanisms of DNA repair such as homologous recombination repair (HRR), nonhomologous end-joining repair (NHEJ), or DNA damage response system (DDR). This paper reviews bioinformatics resources specialized in disseminating information about DNA repair pathways, proteins involved in repair mechanisms, damaging agents, and DNA lesions. PMID:22091405

  10. Using Built Environmental Observation Tools: Comparing Two Methods of Creating a Measure of the Built Environment

    PubMed Central

    Keast, Erin M.; Carlson, Nichole E.; Chapman, Nancy J.; Michael, Yvonne L.

    2011-01-01

    Purpose Identify an efficient method of creating a comprehensive and concise measure of the built environment integrating data from geographic information systems (GIS) and the Senior Walking Environmental Assessment Tool (SWEAT). Design Cross-sectional study using a population sample. Setting Eight municipally defined neighborhoods in Portland, Oregon. Subjects Adult residents (N = 120) of audited segments (N = 363). Measures We described built environmental features using SWEAT audits and GIS data. We obtained information on walking behaviors and potential confounders through in-person interviews. Analysis We created two sets of environmental measures, one based on the conceptual framework used to develop SWEAT and another using principal component analysis (PCA). Each measure’s association with walking for transportation and exercise was then assessed and compared using logistic regression. Results A priori measures (destinations, safety, aesthetics, and functionality) and PCA measures (accessibility, comfort/safety, maintenance, and pleasantness) were analogous in conceptual meaning and had similar associations with walking. Walking for transportation was associated with destination accessibility and functional elements, whereas walking for exercise was associated with maintenance of the walking area and protection from traffic. However, only PCA measures consistently reached statistical significance. Conclusion The measures created with PCA were more parsimonious than those created a priori. Performing PCA is an efficient method of combining and scoring SWEAT and GIS data. PMID:20465151
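    The a priori index construction the abstract compares against PCA (combining several audited variables into one composite score per segment) can be sketched with equal-weight averaging of standardized variables. Variable names and data are hypothetical; in the study, PCA-derived loadings would replace the equal weights used here:

```python
from statistics import mean, pstdev

def zscores(values):
    """Standardize a list of raw audit scores to mean 0, SD 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s if s else 0.0 for v in values]

def composite_index(segments, variables):
    """Equal-weight a priori index: per-segment average of z-scored variables.
    (PCA loadings, as in the SWEAT/GIS comparison, would weight these instead.)"""
    cols = {var: zscores([seg[var] for seg in segments]) for var in variables}
    return [mean(cols[var][i] for var in variables) for i in range(len(segments))]

if __name__ == "__main__":
    segments = [  # hypothetical audited street segments
        {"sidewalk": 1, "aesthetics": 4, "traffic_safety": 3},
        {"sidewalk": 0, "aesthetics": 2, "traffic_safety": 1},
        {"sidewalk": 1, "aesthetics": 5, "traffic_safety": 4},
    ]
    idx = composite_index(segments, ["sidewalk", "aesthetics", "traffic_safety"])
    print(idx)  # third segment scores highest, second lowest
```

Because each column is z-scored before averaging, variables measured on different audit scales contribute comparably to the index.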

  11. Bioinformatics for cancer immunology and immunotherapy.

    PubMed

    Charoentong, Pornpimol; Angelova, Mihaela; Efremova, Mirjana; Gallasch, Ralf; Hackl, Hubert; Galon, Jerome; Trajanoski, Zlatko

    2012-11-01

    Recent mechanistic insights obtained from preclinical studies and the approval of the first immunotherapies has motivated increasing number of academic investigators and pharmaceutical/biotech companies to further elucidate the role of immunity in tumor pathogenesis and to reconsider the role of immunotherapy. Additionally, technological advances (e.g., next-generation sequencing) are providing unprecedented opportunities to draw a comprehensive picture of the tumor genomics landscape and ultimately enable individualized treatment. However, the increasing complexity of the generated data and the plethora of bioinformatics methods and tools pose considerable challenges to both tumor immunologists and clinical oncologists. In this review, we describe current concepts and future challenges for the management and analysis of data for cancer immunology and immunotherapy. We first highlight publicly available databases with specific focus on cancer immunology including databases for somatic mutations and epitope databases. We then give an overview of the bioinformatics methods for the analysis of next-generation sequencing data (whole-genome and exome sequencing), epitope prediction tools as well as methods for integrative data analysis and network modeling. Mathematical models are powerful tools that can predict and explain important patterns in the genetic and clinical progression of cancer. Therefore, a survey of mathematical models for tumor evolution and tumor-immune cell interaction is included. Finally, we discuss future challenges for individualized immunotherapy and suggest how a combined computational/experimental approaches can lead to new insights into the molecular mechanisms of cancer, improved diagnosis, and prognosis of the disease and pinpoint novel therapeutic targets.

  12. OpenHelix: bioinformatics education outside of a different box

    PubMed Central

    Mangan, Mary E.; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C.

    2010-01-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review. PMID:20798181

  13. OpenHelix: bioinformatics education outside of a different box.

    PubMed

    Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C

    2010-11-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review.

  14. Translational bioinformatics: linking the molecular world to the clinical world.

    PubMed

    Altman, R B

    2012-06-01

    Translational bioinformatics represents the union of translational medicine and bioinformatics. Translational medicine moves basic biological discoveries from the research bench into the patient-care setting and uses clinical observations to inform basic biology. It focuses on patient care, including the creation of new diagnostics, prognostics, prevention strategies, and therapies based on biological discoveries. Bioinformatics involves algorithms to represent, store, and analyze basic biological data, including DNA sequence, RNA expression, and protein and small-molecule abundance within cells. Translational bioinformatics spans these two fields; it involves the development of algorithms to analyze basic molecular and cellular data with an explicit goal of affecting clinical care.

  15. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  16. An Interactive Multimedia Learning Environment for VLSI Built with COSMOS

    ERIC Educational Resources Information Center

    Angelides, Marios C.; Agius, Harry W.

    2002-01-01

    This paper presents Bigger Bits, an interactive multimedia learning environment that teaches students about VLSI within the context of computer electronics. The system was built with COSMOS (Content Oriented semantic Modelling Overlay Scheme), which is a modelling scheme that we developed for enabling the semantic content of multimedia to be used…

  17. The Foundations of Lifelong Health Are Built in Early Childhood

    ERIC Educational Resources Information Center

    National Forum on Early Childhood Policy and Programs, 2010

    2010-01-01

    A vital and productive society with a prosperous and sustainable future is built on a foundation of healthy child development. Health in the earliest years--beginning with the future mother's well-being before she becomes pregnant--lays the groundwork for a lifetime of vitality. When developing biological systems are strengthened by positive early…

  18. Improvement in the determination of HIV-1 tropism using the V3 gene sequence and a combination of bioinformatic tools.

    PubMed

    Chueca, Natalia; Garrido, Carolina; Alvarez, Marta; Poveda, Eva; de Dios Luna, Juan; Zahonero, Natalia; Hernández-Quero, José; Soriano, Vicente; Maroto, Carmen; de Mendoza, Carmen; García, Federico

    2009-05-01

    Assessment of HIV tropism using bioinformatic tools based on V3 sequences correlates poorly with results provided by phenotypic tropism assays, particularly for recognizing X4 viruses. This may represent an obstacle for the use of CCR5 antagonists. An algorithm combining several bioinformatic tools might improve the correlation with phenotypic tropism results. A total of 200 V3 sequences from HIV-1 subtype B, available in several databases with known phenotypic tropism results, were used to evaluate the sensitivity and specificity of seven different bioinformatic tools (PSSM, SVM, C4.5 decision tree generator and C4.5, PART, Charge Rule, and Geno2pheno). The best predictive bioinformatic tools were identified, and a model combining several of these was built. Using the 200 reference sequences, SVM and geno2pheno showed the highest sensitivity for detecting X4 viruses (98.8% and 93.7%, respectively); however, their specificity was relatively low (62.5% and 86.6%, respectively). For R5 viruses, PSSM and C4.5 gave the same results and outperformed other bioinformatic tools (95.7% sensitivity, 82% specificity). When results from three out of these four tools were concordant, the sensitivity and specificity, taking as reference the results from phenotypic tropism assays, were over 90% in predicting either R5 or X4 viruses (AUC: 0.9701; 95% CI: 0.9358-0.9889). An algorithm combining four distinct bioinformatic tools (SVM, geno2pheno, PSSM and C4.5) improves the genotypic prediction of HIV tropism, and merits further evaluation, as it might prove useful as a screening strategy in clinical practice. Copyright 2009 Wiley-Liss, Inc.
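
    The three-of-four concordance rule described above can be sketched as a simple voting function. This is an illustrative sketch only, not the authors' implementation; the tool names and prediction values in the example dictionary are assumptions for demonstration.

    ```python
    # Sketch of a concordance rule over per-tool tropism calls ("R5" or "X4").
    # Tool names and the example predictions are illustrative.

    def call_tropism(predictions, required=3):
        """Return a consensus call when at least `required` tools agree, else None."""
        votes = list(predictions.values())
        for label in ("R5", "X4"):
            if votes.count(label) >= required:
                return label
        return None  # discordant: no consensus reached

    calls = {"SVM": "X4", "geno2pheno": "X4", "PSSM": "X4", "C4.5": "R5"}
    print(call_tropism(calls))  # three of four tools agree -> "X4"
    ```

    A sample that splits 2-2 between the tools would return None, which in a screening setting could be routed to a phenotypic assay for resolution.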

  19. Probing the diversity of healthy oral microbiome with bioinformatics approaches

    PubMed Central

    Moon, Ji-Hoi; Lee, Jae-Hyung

    2016-01-01

    The human oral cavity contains a highly personalized microbiome essential to maintaining health, but capable of causing oral and systemic diseases. Thus, an in-depth definition of “healthy oral microbiome” is critical to understanding variations in disease states from preclinical conditions, and disease onset through progressive states of disease. With rapid advances in DNA sequencing and analytical technologies, population-based studies have documented the range and diversity of both taxonomic compositions and functional potentials observed in the oral microbiome in healthy individuals. Besides factors specific to the host, such as age and race/ethnicity, environmental factors also appear to contribute to the variability of the healthy oral microbiome. Here, we review bioinformatic techniques for metagenomic datasets, including their strengths and limitations. In addition, we summarize the interpersonal and intrapersonal diversity of the oral microbiome, taking into consideration the recent large-scale and longitudinal studies, including the Human Microbiome Project. PMID:27697111
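
    One common way such metagenomic studies quantify the intra- and interpersonal diversity mentioned above is with the Shannon index over taxon abundances. A minimal sketch, with hypothetical taxon counts for two oral samples:

    ```python
    import math

    def shannon_index(counts):
        """Shannon diversity H' = -sum(p_i * ln p_i) over taxon abundance counts."""
        total = sum(counts)
        return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

    # Two hypothetical oral samples, each with counts for four taxa:
    even = [25, 25, 25, 25]    # evenly distributed community
    skewed = [97, 1, 1, 1]     # community dominated by a single taxon
    print(shannon_index(even) > shannon_index(skewed))  # True: evenness raises H'
    ```

    The index rewards both richness (number of taxa) and evenness, which is why the dominated sample scores lower despite containing the same four taxa.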

  20. Bioinformatic and biometric methods in plant morphology

    PubMed Central

    Punyasena, Surangi W.; Smith, Selena Y.

    2014-01-01

    Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.

  1. Bioinformatics and the Politics of Innovation in the Life Sciences

    PubMed Central

    Zhou, Yinhua; Datta, Saheli; Salter, Charlotte

    2016-01-01

    The governments of China, India, and the United Kingdom are unanimous in their belief that bioinformatics should supply the link between basic life sciences research and its translation into health benefits for the population and the economy. Yet at the same time, as ambitious states vying for position in the future global bioeconomy they differ considerably in the strategies adopted in pursuit of this goal. At the heart of these differences lies the interaction between epistemic change within the scientific community itself and the apparatus of the state. Drawing on desk-based research and thirty-two interviews with scientists and policy makers in the three countries, this article analyzes the politics that shape this interaction. From this analysis emerges an understanding of the variable capacities of different kinds of states and political systems to work with science in harnessing the potential of new epistemic territories in global life sciences innovation. PMID:27546935

  2. Probing the diversity of healthy oral microbiome with bioinformatics approaches.

    PubMed

    Moon, Ji-Hoi; Lee, Jae-Hyung

    2016-12-01

    The human oral cavity contains a highly personalized microbiome essential to maintaining health, but capable of causing oral and systemic diseases. Thus, an in-depth definition of "healthy oral microbiome" is critical to understanding variations in disease states from preclinical conditions, and disease onset through progressive states of disease. With rapid advances in DNA sequencing and analytical technologies, population-based studies have documented the range and diversity of both taxonomic compositions and functional potentials observed in the oral microbiome in healthy individuals. Besides factors specific to the host, such as age and race/ethnicity, environmental factors also appear to contribute to the variability of the healthy oral microbiome. Here, we review bioinformatic techniques for metagenomic datasets, including their strengths and limitations. In addition, we summarize the interpersonal and intrapersonal diversity of the oral microbiome, taking into consideration the recent large-scale and longitudinal studies, including the Human Microbiome Project. [BMB Reports 2016; 49(12): 662-670].

  3. Achievements and challenges in structural bioinformatics and computational biophysics

    PubMed Central

    Samish, Ilan; Bourne, Philip E.; Najmanovich, Rafael J.

    2015-01-01

    Motivation: The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years. These developments are captured annually at the 3DSIG meeting, upon which this article reflects. Results: An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation with other computational and experimental results. Conclusion: The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. Contact: Rafael.Najmanovich@USherbrooke.ca PMID:25488929

  4. Achievements and challenges in structural bioinformatics and computational biophysics.

    PubMed

    Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J

    2015-01-01

    The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years. These developments are captured annually at the 3DSIG meeting, upon which this article reflects. An increase in the accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation with other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.

  5. Quantum Bio-Informatics II From Quantum Information to Bio-Informatics

    NASA Astrophysics Data System (ADS)

    Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori

    2009-02-01

    The problem of quantum-like representation in economy, cognitive science, and genetics / L. Accardi, A. Khrennikov and M. Ohya -- Chaotic behavior observed in linear dynamics / M. Asano, T. Yamamoto and Y. Togawa -- Complete m-level quantum teleportation based on Kossakowski-Ohya scheme / M. Asano, M. Ohya and Y. Tanaka -- Towards quantum cybernetics: optimal feedback control in quantum bio-informatics / V. P. Belavkin -- Quantum entanglement and circulant states / D. Chruściński -- The compound Fock space and its application in brain models / K. -H. Fichtner and W. Freudenberg -- Characterisation of beam splitters / L. Fichtner and M. Gäbler -- Application of entropic chaos degree to a combined quantum baker's map / K. Inoue, M. Ohya and I. V. Volovich -- On quantum algorithm for multiple alignment of amino acid sequences / S. Iriyama and M. Ohya -- Quantum-like models for decision making in psychology and cognitive science / A. Khrennikov -- On completely positive non-Markovian evolution of a d-level system / A. Kossakowski and R. Rebolledo -- Measures of entanglement - a Hilbert space approach / W. A. Majewski -- Some characterizations of PPT states and their relation / T. Matsuoka -- On the dynamics of entanglement and characterization of entangling properties of quantum evolutions / M. Michalski -- Perspective from micro-macro duality - towards non-perturbative renormalization scheme / I. Ojima -- A simple symmetric algorithm using a likeness with Introns behavior in RNA sequences / M. Regoli -- Some aspects of quadratic generalized white noise functionals / Si Si and T. Hida -- Analysis of several social mobility data using measure of departure from symmetry / K. Tahata ... [et al.] -- Time in physics and life science / I. V. Volovich -- Note on entropies in quantum processes / N. Watanabe -- Basics of molecular simulation and its application to biomolecules / T. Ando and I. Yamato -- Theory of proton-induced superionic conduction in hydrogen-bonded systems

  6. Wrapping and interoperating bioinformatics resources using CORBA.

    PubMed

    Stevens, R; Miller, C

    2000-02-01

    Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.

  7. Bioinformatics and molecular modeling in glycobiology

    PubMed Central

    Schloissnig, Siegfried

    2010-01-01

    The field of glycobiology is concerned with the study of the structure, properties, and biological functions of the family of biomolecules called carbohydrates. Bioinformatics for glycobiology is a particularly challenging field, because carbohydrates exhibit a high structural diversity and their chains are often branched. Significant improvements in experimental analytical methods over recent years have led to a tremendous increase in the amount of carbohydrate structure data generated. Consequently, the availability of databases and tools to store, retrieve and analyze these data in an efficient way is of fundamental importance to progress in glycobiology. In this review, the various graphical representations and sequence formats of carbohydrates are introduced, and an overview of newly developed databases, the latest developments in sequence alignment and data mining, and tools to support experimental glycan analysis are presented. Finally, the field of structural glycoinformatics and molecular modeling of carbohydrates, glycoproteins, and protein–carbohydrate interaction are reviewed. PMID:20364395

  8. Rapid Bioinformatic Identification of Thermostabilizing Mutations

    PubMed Central

    Sauer, David B.; Karpowich, Nathan K.; Song, Jin Mei; Wang, Da-Neng

    2015-01-01

    Ex vivo stability is a valuable protein characteristic but is laborious to improve experimentally. In addition to biopharmaceutical and industrial applications, stable protein is important for biochemical and structural studies. Taking advantage of the large number of available genomic sequences and growth temperature data, we present two bioinformatic methods to identify a limited set of amino acids or positions that likely underlie thermostability. Because these methods allow thousands of homologs to be examined in silico, they have the advantage of providing both speed and statistical power. Using these methods, we introduced, via mutation, amino acids from thermoadapted homologs into an exemplar mesophilic membrane protein, and demonstrated significantly increased thermostability while preserving protein activity. PMID:26445442
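
    The core idea of mining thermoadapted homologs can be sketched as a comparison of per-position residue frequencies between sequences from thermophilic and mesophilic organisms, flagging residues enriched in the thermophiles as candidate stabilizing mutations. The toy alignment and enrichment threshold below are assumptions for illustration, not the authors' pipeline.

    ```python
    from collections import Counter

    def enriched_residues(thermo_seqs, meso_seqs, min_diff=0.3):
        """Flag aligned positions where a residue is markedly more frequent in
        thermophile-derived sequences than in mesophile-derived ones."""
        hits = []
        for pos in range(len(thermo_seqs[0])):
            thermo_counts = Counter(s[pos] for s in thermo_seqs)
            meso_counts = Counter(s[pos] for s in meso_seqs)
            for aa, n in thermo_counts.items():
                diff = n / len(thermo_seqs) - meso_counts.get(aa, 0) / len(meso_seqs)
                if diff >= min_diff:
                    hits.append((pos, aa, round(diff, 2)))
        return hits

    # Toy pre-aligned sequences (hypothetical):
    thermo = ["PKIG", "PKIG", "PRIG"]
    meso = ["PGLG", "PGIG", "PGLG"]
    print(enriched_residues(thermo, meso))
    ```

    Positions where both groups agree (here, the flanking P and G) produce no candidates; only thermophile-enriched residues at the variable positions are reported, which keeps the list of mutations to test experimentally small.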

  9. Bioinformatics Resources for MicroRNA Discovery

    PubMed Central

    Moore, Alyssa C.; Winkjer, Jonathan S.; Tseng, Tsai-Tien

    2015-01-01

    Biomarker identification is often associated with the diagnosis and evaluation of various diseases. Recently, the role of microRNA (miRNA) has been implicated in the development of diseases, particularly cancer. With the advent of next-generation sequencing, the amount of data on miRNA has increased tremendously in the last decade, requiring new bioinformatics approaches for processing and storing new information. New strategies have been developed in mining these sequencing datasets to allow better understanding toward the actions of miRNAs. As a result, many databases have also been established to disseminate these findings. This review focuses on several curated databases of miRNAs and their targets from both predicted and validated sources. PMID:26819547

  10. The European Bioinformatics Institute's data resources

    PubMed Central

    Brooksbank, Catherine; Camon, Evelyn; Harris, Midori A.; Magrane, Michele; Martin, Maria Jesus; Mulder, Nicola; O'Donovan, Claire; Parkinson, Helen; Tuli, Mary Ann; Apweiler, Rolf; Birney, Ewan; Brazma, Alvis; Henrick, Kim; Lopez, Rodrigo; Stoesser, Guenter; Stoehr, Peter; Cameron, Graham

    2003-01-01

    As the amount of biological data grows, so does the need for biologists to store and access this information in central repositories in a free and unambiguous manner. The European Bioinformatics Institute (EBI) hosts six core databases, which store information on DNA sequences (EMBL-Bank), protein sequences (SWISS-PROT and TrEMBL), protein structure (MSD), whole genomes (Ensembl) and gene expression (ArrayExpress). But just as a cell would be useless if it couldn't transcribe DNA or translate RNA, our resources would be compromised if each existed in isolation. We have therefore developed a range of tools that not only facilitate the deposition and retrieval of biological information, but also allow users to carry out searches that reflect the interconnectedness of biological information. The EBI's databases and tools are all available on our website at www.ebi.ac.uk. PMID:12519944

  11. Survey: Translational Bioinformatics embraces Big Data

    PubMed Central

    Shah, Nigam H.

    2015-01-01

    Summary: We review the latest trends and major developments in translational bioinformatics in the year 2011-2012. Our emphasis is on highlighting the key events in the field and pointing at promising research areas for the future. The key take-home points are: Translational informatics is ready to revolutionize human health and healthcare using large-scale measurements on individuals. Data-centric approaches that compute on massive amounts of data (often called "Big Data") to discover patterns and to make clinically relevant predictions will gain adoption. Research that bridges the latest multimodal measurement technologies with large amounts of electronic healthcare data is increasing, and is where new breakthroughs will occur. PMID:22890354

  12. Combining multiple decisions: applications to bioinformatics

    NASA Astrophysics Data System (ADS)

    Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
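
    The ECOC framework underlying both approaches above can be illustrated with a minimal decoder: each class is assigned a binary codeword, each binary classifier predicts one bit, and the observed bit vector is decoded to the class with the nearest codeword. The codebook and observed bits below are made up for demonstration; neither weighted aggregation nor the probabilistic bit-inversion model from the article is shown.

    ```python
    def ecoc_decode(codebook, bits):
        """Assign the class whose codeword has minimum Hamming distance
        to the observed vector of binary-classifier outputs."""
        def hamming(a, b):
            return sum(x != y for x, y in zip(a, b))
        return min(codebook, key=lambda cls: hamming(codebook[cls], bits))

    # Toy 3-class code over five binary classifiers (illustrative):
    codebook = {
        "class_A": [0, 0, 1, 1, 0],
        "class_B": [1, 0, 0, 1, 1],
        "class_C": [0, 1, 1, 0, 1],
    }
    observed = [1, 0, 0, 0, 1]  # one bit flipped from class_B's codeword
    print(ecoc_decode(codebook, observed))  # nearest codeword -> "class_B"
    ```

    The error-correcting property comes from codeword separation: with well-separated codewords, a single misclassifying binary classifier still leaves the observed vector closest to the true class, as in the flipped-bit example.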

  13. The European Bioinformatics Institute's data resources.

    PubMed

    Brooksbank, Catherine; Cameron, Graham; Thornton, Janet

    2010-01-01

    The wide uptake of next-generation sequencing and other ultra-high throughput technologies by life scientists with a diverse range of interests, spanning fundamental biological research, medicine, agriculture and environmental science, has led to unprecedented growth in the amount of data generated. It has also put the need for unrestricted access to biological data at the centre of biology. The European Bioinformatics Institute (EMBL-EBI) is unique in Europe and is one of only two organisations worldwide providing access to a comprehensive, integrated set of these collections. Here, we describe how the EMBL-EBI's biomolecular databases are evolving to cope with increasing levels of submission, a growing and diversifying user base, and the demand for new types of data. All of the resources described here can be accessed from the EMBL-EBI website: http://www.ebi.ac.uk.

  14. Bioinformatics by Example: From Sequence to Target

    NASA Astrophysics Data System (ADS)

    Kossida, Sophia; Tahri, Nadia; Daizadeh, Iraj

    2002-12-01

    With the completion of the human genome, and the imminent completion of other large-scale sequencing and structure-determination projects, computer-assisted bioscience is poised to become the new paradigm for conducting basic and applied research. The presence of these additional bioinformatics tools stirs great anxiety among experimental researchers (as well as pedagogues), since they are now faced with a wider and deeper body of knowledge spanning differing disciplines (biology, chemistry, physics, mathematics, and computer science). This review targets those individuals who are interested in using computational methods in their teaching or research. By analyzing a real-life, pharmaceutical, multicomponent, target-based example, the reader will experience this fascinating new discipline.

  15. Knowledge from data in the built environment.

    PubMed

    Starkey, Christopher; Garvin, Chris

    2013-08-01

    Data feedback is changing our relationship to the built environment. Both traditional and new sources of data are developing rapidly, compelled by efforts to optimize the performance of human habitats. However, there are many obstacles to the successful implementation of information-centered environments that continue to hinder widespread adoption. This paper identifies these obstacles and challenges and describes emerging data-rich analytic techniques in infrastructure, buildings, and building portfolios. Further, it speculates on the impact that a robust data sphere may have on the built environment and posits that linkages to other data sets may enable paradigm shifts in sustainability and resiliency.

  16. Built Environment Energy Analysis Tool Overview (Presentation)

    SciTech Connect

    Porter, C.

    2013-04-01

    This presentation provides an overview of the Built Environment Energy Analysis Tool, which is designed to assess impacts of future land use/built environment patterns on transportation-related energy use and greenhouse gas (GHG) emissions. The tool can be used to evaluate a range of population distribution and urban design scenarios for 2030 and 2050. This tool was produced as part of the Transportation Energy Futures (TEF) project, a Department of Energy-sponsored multi-agency project initiated to pinpoint underexplored strategies for abating GHGs and reducing petroleum dependence related to transportation.

  17. Bioinformatics: towards new directions for public health.

    PubMed

    Maojo, V; Martin-Sanchez, F

    2004-01-01

    Epidemiologists are reformulating their classical approaches to diseases by considering various issues associated with "omics" areas and technologies. Traditional differences between epidemiology and genetics include background, training, terminologies, study designs and others. Public health and epidemiology are increasingly looking to methodologies and informatics tools, facilitated by the bioinformatics community, for managing genomic information. Our aims are to describe the most important implications of the increasing use of genomic information for public health practice, research and education; to review the contribution of bioinformatics to these issues, in terms of providing the methods and tools needed for processing genetic information from pathogens and patients; and to analyze the research challenges in biomedical informatics related to the need to integrate clinical, environmental and genetic data and the new scenarios arising in public health. We review the literature, Internet resources, and material and reports generated by internal and external research projects. New developments are needed to advance in the study of the interactions between environmental agents and genetic factors involved in the development of diseases. The use of biomarkers, biobanks, and integrated genomic/clinical databases poses serious challenges for informaticians in order to extract useful information and knowledge for public health, biomedical research and healthcare. From an informatics perspective, integrated medical/biological ontologies and new semantic-based models for managing information provide new challenges for research in areas such as genetic epidemiology and the "omics" disciplines, among others. In this regard, there are various ethical, privacy, informed consent and social implications that should be carefully addressed by researchers, practitioners and policy makers.

  18. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    ERIC Educational Resources Information Center

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  19. Is there room for ethics within bioinformatics education?

    PubMed

    Taneri, Bahar

    2011-07-01

    When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.

  20. Assessment of a Bioinformatics across Life Science Curricula Initiative

    ERIC Educational Resources Information Center

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  1. Bioinformatics education dissemination with an evolutionary problem solving perspective.

    PubMed

    Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J

    2010-11-01

    Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education.

  2. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    ERIC Educational Resources Information Center

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  3. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    ERIC Educational Resources Information Center

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  5. Assessment of a Bioinformatics across Life Science Curricula Initiative

    ERIC Educational Resources Information Center

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  6. The microbiome of the built environment and mental health.

    PubMed

    Hoisington, Andrew J; Brenner, Lisa A; Kinney, Kerry A; Postolache, Teodor T; Lowry, Christopher A

    2015-12-17

    The microbiome of the built environment (MoBE) is a relatively new area of study. While some knowledge has been gained regarding impacts of the MoBE on the human microbiome and disease vulnerability, there is little knowledge of the impacts of the MoBE on mental health. Depending on the specific microbial species involved, the transfer of microorganisms from the built environment to occupants' cutaneous or mucosal membranes has the potential to increase or disrupt immunoregulation and/or exaggerate or suppress inflammation. Preclinical evidence highlighting the influence of the microbiota on systemic inflammation supports the assertion that microorganisms, including those originating from the built environment, have the potential to either increase or decrease the risk of inflammation-induced psychiatric conditions and their symptom severity. With advanced understanding of both the ecology of the built environment and its influence on the human microbiome, it may be possible to develop bioinformed strategies for management of the built environment to promote mental health. Here we present a brief summary of microbiome research in both areas and highlight two interdependencies: (1) the effects of the MoBE on the human microbiome and (2) potential opportunities for manipulation of the MoBE in order to improve mental health. In addition, we propose future research directions, including strategies for assessing changes in the microbiome of common areas of built environments shared by multiple human occupants, and associated cohort-level changes in the mental health of those who spend time in the buildings. Overall, research on the MoBE and on the influence of host-associated microorganisms on mental health is advancing at a rapid pace and, if linked, these fields could offer considerable benefit to health and wellness.

  7. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
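    The key property the abstract highlights is CouchDB's schema-free document model: every record is a JSON document keyed by an `_id`, so two gene records need not share the same fields. The sketch below illustrates that model with an in-memory stand-in rather than a live CouchDB server; the field names and gene records are hypothetical illustrations, not geneSmash's actual schema.

    ```python
    import json

    # Schema-free document store: each record is a JSON document keyed by _id.
    # The two documents deliberately carry different fields -- no schema
    # migration is needed, which is the flexibility the paper highlights.
    docs = {
        "TP53": {"_id": "TP53", "symbol": "TP53", "chromosome": "17",
                 "aliases": ["p53", "LFS1"]},
        "EGFR": {"_id": "EGFR", "symbol": "EGFR", "chromosome": "7",
                 "drug_targets": ["gefitinib", "erlotinib"]},
    }

    def get_doc(db, doc_id):
        """Mimic CouchDB's GET /db/doc_id: return the JSON document or None."""
        doc = db.get(doc_id)
        # Round-trip through JSON, as a real HTTP response would.
        return json.loads(json.dumps(doc)) if doc else None

    print(get_doc(docs, "TP53")["aliases"])
    print("drug_targets" in get_doc(docs, "EGFR"))
    ```

    Against a real CouchDB instance the same interaction is a plain HTTP GET/PUT of JSON, which is why such databases pair naturally with web-service access like that offered by geneSmash, drugBase, and HapMap-CN.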

  8. Comparison of Online and Onsite Bioinformatics Instruction for a Fully Online Bioinformatics Master’s Program

    PubMed Central

    Obom, Kristina M.; Cummings, Patrick J.

    2007-01-01

    The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses; however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation. Analysis of student exam performance using three assessments indicates that there was no significant difference in grades earned by students in online and onsite courses. These results suggest that our model for online bioinformatics education provides students with a rigorous course of study that is comparable to onsite instruction and possibly offers a more rigorous course load and more opportunities for participation. PMID:23653816

  9. Comparison of Online and Onsite Bioinformatics Instruction for a Fully Online Bioinformatics Master's Program.

    PubMed

    Obom, Kristina M; Cummings, Patrick J

    2007-01-01

    The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses; however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation. Analysis of student exam performance using three assessments indicates that there was no significant difference in grades earned by students in online and onsite courses. These results suggest that our model for online bioinformatics education provides students with a rigorous course of study that is comparable to onsite instruction and possibly offers a more rigorous course load and more opportunities for participation.

  10. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics data, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high-performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  11. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics data, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high-performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  12. Smart built-in test for nuclear thermal propulsion

    SciTech Connect

    Lombrozo, P.C.

    1992-03-01

    Smart built-in test (BIT) technologies are envisioned for nuclear thermal propulsion spacecraft components which undergo constant irradiation and are therefore unsafe for manual testing. Smart BIT systems of automated/remote type allow component and system tests to be conducted; failure detections are directly followed by reconfiguration of the components affected. The 'smartness' of the BIT system in question involves the reduction of sensor counts via the use of multifunction sensors, the use of components as integral sensors, and the use of system design techniques which allow the verification of system function beyond component connectivity.

  13. Simple and Cooperatively Built Wave Motion Demonstrator

    ERIC Educational Resources Information Center

    Cortel, Adolf

    2006-01-01

    Some designs of simple wave demonstration devices have been described in this journal and elsewhere. A new simple model can be built using only dowels, binder clips, and loops of thread. Not only can it be easily assembled, stored, or disassembled, but also all the students in a class can cooperate in its building by connecting successive pieces…

  15. Schooling Built on the Multiple Intelligences

    ERIC Educational Resources Information Center

    Kunkel, Christine D.

    2009-01-01

    This article features a school built on multiple intelligences. As the first multiple intelligences school in the world, the Key Learning Community shapes its students' days to include significant time in the musical, spatial and bodily-kinesthetic intelligences, as well as the more traditional areas of logical-mathematical and linguistics. In…

  16. Interpreting Historic Sites & the Built Environment.

    ERIC Educational Resources Information Center

    Yellis, Ken, Ed.

    1985-01-01

    This issue focuses on the interpretation of built environments, from Washington Irving's 19th century home in Tarrytown, New York, to structures in contemporary Chicago. Barbara Carson, Margaret Piatt, and Renee Friedman discuss the interpretation of interior and exterior spaces and explain how to teach history with objects instead of teaching the…

  18. Built Environment Correlates of Walking: A Review

    PubMed Central

    Saelens, Brian E.; Handy, Susan L.

    2010-01-01

    Introduction The past decade has seen a dramatic increase in empirical investigation into the relations between the built environment and physical activity. To create places that facilitate and encourage walking, practitioners need an understanding of the specific characteristics of the built environment that correlate most strongly with walking. This paper reviews evidence on built environment correlates of walking. Method Included in this review were 13 reviews published between 2002 and 2006 and 29 original studies published from 2005 through May 2006. Results were summarized based on specific characteristics of the built environment and transportation walking versus recreational walking. Results Previous reviews and newer studies document consistent positive relations between walking for transportation and density, distance to non-residential destinations, and land use mix; findings for route/network connectivity, parks and open space, and personal safety are more equivocal. Results regarding recreational walking were less clear. Conclusions More recent evidence supports the conclusions of prior reviews, and new studies address some of the limitations of earlier studies. Although prospective studies are needed, evidence on correlates appears sufficient to support policy changes. PMID:18562973

  19. Bioinformatics for Diagnostics, Forensics, and Virulence Characterization and Detection

    SciTech Connect

    Gardner, S; Slezak, T

    2005-04-05

    We summarize four of our group's high-risk/high-payoff research projects funded by the Intelligence Technology Innovation Center (ITIC) in conjunction with our DHS-funded pathogen informatics activities. These are (1) quantitative assessment of genomic sequencing needs to predict high quality DNA and protein signatures for detection, and comparison of draft versus finished sequences for diagnostic signature prediction; (2) development of forensic software to identify SNP and PCR-RFLP variations from a large number of viral pathogen sequences and optimization of the selection of markers for maximum discrimination of those sequences; (3) prediction of signatures for the detection of virulence, antibiotic resistance, and toxin genes and genetic engineering markers in bacteria; (4) bioinformatic characterization of virulence factors to rapidly screen genomic data for potential genes with similar functions and to elucidate potential health threats in novel organisms. The results of (1) are being used by policy makers to set national sequencing priorities. Analyses from (2) are being used in collaborations with the CDC to genotype and characterize many variola strains, and reports from these collaborations have been made to the President. We also determined SNPs for serotype and strain discrimination of 126 foot and mouth disease virus (FMDV) genomes. For (3), currently >1000 probes have been predicted for the specific detection of >4000 virulence, antibiotic resistance, and genetic engineering vector sequences, and we expect to complete the bioinformatic design of a comprehensive "virulence detection chip" by August 2005. Results of (4) will be a system to rapidly predict potential virulence pathways and phenotypes in organisms based on their genomic sequences.
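    The core of the SNP-based strain discrimination described in project (2) can be sketched as scanning a multiple sequence alignment for columns where the sequences differ; those polymorphic columns are the candidate discriminating markers. The toy aligned sequences below are hypothetical illustrations, not FMDV or variola data.

    ```python
    def snp_positions(aligned_seqs):
        """Return 0-based column indices where aligned sequences disagree."""
        length = len(aligned_seqs[0])
        assert all(len(s) == length for s in aligned_seqs), "sequences must be aligned"
        # A column is a SNP candidate if more than one base occurs in it.
        return [i for i in range(length)
                if len({s[i] for s in aligned_seqs}) > 1]

    # Three hypothetical aligned strain sequences; columns 3 and 6 vary.
    strains = ["ACGTACGT", "ACGAACGT", "ACGTACCT"]
    print(snp_positions(strains))
    ```

    Real marker selection then optimizes which subset of such columns best separates the strains of interest, which is the combinatorial step the forensic software automates.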

  20. Bioinformatic Analysis of HIV-1 Entry and Pathogenesis

    PubMed Central

    Aiamkitsumrit, Benjamas; Dampier, Will; Antell, Gregory; Rivera, Nina; Martin-Garcia, Julio; Pirrone, Vanessa; Nonnemacher, Michael R.; Wigdahl, Brian

    2015-01-01

    The evolution of human immunodeficiency virus type 1 (HIV-1) with respect to co-receptor utilization has been shown to be relevant to HIV-1 pathogenesis and disease. The CCR5-utilizing (R5) virus has been shown to be important in the very early stages of transmission and highly prevalent during asymptomatic infection and chronic disease. In addition, the R5 virus has been proposed to be involved in neuroinvasion and central nervous system (CNS) disease. In contrast, the CXCR4-utilizing (X4) virus is more prevalent during the course of disease progression and concurrent with the loss of CD4+ T cells. The dual-tropic virus is able to utilize both co-receptors (CXCR4 and CCR5) and has been thought to represent an intermediate transitional virus that possesses properties of both X4 and R5 viruses that can be encountered at many stages of disease. The use of computational tools and bioinformatic approaches in the prediction of HIV-1 co-receptor usage has been growing in importance with respect to understanding HIV-1 pathogenesis and disease, developing diagnostic tools, and improving the efficacy of therapeutic strategies focused on blocking viral entry. Current strategies have enhanced the sensitivity, specificity, and reproducibility relative to the prediction of co-receptor use; however, these technologies need to be improved with respect to their efficient and accurate use across the HIV-1 subtypes. The most effective approach may center on the combined use of different algorithms involving sequences within and outside of the env-V3 loop. This review focuses on the HIV-1 entry process and on co-receptor utilization, including bioinformatic tools utilized in the prediction of co-receptor usage. It also provides novel preliminary analyses for enabling identification of linkages between amino acids in V3 with other components of the HIV-1 genome and demonstrates that these linkages are different between X4 and R5 viruses. PMID:24862329
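    One of the simplest sequence-based co-receptor predictors in this area is the classic "11/25 rule": a positively charged residue (R or K) at position 11 or 25 of the V3 loop suggests a CXCR4-using (X4) virus, otherwise R5 is predicted. The sketch below implements that heuristic for illustration; production tools use richer models, and the example V3 sequence is a subtype B consensus-like sequence used here only as test input.

    ```python
    def predict_coreceptor(v3_loop):
        """Apply the 11/25 rule to a 35-residue V3 amino acid sequence.

        Positions are 1-based, so positions 11 and 25 are indices 10 and 24.
        """
        pos11, pos25 = v3_loop[10], v3_loop[24]
        return "X4" if pos11 in "RK" or pos25 in "RK" else "R5"

    # Consensus-like R5 V3 loop (35 aa): S at position 11, E at position 25.
    r5_v3 = "CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC"
    print(predict_coreceptor(r5_v3))
    ```

    The review's point stands in this miniature: accuracy improves when positions beyond 11 and 25, and linkages outside the V3 loop, are folded into the predictor.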

  1. Bioinformatic approaches to augment study of epithelial-to-mesenchymal transition in lung cancer

    PubMed Central

    Beck, Tim N.; Chikwem, Adaeze J.; Solanki, Nehal R.

    2014-01-01

    Bioinformatic approaches are intended to provide systems level insight into the complex biological processes that underlie serious diseases such as cancer. In this review we describe current bioinformatic resources, and illustrate how they have been used to study a clinically important example: epithelial-to-mesenchymal transition (EMT) in lung cancer. Lung cancer is the leading cause of cancer-related deaths and is often diagnosed at advanced stages, leading to limited therapeutic success. While EMT is essential during development and wound healing, pathological reactivation of this program by cancer cells contributes to metastasis and drug resistance, both major causes of death from lung cancer. Challenges of studying EMT include its transient nature, its molecular and phenotypic heterogeneity, and the complicated networks of rewired signaling cascades. Given the biology of lung cancer and the role of EMT, it is critical to better align the two in order to advance the impact of precision oncology. This task relies heavily on the application of bioinformatic resources. Besides summarizing recent work in this area, we use four EMT-associated genes, TGF-β (TGFB1), NEDD9/HEF1, β-catenin (CTNNB1) and E-cadherin (CDH1), as exemplars to demonstrate the current capacities and limitations of probing bioinformatic resources to inform hypothesis-driven studies with therapeutic goals. PMID:25096367

  2. Design of Wrapper Integration Within the DataFoundry Bioinformatics Application

    SciTech Connect

    Anderson, J; Critchlow, T

    2002-08-20

    The DataFoundry bioinformatics application was designed to enable scientists to interact directly with large datasets, gathered from multiple remote data sources, through a graphical, interactive interface. Gathering information from multiple data sources, integrating that data, and providing an interface to the accumulated data is non-trivial, and advanced techniques are required to develop an adequate solution. One possible approach uses specialized information access programs that retrieve information and transform it into a form usable by a single application. These information access programs, called wrappers, were judged the most appropriate way to extend the DataFoundry bioinformatics application to support data integration from multiple sources. By adding wrapper support to DataFoundry, we aim to provide scientists with a single access point to bioinformatics data. We describe the underlying computer science concepts, the design and implementation of wrapper support in the DataFoundry bioinformatics application, and issues of performance.
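    The wrapper idea described above can be sketched as a set of small adapter classes, each translating one remote source's native record format into the single schema the application consumes. The source formats, class names, and fields below are hypothetical illustrations, not DataFoundry's actual design.

    ```python
    class PipeDelimitedWrapper:
        """Wraps a hypothetical source that returns 'ID|ORGANISM' strings."""
        def fetch(self, raw):
            ident, organism = raw.split("|")
            return {"id": ident, "species": organism}

    class TabularWrapper:
        """Wraps a hypothetical source that returns tab-separated (species, id) rows."""
        def fetch(self, raw):
            species, ident = raw.split("\t")
            return {"id": ident, "species": species}

    def integrate(wrapped_sources):
        """Single access point: normalize records from every wrapped source."""
        return [wrapper.fetch(raw) for wrapper, raw in wrapped_sources]

    records = integrate([(PipeDelimitedWrapper(), "AB123|Homo sapiens"),
                         (TabularWrapper(), "Mus musculus\tXM456")])
    print([r["id"] for r in records])
    ```

    The design choice is the one the abstract motivates: the application sees one record shape, and all source-specific parsing is confined to the wrappers, so adding a source means adding one class rather than touching the integration layer.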

  3. [Application of bioinformatics in researches of industrial biocatalysis].

    PubMed

    Yu, Hui-Min; Luo, Hui; Shi, Yue; Sun, Xu-Dong; Shen, Zhong-Yao

    2004-05-01

    Industrial biocatalysis is attracting much attention as a means to rebuild or replace traditional processes for producing chemicals and drugs. A key focus of industrial biocatalysis is the biocatalyst itself, usually a microbial enzyme. In recent years, bioinformatics technologies have played, and will continue to play, increasingly significant roles in industrial biocatalysis research in response to the genomic revolution. One key application of bioinformatics in biocatalysis is the discovery and identification of new biocatalysts through advanced DNA and protein sequence searches, comparisons, and analyses of Internet databases using different algorithms and software. Unknown genes of microbial enzymes can also be readily recovered by primer design based on bioinformatics analyses. The other key application is the modification and improvement of existing industrial biocatalysts. In this respect, bioinformatics is of great importance in both the rational design and the directed evolution of microbial enzymes. When the tertiary structure of an enzyme can be predicted with bioinformatics tools, subsequent experiments, such as site-directed mutagenesis, fusion protein construction, DNA family shuffling, and saturation mutagenesis, are usually highly efficient. In all, bioinformatics will be an essential tool for biologists and biological engineers in future research on industrial biocatalysis, owing to its role in guiding and accelerating the discovery and improvement of novel biocatalysts.
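    Primer design of the kind the abstract mentions rests on simple sequence calculations. A minimal example is the Wallace rule, a standard quick estimate of primer melting temperature for short oligonucleotides (roughly 14 to 20 nt): Tm ≈ 2(A+T) + 4(G+C) degrees C. The primer sequence below is hypothetical.

    ```python
    def wallace_tm(primer):
        """Estimate melting temperature (deg C) of a short primer via the
        Wallace rule: Tm = 2*(A+T) + 4*(G+C)."""
        primer = primer.upper()
        at = primer.count("A") + primer.count("T")
        gc = primer.count("G") + primer.count("C")
        return 2 * at + 4 * gc

    # Hypothetical 15-mer primer: 7 A/T and 8 G/C residues.
    print(wallace_tm("ATGGCGTACGTTAGC"))
    ```

    More accurate nearest-neighbor thermodynamic models exist, but this rule of thumb illustrates how directly such design steps follow from the raw sequence.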

  4. Scalability and Validation of Big Data Bioinformatics Software.

    PubMed

    Yang, Andrian; Troup, Michael; Ho, Joshua W K

    2017-01-01

    This review examines two important aspects that are central to modern big data bioinformatics analysis - software scalability and validity. We argue that not only are the issues of scalability and validation common to all big data bioinformatics analyses, they can be tackled by conceptually related methodological approaches, namely divide-and-conquer (scalability) and multiple executions (validation). Scalability is defined as the ability for a program to scale based on workload. It has always been an important consideration when developing bioinformatics algorithms and programs. Nonetheless the surge of volume and variety of biological and biomedical data has posed new challenges. We discuss how modern cloud computing and big data programming frameworks such as MapReduce and Spark are being used to effectively implement divide-and-conquer in a distributed computing environment. Validation of software is another important issue in big data bioinformatics that is often ignored. Software validation is the process of determining whether the program under test fulfils the task for which it was designed. Determining the correctness of the computational output of big data bioinformatics software is especially difficult due to the large input space and complex algorithms involved. We discuss how state-of-the-art software testing techniques that are based on the idea of multiple executions, such as metamorphic testing, can be used to implement an effective bioinformatics quality assurance strategy. We hope this review will raise awareness of these critical issues in bioinformatics.
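    The metamorphic testing idea the review describes can be made concrete: when no oracle gives the one "correct" output, we instead assert relations that must hold between outputs of multiple executions. The function under test below, a toy allele-frequency calculator, and its two metamorphic relations are illustrative assumptions, not taken from any reviewed software.

    ```python
    import random

    def allele_frequency(genotypes):
        """Fraction of alternate alleles in a list of diploid genotypes (0/1/2)."""
        return sum(genotypes) / (2 * len(genotypes))

    def metamorphic_check(genotypes):
        base = allele_frequency(genotypes)
        # Relation 1: permuting the samples must not change the frequency.
        shuffled = genotypes[:]
        random.shuffle(shuffled)
        assert abs(allele_frequency(shuffled) - base) < 1e-12
        # Relation 2: duplicating the whole cohort must not change the frequency.
        assert abs(allele_frequency(genotypes * 2) - base) < 1e-12
        return True

    print(metamorphic_check([0, 1, 2, 1, 0, 2]))
    ```

    Divide-and-conquer scalability pairs naturally with the same relations: if a computation is split across chunks MapReduce-style, the merged result can be checked against a single-machine run on a small input using exactly these kinds of invariants.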

  5. Continuing Education Workshops in Bioinformatics Positively Impact Research and Careers.

    PubMed

    Brazas, Michelle D; Ouellette, B F Francis

    2016-06-01

    Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.

  6. Measuring the Built Environment for Physical Activity

    PubMed Central

    Brownson, Ross C.; Hoehner, Christine M.; Day, Kristen; Forsyth, Ann; Sallis, James F.

    2009-01-01

    Physical inactivity is one of the most important public health issues in the U.S. and internationally. Increasingly, links are being identified between various elements of the physical—or built—environment and physical activity. To understand the impact of the built environment on physical activity, the development of high-quality measures is essential. Three categories of built environment data are being used: (1) perceived measures obtained by telephone interview or self-administered questionnaires; (2) observational measures obtained using systematic observational methods (audits); and (3) archival data sets that are often layered and analyzed with GIS. This review provides a critical assessment of these three types of built-environment measures relevant to the study of physical activity. Among perceived measures, 19 questionnaires were reviewed, ranging in length from 7 to 68 questions. Twenty audit tools were reviewed that cover community environments (i.e., neighborhoods, cities), parks, and trails. For GIS-derived measures, more than 50 studies were reviewed. A large degree of variability was found in the operationalization of common GIS measures, which include population density, land-use mix, access to recreational facilities, and street pattern. This first comprehensive examination of built-environment measures demonstrates considerable progress over the past decade, showing diverse environmental variables available that use multiple modes of assessment. Most can be considered first-generation measures, so further development is needed. In particular, further research is needed to improve the technical quality of measures, understand the relevance to various population groups, and understand the utility of measures for science and public health. PMID:19285216
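    One of the GIS-derived measures the review flags as inconsistently operationalized, land-use mix, is commonly computed as an entropy score: mix = -Σ p_i ln(p_i) / ln(k), where p_i is the share of land area in use i and k is the number of use categories present; 0 means a single use and 1 a perfectly even mix. The formula is the standard one in this literature; the example land areas are hypothetical.

    ```python
    import math

    def land_use_mix(areas):
        """Entropy-based land-use mix score in [0, 1] from per-use land areas."""
        total = sum(areas)
        shares = [a / total for a in areas if a > 0]
        if len(shares) <= 1:
            return 0.0  # a single land use has zero mix by definition
        # Normalize Shannon entropy by ln(k), k = number of uses present.
        return -sum(p * math.log(p) for p in shares) / math.log(len(shares))

    print(round(land_use_mix([40, 30, 30]), 3))  # fairly even three-use mix
    print(land_use_mix([100, 0, 0]))             # single use
    ```

    The review's point about operationalization shows up even here: results shift depending on whether k counts all categories in the classification scheme or only those present in the buffer, so studies using "the same" measure may not be comparable.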

  7. Online Tools for Bioinformatics Analyses in Nutrition Sciences

    PubMed Central

    Malkaram, Sridhar A.; Hassan, Yousef I.; Zempleni, Janos

    2012-01-01

    Recent advances in “omics” research have resulted in the creation of large datasets that were generated by consortiums and centers, small datasets that were generated by individual investigators, and bioinformatics tools for mining these datasets. It is important for nutrition laboratories to take full advantage of the analysis tools to interrogate datasets for information relevant to genomics, epigenomics, transcriptomics, proteomics, and metabolomics. This review provides guidance regarding bioinformatics resources that are currently available in the public domain, with the intent to provide a starting point for investigators who want to take advantage of the opportunities provided by the bioinformatics field. PMID:22983844

  8. Thriving in Multidisciplinary Research: Advice for New Bioinformatics Students

    PubMed Central

    Auerbach, Raymond K.

    2012-01-01

    The sciences have seen a large increase in demand for students in bioinformatics and multidisciplinary fields in general. Many new educational programs have been created to satisfy this demand, but navigating these programs requires a non-traditional outlook and emphasizes working in teams of individuals with distinct yet complementary skill sets. Written from the perspective of a current bioinformatics student, this article seeks to offer advice to prospective and current students in bioinformatics regarding what to expect in their educational program, how multidisciplinary fields differ from more traditional paths, and decisions that they will face on the road to becoming successful, productive bioinformaticists. PMID:23012580

  9. Thriving in multidisciplinary research: advice for new bioinformatics students.

    PubMed

    Auerbach, Raymond K

    2012-09-01

    The sciences have seen a large increase in demand for students in bioinformatics and multidisciplinary fields in general. Many new educational programs have been created to satisfy this demand, but navigating these programs requires a non-traditional outlook and emphasizes working in teams of individuals with distinct yet complementary skill sets. Written from the perspective of a current bioinformatics student, this article seeks to offer advice to prospective and current students in bioinformatics regarding what to expect in their educational program, how multidisciplinary fields differ from more traditional paths, and decisions that they will face on the road to becoming successful, productive bioinformaticists.

  10. Built-Environment Wind Turbine Roadmap

    SciTech Connect

    Smith, J.; Forsyth, T.; Sinclair, K.; Oteri, F.

    2012-11-01

    Although only a small contributor to total electricity production needs, built-environment wind turbines (BWTs) nonetheless have the potential to influence the public's consideration of renewable energy, and wind energy in particular. Higher population concentrations in urban environments offer greater opportunities for project visibility and an opportunity to acquaint large numbers of people to the advantages of wind projects on a larger scale. However, turbine failures will be equally visible and could have a negative effect on public perception of wind technology. This roadmap provides a framework for achieving the vision set forth by the attendees of the Built-Environment Wind Turbine Workshop on August 11 - 12, 2010, at the U.S. Department of Energy's National Renewable Energy Laboratory. The BWT roadmap outlines the stakeholder actions that could be taken to overcome the barriers identified. The actions are categorized as near-term (0 - 3 years), medium-term (4 - 7 years), and both near- and medium-term (requiring immediate to medium-term effort). To accomplish these actions, a strategic approach was developed that identifies two focus areas: understanding the built-environment wind resource and developing testing and design standards. The authors summarize the expertise and resources required in these areas.

  11. 2. EAST ELEVATION OF IPA FACTORY; TWO-STORY SECTION BUILT IN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. EAST ELEVATION OF IPA FACTORY; TWO-STORY SECTION BUILT IN 1892 AND PARTIALLY DESTROYED PARAPET SECTION BUILT CA. 1948. BRICK CHIMNEY ALSO BUILT CA. 1948. - Illinois Pure Aluminum Company, 109 Holmes Street, Lemont, Cook County, IL

  12. One Bedroom Units: Floor Plan, South Elevation (As Built), North ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    One Bedroom Units: Floor Plan, South Elevation (As Built), North Elevation (As Built), Section A-A (As Built), Section AA (Existing) - Aluminum City Terrace, East Hill Drive, New Kensington, Westmoreland County, PA

  13. [piRNAs: Biology and Bioinformatics].

    PubMed

    Zharikova, A A; Mironov, A A

    2016-01-01

    The discovery of small noncoding RNAs and their roles in a variety of regulatory mechanisms has led many scientists to view the principles of cell function from a completely different perspective. Small RNA molecules play key roles in important processes such as co- and posttranscriptional regulation of gene expression, epigenetic modification of DNA and histones, and antiviral protection. piRNAs constitute one of the most numerous, yet least studied, classes of small noncoding RNAs. piRNAs are highly expressed in the germ line of most eukaryotes, and their main function is to regulate the activity of mobile elements during embryonic development. Moreover, recent studies have revealed moderate piRNA activity in somatic cells. However, the mechanisms of piRNA biogenesis and function are still poorly understood and are the object of intensive research. This review presents current information on the biogenesis and various functions of piRNAs, as well as the bioinformatic aspects of this field of molecular biology.

  14. Bioinformatics of the TULIP domain superfamily.

    PubMed

    Kopec, Klaus O; Alva, Vikram; Lupas, Andrei N

    2011-08-01

    Proteins of the BPI (bactericidal/permeability-increasing protein)-like family contain either one or two tandem copies of a fold that usually provides a tubular cavity for the binding of lipids. Bioinformatic analyses show that, in addition to its known members, which include BPI, LBP [LPS (lipopolysaccharide)-binding protein], CETP (cholesteryl ester-transfer protein), PLTP (phospholipid-transfer protein) and PLUNC (palate, lung and nasal epithelium clone) protein, this family also includes other, more divergent groups containing hypothetical proteins from fungi, nematodes and deep-branching unicellular eukaryotes. More distantly, BPI-like proteins are related to a family of arthropod proteins that includes hormone-binding proteins (Takeout-like; previously described to adopt a BPI-like fold), allergens and several groups of uncharacterized proteins. At even greater evolutionary distance, BPI-like proteins are homologous with the SMP (synaptotagmin-like, mitochondrial and lipid-binding protein) domains, which are found in proteins associated with eukaryotic membrane processes. In particular, SMP domain-containing proteins of yeast form the ERMES [ER (endoplasmic reticulum)-mitochondria encounter structure], required for efficient phospholipid exchange between these organelles. This suggests that SMP domains themselves bind lipids and mediate their exchange between heterologous membranes. The most distant group of homologues we detected consists of uncharacterized animal proteins annotated as TM (transmembrane) 24. We propose to group these families together into one superfamily that we term the TULIP (tubular lipid-binding) domain superfamily.

  15. Bioinformatic tools for microRNA dissection

    PubMed Central

    Akhtar, Most Mauluda; Micolucci, Luigina; Islam, Md Soriful; Olivieri, Fabiola; Procopio, Antonio Domenico

    2016-01-01

    Recently, microRNAs (miRNAs) have emerged as important elements of gene regulatory networks. MiRNAs are endogenous single-stranded non-coding RNAs (∼22-nt long) that regulate gene expression at the post-transcriptional level. Through pairing with mRNA, miRNAs can down-regulate gene expression by inhibiting translation or stimulating mRNA degradation. In some cases they can also up-regulate the expression of a target gene. MiRNAs influence a variety of cellular pathways that range from development to carcinogenesis. The involvement of miRNAs in several human diseases, particularly cancer, makes them potential diagnostic and prognostic biomarkers. Recent technological advances, especially high-throughput sequencing, have led to an exponential growth in the generation of miRNA-related data. A number of bioinformatic tools and databases have been devised to manage this growing body of data. We analyze 129 miRNA tools that are being used in diverse areas of miRNA research to assist investigators in choosing the most appropriate tools for their needs. PMID:26578605
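    The miRNA-mRNA pairing described above is usually screened first via the "seed": miRNA nucleotides 2-7 matching the target site by reverse complementarity. A minimal sketch in plain Python; the `seed_match_sites` helper and both sequences are invented for illustration and are not drawn from any of the 129 surveyed tools:

```python
def seed_match_sites(mirna, utr):
    """Return start positions in a 3'UTR (5'->3', RNA alphabet) that are
    reverse-complementary to the miRNA seed (nucleotides 2-7)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:7]                                  # nucleotides 2-7 (0-based slice)
    site = "".join(comp[n] for n in reversed(seed))    # reverse complement of the seed
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

# toy example: seed "GAGGUA" -> target site "UACCUC"
print(seed_match_sites("UGAGGUAGAA", "AAAUACCUCGGG"))  # → [3]
```

    Real tools layer much more on top of this test (site context, conservation, thermodynamics), which is precisely why the survey compares 129 of them.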

  16. Bioinformatics study of the mangrove actin genes

    NASA Astrophysics Data System (ADS)

    Basyuni, M.; Wasilah, M.; Sumardi

    2017-01-01

    This study describes bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank, and to predict their structure, composition, subcellular localization, similarity, and phylogeny. The physical and chemical properties of the eight mangrove actin genes varied among the genes. The percentages of secondary-structure elements followed the order α-helix > random coil > extended chain for BgActl, KcActl, RsActl, and A. corniculatum Act; for the remaining actin genes the order was random coil > extended chain > α-helix. Prediction of secondary structure was therefore performed to obtain the necessary structural information. The predicted scores for chloroplast transit peptides, mitochondrial targeting peptides, and secretory signal peptides were all very small, indicating that the mangrove actin genes carry no chloroplast or mitochondrial transit peptide and no signal peptide of the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second and largest cluster consists of five actin genes; and the last branch consists of one gene, B. sexangula Act. The present study therefore supports previous results showing that plant actin genes form distinct clusters in the tree.
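    Physicochemical profiling of the kind reported above is commonly done with ProtParam-style calculations over the translated protein sequence. A minimal sketch, assuming a plain one-letter amino-acid string; the Kyte-Doolittle hydropathy scale is standard, but the `gravy` and `composition` helpers and the test peptide are illustrative, not taken from the study:

```python
from collections import Counter

# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def gravy(seq):
    """Grand average of hydropathy (GRAVY): mean Kyte-Doolittle value."""
    return sum(KD[aa] for aa in seq.upper()) / len(seq)

def composition(seq):
    """Fractional amino-acid composition of the sequence."""
    counts = Counter(seq.upper())
    return {aa: n / len(seq) for aa, n in counts.items()}

# hydrophobic toy peptide: strongly positive GRAVY
print(round(gravy("AILV"), 3))  # → 3.575
```

    The same per-residue bookkeeping underlies the instability, aliphatic-index, and charge statistics such studies report.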

  17. Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.

    PubMed

    Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon

    2017-05-31

    Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, LDA reveals only a plain topic structure; it does not relate topics to metadata such as authors, publication date, and journals. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model incorporates the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bags-of-words. For analysis, we collected forty-six bioinformatics journals from the MEDLINE database via PubMed. We conducted time-series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics, analyzing the ACT model results in each period. Additionally, for further integrated analysis, we conducted a time-series analysis of the top-ranked keyphrases, journals, and authors according to their frequency. We also examined patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics more clearly represented. Our analysis implies that over time the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology. These results are …
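    As a point of reference for the LDA baseline mentioned above, a toy collapsed Gibbs sampler in plain Python shows how topic-word (phi) and document-topic (theta) distributions are estimated from token counts; the ACT model extends this machinery by additionally conditioning assignments on authors and venues. The tiny corpus and all parameter values here are invented for illustration:

```python
import random

def lda_gibbs(docs, n_topics, n_iter=50, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA on tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    w2id = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):             # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w2id[w]] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wi = z[d][i], w2id[w]
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # full conditional p(z=k | everything else)
                weights = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                for t, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wi] += 1; nk[k] += 1
    phi = [[(nkw[k][v] + beta) / (nk[k] + V * beta) for v in range(V)]
           for k in range(n_topics)]
    theta = [[(ndk[d][k] + alpha) / (len(docs[d]) + n_topics * alpha)
              for k in range(n_topics)] for d in range(len(docs))]
    return vocab, phi, theta

docs = [["gene", "expression", "microarray", "gene"],
        ["topic", "model", "topic", "inference"],
        ["gene", "sequence", "alignment"]]
vocab, phi, theta = lda_gibbs(docs, n_topics=2)
```

    On a real corpus one would of course use an optimized library implementation; the point here is only the count-and-resample structure that ACT-style models build on.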

  18. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines.

    PubMed

    Cieślik, Marcin; Mura, Cameron

    2011-02-25

    Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.
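    The dataflow idea described above, re-usable components joined by data-pipes with lazy, batched evaluation, can be sketched with plain Python generators. This is an illustrative stand-in, not the PaPy API: `pipe`, `mapper`, and `batches` are hypothetical helpers, and for real parallelism one would replace a generator stage with something like `multiprocessing.Pool.imap`:

```python
from itertools import islice

def mapper(fn):
    """Wrap a per-item function as a pipeline stage (a generator component)."""
    def stage(upstream):
        for item in upstream:
            yield fn(item)
    return stage

def batches(size):
    """Group items into fixed-size batches, akin to PaPy's adjustable batching."""
    def stage(upstream):
        it = iter(upstream)
        while True:
            chunk = list(islice(it, size))
            if not chunk:
                return
            yield chunk
    return stage

def pipe(*stages):
    """Connect stages into a linear data-pipe; evaluation stays lazy."""
    def run(items):
        stream = iter(items)
        for stage in stages:
            stream = stage(stream)
        return stream
    return run

# toy workflow: normalize case, compute GC fraction, emit results in pairs
gc = lambda s: (s.count("G") + s.count("C")) / len(s)
workflow = pipe(mapper(str.upper), mapper(gc), batches(2))
print(list(workflow(["acgt", "gggc", "atat"])))  # → [[0.5, 1.0], [0.0]]
```

    Because every stage is a generator, no intermediate list is materialized until the final `list()` call, which is the lazy-evaluation/memory trade-off the abstract refers to.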

  19. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    PubMed Central

    2011-01-01

    Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage examples.

  20. The eBioKit, a stand-alone educational platform for bioinformatics.

    PubMed

    Hernández-de-Diego, Rafael; de Villiers, Etienne P; Klingström, Tomas; Gourlé, Hadrien; Conesa, Ana; Bongcam-Rudloff, Erik

    2017-09-01

    Bioinformatics skills have become essential for many research areas; however, the availability of qualified researchers usually falls short of demand, and training to increase the number of able bioinformaticians is an important task for the bioinformatics community. When conducting training or hands-on tutorials, a lack of control over the analysis tools and repositories often causes problems, as unavailable online tools or version conflicts may delay, complicate, or even prevent the successful completion of a training event. The eBioKit is a stand-alone educational platform that hosts numerous tools and databases for bioinformatics research and allows training to take place in a controlled environment. A key advantage of the eBioKit over other existing teaching solutions is that all the required software and databases are installed locally on the system, significantly reducing dependence on the internet. Furthermore, the architecture of the eBioKit has proved to be an excellent balance between portability and performance, not only making the eBioKit an exceptional educational tool but also providing small research groups with a platform to incorporate bioinformatics analysis into their research. As a result, the eBioKit has formed an integral part of training and research performed by a wide variety of universities and organizations, such as the Pan African Bioinformatics Network (H3ABioNet), part of the Human Heredity and Health in Africa (H3Africa) initiative; the Southern Africa Network for Biosciences (SAnBio) initiative; the Biosciences eastern and central Africa (BecA) hub; and the International Glossina Genome Initiative.